Tuesday, May 16, 2017

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

scott@slp53.sl.home (Scott Lurndal): May 16 12:38PM

> flush the instruction pipeline), statements that branch
> out of a loop can have..."
 
>When did the JMP instruction stop forcing a refill of the cache?
 
"Flush the instruction pipeline" != "refill the cache".
 
At any point in time, there may be a dozen or more instructions
_currently being executed_ in various stages of the processor
pipeline. In addition, the processor will use a branch predictor
to select the direction of a conditional branch in order to keep
the pipeline full. If the choice was poor, instructions speculatively
fetched and executed will need to be discarded from the pipeline, causing
a pipeline stall. This has nothing to do with the cache, but it
does have an effect on performance.
 
The advice given in the book above is incorrect in that it doesn't
account for the branch predictors, which are quite good in modern
processors (see, e.g. TAGE).
scott@slp53.sl.home (Scott Lurndal): May 16 12:40PM


>How does the CPU synchronize instructions which have been pre-fetched
>from now stale instruction data for an upcoming instruction that's
>already begun decoding for its pipeline, to then later signal without
 
All x86 processors (intel, amd) snoop the L1 cache, and if the
line changes, the pipeline is flushed.
 
Self-modifying code should be avoided on all processors, under all
circumstances (I'm not counting JIT as self-modifying in this context).
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 16 06:09AM -0700

On Tuesday, May 16, 2017 at 8:38:13 AM UTC-4, Scott Lurndal wrote:
> > out of a loop can have..."
 
> >When did the JMP instruction stop forcing a refill of the cache?
 
> "Flush the instruction pipeline" != "refill the cache".
 
I realize that. I said in a later message that I used the wrong
words in these cases. I apologize for the confusion. I meant
instruction pipeline at all points, not instruction cache.
 
Thank you,
Rick C. Hodgin
qak <q3k@mts.net.NOSPAM>: May 16 01:15PM

"Rick C. Hodgin" <rick.c.hodgin@gmail.com> wrote in
> unit mis-predicted the branch. By forcing a hard branch, it will
> then re-fill the instruction code in the pipeline, which will read
> the recently altered self-modifying code (if it existed).
 
Thanks for your thought.
I'm glad "it's not uncommon", so compiler writers must know about them.
scott@slp53.sl.home (Scott Lurndal): May 16 01:21PM

>> the recently altered self-modifying code (if it existed).
 
>Thanks for your thought.
>I'm glad "it's not uncommon", so compiler writer must know about them.
 
Fortunately, a JMP instruction will _not_ flush the pipeline, since
it has a 100% prediction rate.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: May 16 06:26AM -0700

On Tuesday, May 16, 2017 at 8:40:15 AM UTC-4, Scott Lurndal wrote:
> >already begun decoding for its pipeline, to then later signal without
> All x86 processors (intel, amd) snoop the L1 cache, and if the
> line changes, the pipeline is flushed.
 
I am aware that all processors snoop data writes and update the L1
instruction cache, but I am not aware that they will automatically
flush the pipeline if they detect a write to an address that's
already been decoded and is in the pipe.
 
 
I don't see in the IA-32/Intel64 architecture manual where it says
the pipeline will be flushed if it detects changes in the L1
instruction cache:
 
https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf
 
I would appreciate a reference to where this happens, because to my
current knowledge it does not automatically happen.
 
> Self-modifying code should be avoided on all processors, under all
> circumstances (I'm not counting JIT in as self-modifying in this
> context).
 
I have understood the reason why applications should avoid SMC is
that CPUs will not automatically flush the pipeline when they detect
changes. The L1 instruction cache will be updated, but that will
only affect the next pass through the code, when a new set of load-
and-decode operations takes place on those opcode bytes.
 
Thank you,
Rick C. Hodgin
fir <profesor.fir@gmail.com>: May 15 04:36PM -0700

On Tuesday, May 16, 2017 at 1:18:43 AM UTC+2, fir wrote:
 
> the question is whether gpu efficiency is as memory-bandwidth-bound as on the cpu - i suspect it may be more asm-bound there, and that could mean that programming in asm may be more profitable there, but also not necessarily - it may be bound to code organisation, which you could also do in gpu c, and then asm would not be so important; on the other side, knowing the details of a given gpu (which is related to asm) may be very important, which could again mean that asm is important - otherwise, the time of good coders is so expensive that people would rather buy more hardware instead, and so on.. it all needs a lot of expertise to answer ;c
 
> i am personally for both sides at once - i love both asm and c (c is really good and is such an improvement over assembly, but assembly is also very good)
 
> (yawn)
 
one could eventually ask whether it would be good to have such a big gap
between the efficiency obtainable in asm and the efficiency obtainable in c (as big as, say, 5x or 10x), or whether it would be better to have the gap as small as 10% or even zero
 
i cannot answer that.. it maybe depends on whether this asm-obtainable efficiency would be an additional speedup or just the normal baseline ;c
"Chris M. Thomasson" <invalid@invalid.invalid>: May 15 04:45PM -0700

On 5/15/2017 4:18 PM, fir wrote:
 
>> http://webpages.charter.net/appcore/fractal/webglx/ct_complex_field.html
 
>> Try to get asm on cpu to get similar performance characteristics.
 
> why i could try to get the same performance on 10 times weaker hardware? ;c
[...]
 
Well, GPU happens to work very well with regard to embarrassingly
parallel algorithms. Mandelbrot-style rendering is in that category. CPU, no.
fir <profesor.fir@gmail.com>: May 15 04:58PM -0700

On Tuesday, May 16, 2017 at 1:45:22 AM UTC+2, Chris M. Thomasson wrote:
> [...]
 
> Well, GPU happens to work very well with regard to embarrassingly
> parallel algorithms. Mandelbrot style is that category. CPU, no.
 
this goes off topic - the topic was more about programming in assembly imo,
im going to sleep
 
(personally i haven't had much time for assembly in recent seasons, but i have respect for it and i plan to refresh it /especially as i am coding my own x86 assembler right now)
Lynn McGuire <lynnmcguire5@gmail.com>: May 15 10:36PM -0500

On 5/8/2017 5:33 PM, jacobnavia wrote:
> general principle of programming languages... I have some doubts about
> its scope.
 
> :-)
 
The last time I wrote a full program in Assembler was CS 204 - IBM 370
Assembler in 1979. I used to have to hand convert some of our Fortran
code to Assembler back in the late 1980s using the first NDP compiler
for the 386 since that compiler had jump near / medium / far issues.
Lately, I have hand coded a few items in Assembler in our C++ code for
security obfuscation. All terrible Assembler but it works.
 
Lynn
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 16 06:20AM +0200

On 16-May-17 5:36 AM, Lynn McGuire wrote:
> since that compiler had jump near / medium / far issues. Lately, I
> have hand coded a few items in Assembler in our C++ code for
> security obfuscation. All terrible Assembler but it works.
 
Hey, take a look at <url: http://www.bbc.com/news/technology-39803425>
 
Quote:
 
> "This is the ultimate 'geek' dream assignment," said Doug Rohn, head
> of Nasa's transformative aeronautics concepts program that makes
> heavy use of the FUN3D code.
 
 
Well you have to be a US citizen to participate.
 
And I'm not. :(
 
 
Cheers!,
 
- Alf
"Chris M. Thomasson" <invalid@invalid.invalid>: May 15 09:31PM -0700

On 5/15/2017 8:36 PM, Lynn McGuire wrote:
> for the 386 since that compiler had jump near / medium / far issues.
> Lately, I have hand coded a few items in Assembler in our C++ code for
> security obfuscation. All terrible Assembler but it works.
 
Last time I wrote asm for some real work was back when I did not trust
the compiler to handle sensitive non-blocking algorithms. Here is a
taste of the pain:
 
http://webpages.charter.net/appcore/appcore/src/cpu/i686/ac_i686_masm_asm.html
 
http://webpages.charter.net/appcore/vzoom/refcount/refcount-ia32-masm.asm
 
This was for masm. I have versions for gas as well:
 
http://webpages.charter.net/appcore/vzoom/refcount/refcount-ia32-gcc.asm
 
http://webpages.charter.net/appcore/appcore/src/cpu/i686/ac_i686_gcc_asm.html
 
I love the new C/C++!
scott@slp53.sl.home (Scott Lurndal): May 16 12:42PM


>> Christian
 
>You won't find these programs on github. And anything you find on
>github in this area is not used by serious forecasters.
 
Most of the codes are available directly from NASA, for free. It is
pretty straightforward to download and build most of the models. The
vast majority are written in FORTRAN and use parallelization
features such as OpenMP, or are in C/C++ and use CUDA.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: May 16 01:19PM +0100

On Mon, 15 May 2017 23:09:01 +0200
 
> Please report it. If this is Visual C++ 2015 or earlier then you can
> use Microsoft Connect. I think there's a new error reporting site for
> Visual C++ 2017.
 
To the OP: if you were to _want_ the cast to work on other compilers,
you can use a C cast when initializing your Test class with the
dynamically allocated Derived type. It is pretty bad form (it breaks
the intended encapsulation), but it is valid.
 
It is an interesting thing that traversing a private inheritance graph
with the correct pointer offsetting (that is, as if by a static_cast) is
one of the few things that can only be done in C++ with a C cast, and
cannot be done with a static_cast (inaccessible base) nor
reinterpret_cast (undefined behavior except with standard layout
types) - see §5.4/4 of C++14 in describing the behaviour of C casts:
"The same semantic restrictions and behaviors apply, with the exception
that in performing a static_cast in the following situations the
conversion is valid even if the base class is inaccessible: ...".
 
Chris
Adam Badura <adam.f.badura@gmail.com>: May 15 11:48PM -0700

Let's consider simple example like this:
-----
constexpr auto few_first_primes = {2, 3, 5, 7, 11, 13, 17, 19};
-----
auto here deduces std::initializer_list, as expected. What is not expected, however, is its value type: it is not int but const int!
 
Following code confirms that:
-----
#include <initializer_list>
#include <type_traits>
 
constexpr auto few_first_primes = {2, 3, 5, 7, 11, 13, 17, 19};
 
static_assert(
    std::is_same<
        decltype(few_first_primes),
        std::initializer_list<int const> const
    >::value,
    "initializer_list holds const ints!"
);
-----
 
Now when you think about it that seems logical. After all those values are constants, right?
 
The problem, however, is that this makes the few_first_primes value far less usable! In particular, you can no longer use it with standard containers to initialize them or add values. The code below:
-----
std::vector<int> some_primes(few_first_primes);
-----
fails to compile with error:
-----
Test.cpp:15:49: error: could not convert '{few_first_primes}' from '<brace-enclosed initializer list>' to 'std::vector<int>'
std::vector<int> some_primes = {few_first_primes};
-----
Using "{few_first_primes}" or "= {few_first_primes}" initialization instead of "(few_first_primes)" fails to compile as well; the case above just gives the clearest error message (for the purposes of this discussion).
 
Also, we get a similar error with other functions taking an initializer_list:
-----
std::vector<int> some_primes;
some_primes.insert(some_primes.end(), few_first_primes);
-----
 
We can still use iterator-based approach:
-----
std::vector<int> some_primes {few_first_primes.begin(), few_first_primes.end()};
-----
but this doesn't seem as nice.
 
The problem seems to be that standard containers use initializer_list with value type fixed to their own value type. And since int and const int are not the same type we have a problem here.
 
Why aren't the standard containers more tolerant for the initializer_list value type, just as they are with iterators?
 
Code samples and error message are based on g++ (GCC) 6.3.0 with compilation flags being "-Wall -Wextra -pedantic -std=c++14".
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 16 08:56AM +0200

On 16-May-17 8:48 AM, Adam Badura wrote:
> -----
> auto deduces here std::initializer_list as expected. What is not
> expected however is its value type. It is not int but const int!
 
That's not unexpected: it's required (for a `std::initializer_list`).
 
And yes it's a sometimes annoying limitation.
 
Still initializer lists are great. :)
 
 
Cheers!,
 
- Alf
Adam Badura <adam.f.badura@gmail.com>: May 16 12:22AM -0700

> > expected however is its value type. It is not int but const int!
 
> That's not unexpected: it's required (for a `std::initializer_list`).
 
> And yes it's a sometimes annoying limitation.
 
So what prevents the standard containers from being more flexible with the accepted initializer_list types, just as they are with iterators? Why not change the specification in this area?
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 16 10:12AM +0200

On 16-May-17 9:22 AM, Adam Badura wrote:
 
> So what prevents the standard containers from being more flexible
> with the accepted initializer_list types? Just as it is flexible with
> iterators. Why not change specification in this area?
 
I agree that it would be nice to allow non-`const` items in
initializer lists, in particular so that items of class type could just
be `move`d instead of actually copied.
 
Regarding the rationale for the current spec, the `const`-ness, I'm not
sure.
 
It could be that if or when someone points that out we'll say "Aha! It
couldn't very well be otherwise, lest one would incur unreasonable
complexity and violate the principle of least surprise."
 
 
Cheers!,
 
- Alf
Adam Badura <adam.f.badura@gmail.com>: May 16 01:14AM -0700


> It could be that if or when someone points that out we'll say "Aha! It
> couldn't very well be otherwise, lest one would incur unreasonable
> complexity and violate the principle of least surprise."
 
OK, so where and how to "point that out"?
Adam Badura <adam.f.badura@gmail.com>: May 15 10:46PM -0700

According to what Google shows, on the newsgroup comp.lang.c++.moderated the last post is almost a year old now (it is from 23rd May 2016). (https://groups.google.com/forum/#!forum/comp.lang.c++.moderated)
 
I wrote a post myself almost a week ago and it wasn't published so far.
 
Is the comp.lang.c++.moderated officially closed now? Is there any replacement for it (other than this comp.lang.c++)?
"Chris M. Thomasson" <invalid@invalid.invalid>: May 15 10:52PM -0700

On 5/15/2017 10:46 PM, Adam Badura wrote:
> According to what Google shows one the newsgroup comp.lang.c++.moderated the last post is almost a year old now (it is from 23th May 2016). (https://groups.google.com/forum/#!forum/comp.lang.c++.moderated)
 
> I wrote a post myself almost a week ago and it wasn't published so far.
 
> Is the comp.lang.c++.moderated officially closed now? Is there any replacement for it (other than this comp.lang.c++)?
 
That is terrible news.
Ian Collins <ian-news@hotmail.com>: May 16 06:07PM +1200

On 05/16/17 05:46 PM, Adam Badura wrote:
 
> Is the comp.lang.c++.moderated officially closed now? Is there any
> replacement for it (other than this comp.lang.c++)?
 
In as much as any Usenet group can close, yes. The moderated group has
been dead for quite a while.
 
--
Ian
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 16 09:07AM +0200

On 16-May-17 7:46 AM, Adam Badura wrote:
 
> I wrote a post myself almost a week ago and it wasn't published so
> far.
 
> Is the comp.lang.c++.moderated officially closed now?
 
Alas.
 
Victor Bazarov, the last moderator holding fort, posted about it in this
group.
 
The free duct-tape-and-chewing-gum-based e-mail infrastructure for the
moderation process, adopted after some University retired the original
moderation servers, just stopped working.
 
 
> Is there any replacement for it (other than this comp.lang.c++)?
 
Not really, no.
 
But if you look over at the ISO C++ pages there is a link there
somewhere to a Google group that corresponds more or less to old
moderated comp.std.c++ (the clc++m sister group), except that you can't
file a DR by posting there, as you could in the original comp.std.c++.
 
And maybe we could create such a Google group corresponding to clc++m,
but I'm sorry, I don't have the capacity to front such effort.
 
 
On behalf of the clc++m moderators,
 
- Alf
Juha Nieminen <nospam@thanks.invalid>: May 16 06:08AM

> using namespace std;
 
Haven't we discussed this already?
 
That line doesn't make the code more readable. In fact, it does the
exact opposite.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 16 08:53AM +0200

On 16-May-17 8:08 AM, Juha Nieminen wrote:
> Alf P. Steinbach <alf.p.steinbach+usenet@gmail.com> wrote:
>> using namespace std;
 
> Haven't we discussed this already?
 
No, I can't say I remember that.
 
 
> That line doesn't make the code more readable. In fact, it does the
> exact opposite.
 
Works fine for me.
 
You seem to have a silly hangup about it. Speculation about why:
possibly you prefer absolute mechanical rules for programming, instead
of intelligence. I know many do. And this thing about not using the
quoted statement can work for beginners. Teaches them things and avoids
some problems. Plus it gives some of them a sense of group membership,
usually in fan-boy groups of some sort. But if absolute mechanical rules
worked well for programmers in general (they don't), not just beginners,
then you'd have been replaced by a robot by now. ;-)
 
All this said, this is the second ** NOISE **-posting in this thread,
and I think I'm on much firmer ground than above when I speculate about
why people would like to inject noise here.
 
Namely, to remove focus from some grave problems with the g++ compiler
and standard library implementation, such as its codecvt producing silly
big-endian wchar_t values in Windows, and that it does not work for e.g.
Norwegian even when correct endianness is specified.
 
After all, g++ is a perfect perfect perfect compiler, yes?
 
 
Cheers!,
 
- Alf