Wednesday, June 23, 2021

Digest for comp.lang.c++@googlegroups.com - 5 updates in 2 topics

Andrey Tarasevich <andreytarasevich@hotmail.com>: Jun 23 10:17AM -0700

On 6/11/2021 10:55 AM, SIMON wrote:
 
>> myStack.emplace(30);
 
> It does the same thing as far as I can see but surely there must be
> something that I don't know about these two methods.
 
Without the exact type of 'myStack' the question is meaningless and does
not have an answer.
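
For what it's worth: assuming 'myStack' is a std::stack of some class type
(a guess, since the original post doesn't say), the difference shows up in
how the element gets constructed. A minimal sketch:

    #include <stack>
    #include <string>

    struct Widget {
        int id;
        std::string name;
        Widget(int i, std::string n) : id(i), name(std::move(n)) {}
    };

    int main()
    {
        std::stack<Widget> myStack;

        // push() takes an already-constructed Widget: a temporary is built
        // here and then moved (or copied) into the underlying container.
        myStack.push(Widget(30, "thirty"));

        // emplace() forwards its arguments to Widget's constructor, so the
        // element is constructed in place, without the temporary.
        myStack.emplace(30, "thirty");
    }

For a std::stack<int>, push(30) and emplace(30) end up doing essentially the
same thing, which is exactly why the answer depends on the exact type.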
 
--
Best regards,
Andrey Tarasevich
Juha Nieminen <nospam@thanks.invalid>: Jun 23 06:19AM


> Why do you move the goalposts? If something is stated to be formally UB then
> it is. It does not matter that it appears to work in actual implementations,
> as in any next patch it may stop working.
 
How is asking for an example "moving the goalposts"? Do you even know what
that expression means?
 
My request is prompted by your statement that you are keeping to C++14
because many things are not UB there that are in C++17. I'm asking for
an example where specifying -std=c++14 is actually useful, and
results in code that works, while the same code does not work with
-std=c++17.
 
*How* are you keeping to C++14? By doing what? By avoiding what?
How are you enforcing it? If you, for example, write a library or
something, do you check that the C++ version is at most C++14 and issue
an #error otherwise? What exactly are you doing to prevent your code
from being compiled as C++17? If you are explicitly using the
-std=c++14 compiler option, in which situation does that make a
difference?
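
(For reference, such a guard is simple to write. A minimal sketch, assuming
the compiler reports the language version through __cplusplus, which MSVC
only does accurately with /Zc:__cplusplus:

    // Refuse to build as anything newer than C++14.
    #if __cplusplus > 201402L
    #error "Only validated as C++14; do not compile as C++17 or later."
    #endif

201402L is the value a conforming C++14 compiler defines for __cplusplus.)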
"Öö Tiib" <ootiib@hot.ee>: Jun 23 04:18AM -0700

On Wednesday, 23 June 2021 at 09:19:18 UTC+3, Juha Nieminen wrote:
> > any next patch it may stop working.
 
> How is asking for an example "moving the goalposts"? Do you even know what
> that expression means?
 
I did complain that several things that were used in a lot of code, including
in the code of widely used open source libraries, were suddenly changed and
even made into undefined behavior by C++17. You started to ask for compilers
that do something different because of C++17. I gave an example of code that
silently changed its behavior. Now you also require compilers that somehow
manifest the added undefined behaviors in different ways. That is an odd
request, as I avoid writing undefined behavior and avoid using tools under
which my code has undefined behavior. Prominent names like Sean Parent and
Dave Abrahams have ranted about it and even promised to gang up to work
something out.
 
> an example where specifying -std=c++14 is actually useful, and
> results in code that works, while the same code does not work with
> -std=c++17.
 
Yes, I gave one example of that. Also, the silent behavior-changing
changes were "applied retroactively to previously published C++ standards".
So even newer compilers in C++14 mode do not behave as C++14 was
published. But others can be happy about what is going on with C++, a
language that I once liked.

> *How* are you keeping to C++14? By doing what? By avoiding what?
> How are you enforcing it?
 
Simply by using particular versions of the tools in the tool-chain. That has
always been so. Any change can cause problems totally unrelated to what we
are discussing now. For example, an operating system update by the
manufacturer can be malicious, as Apple has aptly demonstrated, so it must
be evaluated.
 
> from being compiled as C++17? If you are explicitly using the
> -std=c++14 compiler option, in which situation does that make a
> difference?
 
Yes, I issue diagnostics when anything is changed to platforms/versions
where the code hasn't been tested. It is the only sane thing to do with
products whose correctness of behavior matters at all. It is orthogonal to
what we are discussing now. I'm just an engineer who makes things work even
on defective platforms and with defective tools, but C++17 is too defective.
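
(A minimal sketch of one such diagnostic, assuming a GCC-style tool-chain;
the #warning directive is a common extension rather than standard C++14, and
the pinned version below is made up for illustration:

    // Warn when built with a tool-chain version the code has not been
    // tested against (here: anything other than GCC 9.x).
    #if !defined(__GNUC__) || __GNUC__ != 9
    #warning "Untested compiler version: behavior has not been validated here."
    #endif
)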
scott@slp53.sl.home (Scott Lurndal): Jun 23 02:32PM

>> point of the test actually is.
 
>You are right. I only did the test because Sam claimed an atomic is just
>as fast as a plain int
 
In the general case, an atomic access will be just
as fast as a non-atomic access to the same location.
 
It is only when the atomic is
contended for by multiple agents (e.g. cores/threads) that there
will be a potential performance impact on the application as
other threads contend for the cache line.
 
In both cases, the target location will be cached[*] in a cache
local to the core/thread performing the access. The atomic
RMW operation is required to ensure that the cache line isn't
evicted or invalidated between the R and W, however absent
contention, the RMW performs better than separate load/inc/store
operations as the RMW operation is actually performed by the
cache.
 
Unaligned "atomic" accesses that cross cache lines should be
avoided (and generally aren't easy to do from high level languages).
 
 
[*] Assuming standard user-mode code where the memory type is
marked as cacheable (write-back or write-through) in the MTRR
(Intel) or MAIR (ARM) registers.
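
A rough way to measure the uncontended case is a single-threaded loop, so the
atomic is never contended. This is only a sketch: an optimizer may fold the
plain loop into a single addition, so treat the numbers as indicative only.

    #include <atomic>
    #include <chrono>
    #include <cstdio>

    int main()
    {
        constexpr long N = 100000000;
        int plain = 0;
        std::atomic<int> shared{0};

        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            ++plain;                                        // ordinary load/inc/store
        auto t1 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            shared.fetch_add(1, std::memory_order_relaxed); // uncontended atomic RMW
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("plain:  %ld ms (result %d)\n",
                    (long)std::chrono::duration_cast<ms>(t1 - t0).count(), plain);
        std::printf("atomic: %ld ms (result %d)\n",
                    (long)std::chrono::duration_cast<ms>(t2 - t1).count(), shared.load());
    }

Run the second loop from several threads at once and the cache-line
contention described above starts to dominate.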
David Brown <david.brown@hesbynett.no>: Jun 23 04:55PM +0200

On 23/06/2021 16:32, Scott Lurndal wrote:
>> as fast as a plain int
 
> In the general case, an atomic access will be just
> as fast as a non-atomic access to the same location.
 
That is going to depend on at least four things, perhaps more. (I'm
sure you know all these, but perhaps you were making assumptions that
aren't necessarily true.)
 
If the memory ordering is anything other than "relaxed", it's likely
that the compiler will have to generate synchronisation instructions.
For "sequential consistency" ordering, that can mean full flushes of
write buffers and pipeline stalls until these are complete. These can
lead to significant delays depending on the system and the instructions
- I seem to remember reading about figures of up to 200 cycles.
 
The ISA and processor architecture can make a difference. Processors
with strong memory ordering can have some of these delays simply due to
the locked bus operations, even when "relaxed" ordering is given. (On
the other hand, they are usually designed to reduce the cost of
sequentially consistent accesses.)
 
If the access is not just a read or a write, but an RMW operation, then
on many processors this requires bus locking of some sort,
compare-and-swap loops, load-store-exclusive loops, spin locks, or other
mechanisms to work correctly. The time taken here is much larger than
for accessing a plain int.
 
And if the object being accessed atomically is bigger than the processor
can handle as a single access (obviously this won't happen for an "int",
except on an 8-bit device), then again you'll need locks or loops of
some sort.
 
These all happen even if there is no contention for the variable at the
time.
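
A small sketch of the ordering point and the size point. The type names and
sizes here are made up, whether the 16-byte case is lock-free depends on the
target, and GCC may need -latomic to link the non-lock-free cases:

    #include <atomic>
    #include <cstdio>

    struct Pair   { long a, b;    };   // 16 bytes on a typical 64-bit target
    struct Triple { long a, b, c; };   // 24 bytes: wider than one atomic access

    std::atomic<int> counter{0};

    void bump_relaxed() { counter.fetch_add(1, std::memory_order_relaxed); }
    void bump_seq_cst() { counter.fetch_add(1); }  // seq_cst by default; can cost
                                                   // extra fencing on weakly
                                                   // ordered processors

    int main()
    {
        bump_relaxed();
        bump_seq_cst();

        std::atomic<Pair>   p{};
        std::atomic<Triple> t{};
        // Objects wider than the hardware can access atomically fall back
        // to a lock (or a CAS loop) inside the library implementation.
        std::printf("Pair   lock-free: %d\n", (int)p.is_lock_free());
        std::printf("Triple lock-free: %d\n", (int)t.is_lock_free());
    }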
 
 
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
