Thursday, February 18, 2021

Digest for comp.lang.c++@googlegroups.com - 17 updates in 5 topics

"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 13 01:24PM -0800

On 2/13/2021 6:56 AM, David Brown wrote:
>> no generic, satisfactory solutions.
 
> I think (correct me if I'm wrong) that every system will have a limit to
> the lock-free sizes they support.
 
You are correct. I remember a long time ago when CMPXCHG16B was first
introduced: it was _not_ guaranteed that every future 64-bit x86 would
support it, which is why there is a CPUID feature bit for it. On a
64-bit system this instruction gives you DWCAS, a double-width
(128-bit) compare-and-swap. IBM z arch has CDS for a DWCAS; iirc, CDS
is guaranteed to be on all z-arch machines.
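 
A quick way to see what a given implementation actually managed to do
is to ask std::atomic directly; a minimal sketch, assuming C++11 or
later:
 
#include <atomic>
#include <cstdint>
#include <cstdio>
 
struct alignas(16) dwcas_cell {
    std::uint64_t lo;
    std::uint64_t hi;
};
 
int main() {
    std::atomic<dwcas_cell> cell{};
    // Reports whether the implementation found a native double-width
    // primitive (e.g. CMPXCHG16B on x86-64) or fell back to a lock.
    std::printf("16-byte atomic lock-free: %d\n",
                static_cast<int>(cell.is_lock_free()));
}
 
(With GCC you may need -latomic, and a "0" here is exactly the
lock-based fallback being discussed.)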
 
For fun.... Check this oddball instruction out, David:
 
CMP8XCHG16 for the Itanium.
 
;^)
 
[...]
gazelle@shell.xmission.com (Kenny McCormack): Feb 18 10:09PM

In article <87r1lmcnnx.fsf@bsb.me.uk>,
 
>Is there a published paper describing this work? I thought the idea of
>a universal compiler had bitten the dust, so I'm curious to see what
>your approach is.
 
Incidentally, if neos actually did what is claimed, there would be no need
for a "new Python implementation from scratch", since neos could, in and of
itself, compile Python source code.
 
I.e., the simple existence of neos would obviate the need to ever implement
any language, ever again, for all time.
 
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/LadyChatterley
mickspud@potatofield.co.uk: Feb 13 10:34AM

On Fri, 12 Feb 2021 18:18:59 GMT
 
>>>wouldn't know this of course as you are fucking clueless anachronism.
 
>>Oh dear, someone tell the child about copy-on-write.
 
>copy-on-write and overcommit are two orthogonal concepts.
 
They're really not.
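 
They interact directly: cheap copy-on-write fork() is the textbook
reason kernels overcommit at all. A Linux-flavoured sketch of the
interaction (illustrative only, assuming the default heuristic
overcommit policy):
 
#include <sys/wait.h>
#include <unistd.h>
#include <cstdlib>
#include <cstring>
 
int main() {
    const std::size_t n = std::size_t(1) << 30; // 1 GiB
    char* p = static_cast<char*>(std::malloc(n));
    if (!p) return 1;
    std::memset(p, 1, n); // parent really backs the pages
    // fork() duplicates the address space copy-on-write: the kernel
    // promises a second gigabyte without reserving it, so the fork
    // succeeds even when free memory is scarce. Under strict
    // (no-overcommit) accounting, this same fork could fail ENOMEM.
    pid_t pid = fork();
    if (pid == 0) {
        p[0] = 2; // first write faults in exactly one private page
        _exit(0);
    }
    waitpid(pid, nullptr, 0);
    std::free(p);
    return 0;
}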
Nikki Locke <nikki@trumphurst.com>: Feb 14 11:23PM

Available C++ Libraries FAQ
 
URL: http://www.trumphurst.com/cpplibs/
 
This is a searchable list of libraries and utilities (both free
and commercial) available to C++ programmers.
 
If you know of a library which is not in the list, why not fill
in the form at http://www.trumphurst.com/cpplibs/cppsub.php
 
Maintainer: Nikki Locke - if you wish to contact me, please use the form on the website.
James Kuyper <jameskuyper@alumni.caltech.edu>: Feb 17 10:49PM -0500

On 2/17/21 4:07 PM, Keith Thompson wrote:
>> thought it was required to be defined.
 
> Do most implementations actually *define* (i.e., document) the behavior
> of passing a non-void* pointer with a %p format specifier?
 
I meant "define" only in the sense that they actually do something
useful, not that they've necessarily publicized that fact. In practice,
on systems where all pointer types have the same representation, they'd
have to go out of their way to make such code break.
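 
For concreteness, the two forms in question (a sketch):
 
#include <cstdio>
 
int main() {
    int x = 42;
    int* ip = &x;
    // Strictly conforming: %p is specified only for void *.
    std::printf("%p\n", static_cast<void*>(ip));
    // The case under discussion - not defined by the standard, but
    // works wherever all object pointers share one representation:
    // std::printf("%p\n", ip);
}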
James Kuyper <jameskuyper@alumni.caltech.edu>: Feb 17 10:57PM -0500

On 2/17/21 1:43 PM, Manfred wrote:
> On 2/17/2021 5:30 PM, David Brown wrote:
>> On 17/02/2021 17:00, Scott Lurndal wrote:
...
> The standard does not mandate the behaviour that is shown by your
> example, so even if compilers do compensate for inefficiency of the
> code, this does not make the language good.
 
The point is, the standard doesn't mandate that behavior for ANY of the
functions he defined. The standard doesn't even address the issue,
beyond giving implementations the freedom to generate any code that has
the same required observable behavior.
 
> The fact that the compiler puts a remedy to this by applying some
> operation that is hidden to the language specification does not make the
> language itself any more efficient.
 
The language itself is efficient because it's been quite deliberately
and carefully designed to allow implementations that are as efficient as
this one is. It's also inefficient, in that it doesn't prohibit
implementations that would convert all four functions into the same code
you'd naively expect to see generated for getb3().
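 
(The functions themselves aren't quoted in this digest; purely as an
illustrative guess at their shape, two hypothetical ways of fetching
byte 3 of a word that the as-if rule lets a compiler compile to
identical code:)
 
#include <cstdint>
#include <cstring>
 
// Hypothetical reconstruction, not the upthread code.
std::uint8_t getb3_shift(std::uint32_t v) {
    return static_cast<std::uint8_t>(v >> 24);
}
 
std::uint8_t getb3_memcpy(std::uint32_t v) {
    unsigned char b[4];
    std::memcpy(b, &v, sizeof v);
    return b[3]; // byte 3 counting from the low end (little-endian)
}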
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 17 09:02PM -0800

On 2/7/2021 10:43 AM, James Kuyper wrote:
>> #include "ppscale.h"
>> #include "ppscale.c"
>> }
[...]
 
Well, I have personally included a .c file. However, I have had to work
with a team that did that. I just never have had the need to do it.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 17 09:03PM -0800

On 2/17/2021 9:02 PM, Chris M. Thomasson wrote:
>>>     #include "ppscale.c"
>>>     }
> [...]
 
ARGHGHGH!
 
> Well, I have personally included a .c file. However, I have had to work
^^^^^^^^^^^^
NEVER
 
Well, I have personally NEVER included a .c file. However, I have had to
work
 
 
David Brown <david.brown@hesbynett.no>: Feb 18 09:05AM +0100

On 17/02/2021 22:17, Chris Vine wrote:
> And I don't think that C++03 was the source of this but if it was, I am
> absolutely certain that the authors of C++03 didn't think they were
> declaring past (and future) accepted practice to be invalid.
 
To my understanding (and I freely admit I didn't follow the changes in
C++ over time to the same extent as I did C), one of the changes in
C++03 was to give a clear (well, as clear as anything in these standards
documents...) specification of the memory model. Until then, a lot more
had been left up to chance of implementation.
 
At this time, compilers were getting noticeably smarter and doing more
optimisation - and processors were getting more complex (speculative
execution, out of order, multiprocessing, and so on). The old "it's all
obvious, right?" attitude to memory and object storage was not good enough.
 
It was not about trying to make old code invalid, it was about trying to
say what guarantees the compiler had to give you no matter what
optimisations it used or what processor you ran the code on. And like
many aspects of the C and C++ standards, it was based on trying to
understand what compilers did at the time, and what a substantial number
of programmers wrote - trying to get a consistent set of rules from
existing practice. Of course it is inevitable that some existing
practices and compilers would have to be changed.
 
(Again, I am /not/ claiming they got everything right or ideal here.)
 
> want to construct an object in uninitialized memory and cast your
> pointer to the type of that object? Fine, you complied with the
> strict-aliasing rule.
 
The "-fno-strict-aliasing" flag in gcc is a great idea, and I would be
happy to see a standardised pragma for the feature in all (C and C++)
compilers. But it is not standard. So in the beginning, there was the
"strict aliasing rule" ("effective type rules" is perhaps more accurate,
or "type-based aliasing rules"). There was no good standard way to get
around them - memcpy was slow (compilers were not as smart at that
time), there was no "-fno-strict-aliasing" flag to give you guarantees
of new semantics, and even union-based type punning was at best
"implementation defined". People wrote code that had no
standards-defined behaviour but worked in practice because compilers
were limited.
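 
The classic shape of such code, for concreteness (a sketch):
 
#include <cstdint>
#include <cstring>
 
// What people wrote: no standards-defined behaviour, but it worked
// because compilers of the day did not exploit the aliasing rules.
std::uint32_t bits_cast(float f) {
    return *reinterpret_cast<std::uint32_t*>(&f); // aliasing violation
}
 
// The defined alternative - which, back then, really did cost a
// call to memcpy.
std::uint32_t bits_memcpy(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}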
 
The code was /wrong/ - but practicalities and real life usually trump
pedantry and nit-picking, and code that /works/ is generally all you need.
 
 
Later C and C++ standards have gradually given more complete
descriptions of how things are supposed to work in these languages.
They have added things that weren't there in the older versions. C++03
did not make changes so that "X x(1); new (&x) X(2); x.foo();" is
suddenly wrong. It made changes to say when it is /right/ - older
versions didn't give any information and you relied on luck and weak
compilers. And C++17 didn't change the meanings here either - it just
gave you a new feature to help you write such code if you want it.
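 
For concreteness, that example as C++17 reads it (a sketch):
 
#include <new>
 
struct X {
    int n;
    explicit X(int n) : n(n) {}
    void foo() {}
};
 
int main() {
    X x(1);
    X* p = new (&x) X(2); // ends x's lifetime, begins a new object
    p->foo();             // always fine: p names the new object
    x.foo();              // fine here too: X has no const or
                          // reference members, so the old name
                          // refers to the new object
    std::launder(&x)->foo(); // the C++17 escape hatch for when the
                             // old name/pointer cannot be reused
}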
 
 
> Examples appeared in all the texts and websites,
> including Stroustrup's C++PL 4th edition. No one thought C++03 had the
> effect you mention.
 
Lots of C and C++ books - even written by experts - have mistakes. They
also have best practices of the day, which do not necessarily match best
practices of now. And they are certainly limited in their prediction of
the future.
 
Remember, the limitation that is resolved by adding std::launder stems
from C++03 (because before that, no one knew what was going on as it
wasn't specified at all), but the need for a fix was not discovered
until much later. That's why it is in C++17, not C++03. C++ and C are
complex systems - defect reports are created all the time.
 
> conceptual error, casting (!) the compiler's job (deciding whether code
> can be optimized or not) onto the programmer. Write a better compiler
> algorithm or don't do it at all.
 
I appreciate your point, and I am fully in favour of having the compiler
figure out the details rather than forcing the programmer to do so. I
don't know if the standard could have been "fixed" to require this here,
but certainly that would have been best.
 
However, it is clear that the use-cases of std::launder are rare - you
only need it in a few circumstances. And the consequences of letting
the compiler assume that const and reference parts of an object remain
unchanged are huge - devirtualisation in particular is a /massive/ gain
on code with lots of virtual functions. Do you think it is worth
throwing that out because some function taking a reference or pointer to
an object might happen to use placement new on it? How many times have
you used classes with virtual functions in your code over the years?
How many times have you used placement new on these objects?
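 
An illustrative sketch of what that assumption buys (not code from
the thread):
 
struct Base {
    virtual int f() const { return 1; }
};
 
int twice(const Base& b) {
    // If the compiler may assume b's dynamic type is stable across
    // the two calls - i.e. that f() does not placement-new a
    // different object over *this - it can load the vtable pointer
    // once, or devirtualise both calls when the dynamic type is
    // known. std::launder marks the rare code that breaks this.
    return b.f() + b.f();
}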
 
If people needed to add std::launder a dozen times per file, I could
understand the complaints. If the new standards had actually changed
existing specified behaviour, I could understand. If they had changed
the meaning or correctness of existing code, I'd understand. (And that
/has/ happened with C++ changes.) But here it is just adding a new
feature that will rarely be needed.
David Brown <david.brown@hesbynett.no>: Feb 18 10:09AM +0100

On 17/02/2021 22:05, Bo Persson wrote:
 
 
> https://en.cppreference.com/w/cpp/numeric/bit_cast
 
> constexpr double f64v = 19880124.0;
> constexpr auto u64v = std::bit_cast<std::uint64_t>(f64v);
 
Yes, std::bit_cast will be useful in some cases.
 
I don't know whether it will be much neater in practice than memcpy
for cases such as reading data from a buffer - you'd still need
somewhat ugly casts to access the data (such as referencing a 4-byte
subsection of a large unsigned char array). Of course, that kind of
thing can be written once in a template or function and re-used.
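 
For instance, a sketch of the kind of one-off helper meant here (the
names are mine, not from the thread; assumes C++20):
 
#include <array>
#include <bit>
#include <cstdint>
#include <cstring>
 
// Pull a T out of a byte buffer: memcpy into a byte array of the
// right size, then bit_cast to the destination type.
template <class T>
T load_from(const unsigned char* p) {
    std::array<unsigned char, sizeof(T)> bytes;
    std::memcpy(bytes.data(), p, sizeof(T));
    return std::bit_cast<T>(bytes);
}
 
// e.g.: std::uint32_t v = load_from<std::uint32_t>(buf + 4);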
 
It would have the advantage over memcpy of being efficient even on
weaker compilers, as it will not (should not!) lead to a function call.
But are there compilers that can't optimise simple memcpy and also
support C++20?
 
I think the clearest use-case for bit_cast will be in situations where
you would often use union-based type punning in C:
 
uint32_t float_bits_C(float f) {
    union { float f; uint32_t u; } u;
    u.f = f;
    return u.u;
}
 
uint32_t float_bits_Cpp(float f) {
    return std::bit_cast<uint32_t>(f);
}
 
or if you want to combine them :-) :
 
uint32_t float_bits(float f) {
    union { float f; uint32_t u; } u;
    u.f = f;
#ifdef __cplusplus
    u.u = std::bit_cast<uint32_t>(u.f);
#endif
    return u.u;
}
