Wednesday, June 2, 2021

Digest for comp.lang.c++@googlegroups.com - 13 updates in 5 topics

Paavo Helde <myfirstname@osa.pri.ee>: Jun 02 09:13PM +0300

02.06.2021 19:57 Bonita Montero kirjutas:
>> _HAS_ITERATOR_DEBUGGING macro (note: using this macro is deprecated by
>> MS).
 
> That has nothing to do with pseudo-UB.
 
There is no such thing as pseudo-UB. If your code contains UB, it might
work, it might not work, it might appear to work without really working,
and even if it worked it might cease to work the next day, with the next
compiler, or with the next compiler version. There are no guarantees.
 
A term like "pseudo-UB" implies that there is some magic guarantee that
the UB code always works as intended. Alas, UB specifically means no
guarantees, so there is no such thing. Even if you hack your UB code to
(apparently) work with some specific C++ implementation, you have no
guarantee that your code will work with any other C++ implementation, or
with the next version of the same implementation.
 
The fact that your code does not work with a specific C++ implementation
without hacking already shows that this is not a theoretical issue.
 
If the C++ committee comes around and says, yes, sorry, we make
&*vec.end() defined behavior after all, then yes, you are welcome to
write this code, now with defined behavior, and MS is obliged to fix
their now non-conforming compiler. But until that happens, UB is UB.
"Alf P. Steinbach" <alf.p.steinbach@gmail.com>: Jun 02 08:27PM +0200

On 2 Jun 2021 15:12, Bonita Montero wrote:
> [snip]
> "ptr = vec.end().operator ->()" works perfectly - and even around
> _HAS_ITERATOR_DEBUGGING.
 
But why use an expression with formally Undefined Behavior when there is
a perfectly well-defined way to do it?
 
- Alf
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 02 05:23PM -0400

On 6/2/21 2:13 PM, Paavo Helde wrote:
> 02.06.2021 19:57 Bonita Montero kirjutas:
...
 
> A term like "pseudo-UB" implies that there is some magic guarantee that
> the UB code always works as intended. Alas, UB specifically means no
> guarantees,
 
 
Not quite. It means "behavior for which this document imposes no
requirements" (3.30). That wording deliberately and explicitly allows
for the possibility that something other than "this document" might in
fact impose requirements on the behavior of the code. This is, in fact,
one of the most common reasons for writing UB code (and the only
legitimate reason for doing so): an implementation provides some
guarantees for the behavior that the standard itself does not. Such code
is only portable to implementations which provide that same guarantee,
but in many contexts that's not a serious problem.
 
However, if Bonita wants to rely upon that exception, she needs to
identify the document which provides the guarantees she's relying on - and
red floyd <myob@its.invalid>: Jun 02 02:24PM -0700

On 6/2/2021 11:27 AM, Alf P. Steinbach wrote:
 
> But why use an expression with formally Undefined Behavior when there is
> a perfectly well-defined way to do it?
 
> - Alf
 
Because it works on her system and therefore by definition it's all good.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 03 12:52AM +0200

> There is no such thing as pseudo-UB.
 
It's pseudo-UB because there's actually no machine where the behaviour
is not up to what you need.
 
Rest of your nonsense unread ...
Bonita Montero <Bonita.Montero@gmail.com>: Jun 03 12:53AM +0200

> However, if Bonita wants to rely upon that exception, she needs to
> identify the document which provides the guarantees she's relying
> on - and.
 
No, I don't.
I just know that there's no machine that works other than expected.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 03 12:54AM +0200

>> _HAS_ITERATOR_DEBUGGING.
 
> But why use an expression with formally Undefined Behavior when there is
> a perfectly well-defined way to do it?
 
It actually works on all machines.
David Brown <david.brown@hesbynett.no>: Jun 02 11:32PM +0200

On 02/06/2021 18:43, Manfred wrote:
> that there are ABI problems with intmax_t, but I am not convinced that
> they are absolutely objective - I may think there is some weight of the
> legacy of ABI definitions as they have been structured for decades.
 
Legacy and ABI definitions are definitely the issues here, and
technically these are part of the implementation, rather than the
standard. The way the standard defines intmax_t makes it very
impractical (but not impossible) for implementations to provide larger
integer types. I see that as a problem or limitation in the standard,
rather than an implementation issue.
 
> After all, in C passing arguments of varying type is not a new issue -
> structs have been part of the ABI since the beginning of time.
 
Yes, but structs (and arrays) are defined in terms of existing scalar
types. ABIs generally do not specify how to pass larger integer types
- it is not covered by the specification for a struct comprising two
smaller types.
 
> strong enough because of the limited range of cases where this is really
> needed. As I wrote earlier the real need is probably somewhat for a
> niche area.
 
That seems reasonable.
 
 
>> Perhaps I am missing something.  What do the div functions give you that
>> the division operators do not (assuming an optimising compiler) ?
 
> I assume you mean the division /and/ remainder operators.
 
Yes.
 
> Obviously the div functions give both formally in one operation, taking
> advantage of the ASM instructions that do that.
 
A compiler will usually do that too, given "x = a / b; y = a % b;". On
many processors, a single division instruction produces both results and
compilers will take advantage of that.
 
When I did a few tests on <https://godbolt.org>, compilers generated
calls to library "div" functions when these were given in the source
code, and direct cpu division instructions for the division and
remainder operators. I must admit it surprised me a little - I'd have
thought the "div" functions would be handled as builtins. But they are
not on the list of gcc "Other builtins" ("abs" is, as are a great many
other standard library functions). I guess the developers simply
haven't bothered - perhaps because the "div" functions are rarely used.
Certainly the operators give simpler and clearer source code, and
significantly smaller and faster object code in practice.
 
> I know that most optimizing compilers are able to combine a sequence of
> '/' and '%' into a single instruction, but this is relying on
> optimization, and thus not standardized.
 
The same could be said about calling "div" - the standard does not give
any indication that it is implemented in any particularly efficient way.
Most likely, it is done by :
 
div_t div(int a, int b) {
    div_t d;
    d.quot = a / b;
    d.rem = a % b;
    return d;
}
 
 
> relevant that some feature, if it is important to the program, be
> possible to express in source code with no need to assume some
> behind-the-scenes compiler behavior.
 
I don't disagree on that principle at all. But I /do/ disagree about
any assumptions you make about how "div" is implemented, and that it has
any required behaviour or guarantees that you don't get from the operators.
 
> More importantly, in this last point I took the div functions as one
> example, in fact there is a whole family of those, ranging from abs to
> strtol, and even printf that are involved with intmax_t.
 
The "abs" function family does not need an "intmax_t" specific version -
it could be handled by the <tgmath.h> "abs" generic macro. (That's for
C - for C++, you'd prefer a template.)
 
Some of the other functions taking or returning an "intmax_t" would add
complications, yes. That's why "intmax_t" would need to be deprecated
rather than just dropped.
Manfred <noname@add.invalid>: Jun 03 12:49AM +0200

On 6/2/2021 11:32 PM, David Brown wrote:
> not on the list of gcc "Other builtins" ("abs" is, as are a great many
> other standard library functions). I guess the developers simply
> haven't bothered - perhaps because the "div" functions are rarely used.
 
Interesting, that's surprising.
 
> Certainly the operators give simpler and clearer source code, and
> significantly smaller and faster object code in practice.
 
'Certainly' smaller and faster because you tested it. By their
definition, there is no reason why div should perform worse than
'/' and '%'.
In fact, the only motivation for the *div functions to exist is that
they perform better than, or at the very least equal to, the pair '/' and '%'.
To me it sounds like a matter of QoI.
 
 
> I don't disagree on that principle at all. But I /do/ disagree about
> any assumptions you make about how "div" is implemented, and that it has
> any required behaviour or guarantees that you don't get from the operators.
 
Well, it's the /definition/ of "div" that it calculates the quotient and
the remainder in one go, not an assumption.
In this specific case, the same applies to performance: as I said it is
the only reason for it to exist.
If the implementation is sloppy then it's good to know, but it's also a
different matter.
 
Michael S <already5chosen@yahoo.com>: Jun 02 02:02PM -0700

On Wednesday, June 2, 2021 at 4:24:31 AM UTC+3, Keith Thompson wrote:
 
> -O3 Like -O2, except that it enables optimizations that
> take longer to perform or that may generate larger
> code (in an attempt to make the program run faster).
 
It sounds like a generic statement that was likely written at the very beginning of LLVM development and does not mean much in practice.
Same statement 6 years ago:
https://web.archive.org/web/20151027095342/https://clang.llvm.org/docs/CommandGuide/clang.html
 
In gcc, there is a list of sub-optimizations enabled by various -O flags.
https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Optimize-Options.html#Optimize-Options
I have never seen a similar list for clang/LLVM.
 
Also, when observing asm code generated by clang, I have never encountered different results between -O2 and -O3.
To be fair, I typically looked at relatively small snippets, so if the differences are related to interprocedural optimizations I could have overlooked them.
 
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jun 02 11:38AM -0700

On 6/2/2021 1:23 AM, Doctor Who wrote:
>> according to politeness, rudeness, knowledge, how they treat others, etc.
 
> I find Bonita nice, polite and exceptional programmer, thus she has my
> wholehearted respect.
 
She does have a tendency to call people stupid morons. Also, she has
this rather strange habit of failing to quote properly. Oh well, shit
happens. :^)
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Jun 02 11:39AM -0700

On 6/2/2021 10:01 AM, Manfred wrote:
>>>> ...
 
> It's also possible that Bonita and Doctor Who are the same person, and
> they are joyfully trolling along
 
Oh my. Perhaps... Na, but... Humm.... Maybe?
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Jun 02 12:07PM -0400

Greetings all,
 
I have devoted much time to this theory, and it holds up in every
regard. I have discussed it with people, shown my videos to others,
devout Christians, scientists, and nobody can debunk it with anything
that's not related to some other theory which contradicts it.
 
I announce today that I am rejecting this theory for one reason: The
theory states that a Savior is given to each cycle of Earth, and that
the people who lived during that 7,000 year cycle would be saved by
that Savior and not the one all-time Savior that the Bible teaches
exists. It is enough to discount the entire theory.
 
I believe in Jesus Christ.
 
I believe He is man's only and all-time Savior and what is happening to
us on this Earth in this existence is unique and special in His
universe.
 
I reject the Solar System Assembly Line theory on that basis.
 
--
Rick C. Hodgin
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
