Friday, April 27, 2018

Digest for comp.lang.c++@googlegroups.com - 14 updates in 2 topics

woodbrian77@gmail.com: Apr 27 10:20AM -0700

If anyone would like a demonstration of on-demand code
generation, please let me know. It usually takes less
than 10 minutes if you have a compiler with C++ 2017
support. The first step is to clone/download the
software here: https://github.com/Ebenezer-group/onwards
.
 
I also have an offer to help someone who is willing
to use my software. I'll spend 16 hours a week for
six months on your project if we use my software as
part of the project. There are more details here:
http://webEbenezer.net/about.html
 
 
Brian
Ebenezer Enterprises - Enjoying programming again.
http://webEbenezer.net
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 27 08:44AM +0200

Hello.
 
I'm aware of all the quirks with FP math.
However, I thought comparing std::numeric_limits<double>::max() to
std::numeric_limits<double>::max() should yield a deterministic result.
Am I correct?
 
 
 
I've got code that works when compiled without optimizations, but fails
with clang 4.0's "-O" option.
 
By eliminating "if"s and irrelevant variables, it can be reduced to
something like the following:
 
> else
> std::cout<<false<<std::endl;
> }
 
This snippet in fact works up to -O3, always outputting 1.
The original, more complex, code, however, only works without optimizations.
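 
Reconstructed (the top of the snippet was cut off in the quoting, so
the exact names are a guess), the whole reduced test is essentially:
 
#include <iostream>
#include <limits>
 
double f()
{
   return std::numeric_limits<double>::max();
}
 
int main()
{
   double A=f();
   if (A==std::numeric_limits<double>::max())
      std::cout<<true<<std::endl;
   else
      std::cout<<false<<std::endl;
}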
 
Before I start trying different compilers (or versions), is my original
assumption right or wrong?
 
bye & Thanks
av.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Apr 27 09:36AM +0200

On 27.04.2018 08:44, Andrea Venturoli wrote:
> optimizations.
 
> Before I start trying different compilers (or versions), is my original
> assumption right or wrong?
 
I think the shown code should work, as you say it does; but in general,
for /arithmetic results/ it depends on the platform.
 
On the PC, the original 1980s (actually 1979) architecture had a main
processor with a separate math co-processor. The co-processor calculated
internally with 80-bit floating point, for precision, while the main
program used 64-bit floating point, presumably to save memory (as a
16-bit processor it couldn't fit those values into registers anyway).
And the PC architecture's evolution is a study in backwards
compatibility: AFAIK to this day it works /as if/ it were like that.
 
This means that depending on the optimization level some operations can
be done in 80-bit format, some as-if in 64-bit format (when the result
is converted back down), and not always with exactly the same result at
the end of a sequence of operations even when you start with apparently
the same values.
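 
A contrived example of the effect (whether it actually misbehaves
depends on compiler, target and flags; classically it can with x87 code
generation and without -ffloat-store):
 
#include <iostream>
 
int main()
{
    volatile double x = 1.0;   // volatile defeats constant folding
    double y = x / 3.0;        // may be rounded to 64 bits when stored
    // The right-hand side below may be evaluated and compared at 80-bit
    // register precision, so the comparison can come out false.
    std::cout << (y == x / 3.0) << "\n";
}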
 
Cheers & hth.,
 
- Alf
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 27 11:48AM +0200

On 04/27/18 09:36, Alf P. Steinbach wrote:
 
> I think the shown code should work
 
No certainty, though :(
 
 
 
 
 
> 16-bit processor it couldn't fit those values into registers anyway).
> And the PC architecture's evolution is a study in backwards
> compatibility: AFAIK to this day it works /as if/ it were like that.
 
Right, I studied this several years ago. However, I'm not sure this
still holds today, with other instruction sets (e.g. SSE) possibly
being used by the compiler.
 
 
 
 
 
> is converted back down), and not always with exactly the same result at
> the end of a sequence of operations even when you start with apparently
> the same values.
 
This shouldn't be the case, though.
We start with a 64b value (std::numeric_limits<double>::max()) and we
compare it to a 64b value without any intermediate math.
Even if it was converted to 80b and back it shouldn't lose precision,
should it?
 
I added:
>std::cout<<std::setprecision(20)<<std::fixed<<A<<"\n"<<std::numeric_limits<double>::max()<<std::endl;
and get:
> 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00000000000000000000
> 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00000000000000000000
 
The two numbers are the same to the last digit.
 
If the problem were an 80b intermediate format, the only explanation
would be that, when the values are converted to 80b, the least
significant bits are not zeroed but left undefined.
This would surprise me a lot.
 
 
 
bye & Thanks
av.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 27 01:00PM +0300

On 27.04.2018 9:44, Andrea Venturoli wrote:
 
> This snippet in fact works up to -O3, always outputting 1.
> The original, more complex, code, however, only works without
> optimizations.
 
The C++ standard says that if a value is exactly representable in the
destination floating-point type then it must be stored exactly. I
believe this means that if you store an integer 0 or 1 to an IEEE double
you can compare directly with 0 or 1, because all 32-bit integers are
exactly representable in an IEEE double.
 
Of course, any computation can ruin that result, e.g.
 
double x = 1;
assert(x==1); // guaranteed with IEEE floating-point
x += 0.0;
assert(x==1); // not guaranteed any more as far as I understand
 
One can argue that std::numeric_limits<double>::max() ought to be
exactly representable in double, by definition, and the above
considerations should hold. Nevertheless, comparing with
std::numeric_limits<double>::max() seems pretty fragile, I would use
std::numeric_limits<double>::quiet_NaN() instead if I needed a special
double value (together with a
static_assert(std::numeric_limits<double>::has_quiet_NaN)).
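 
A sketch of that (the names are just for illustration; std::isnan is in
<cmath>):
 
#include <cmath>
#include <limits>
 
static_assert(std::numeric_limits<double>::has_quiet_NaN,
              "quiet NaN required for the sentinel");
 
const double no_value = std::numeric_limits<double>::quiet_NaN();
 
// NaN never compares equal to anything, itself included,
// so the test must go through std::isnan rather than ==.
bool is_real_value(double d) { return !std::isnan(d); }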
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 27 12:22PM +0200

On 04/27/18 12:00, Paavo Helde wrote:
 
Thanks for your answer.
 
> One can argue that std::numeric_limits<double>::max() ought to be
> exactly representable in double, by definition, and the above
> considerations should hold.
 
Do you think this is a bug in Clang/LLVM then?
 
 
 
 
 
> Nevertheless, comparing with
> std::numeric_limits<double>::max() seems pretty fragile
 
Why then?
 
 
 
> I would use
> std::numeric_limits<double>::quiet_NaN() instead if I needed a special
> double value
 
Well, max() has other advantages, like being able to be used with other
operators besides ==.
 
In my case the function returns A=max(), so that x<A will hold.
Using NaN would mean first checking whether A is NaN and then acting as
normal, while returning max() does not require two comparisons in many
places (except of course in the case where I'm seeing the problem I
wrote about).
 
For now I've changed the code to use a big enough number (e.g. 100 or
1000) for the specific situation instead, but max() had looked like a
"sure" value.
 
 
 
bye & Thanks
av.
Manfred <noname@invalid.add>: Apr 27 01:43PM +0200

On 4/27/2018 12:22 PM, Andrea Venturoli wrote:
>> exactly representable in double, by definition, and the above
>> considerations should hold.
 
> Do you think this is a bug in Clang/LLVM then?
 
I think the key point here is about optimizations: the values /should/
compare equal, but apparently, as they are propagated through different
pipelines, something changes. One thing I could imagine is that the
final comparison is performed as long double, and there a difference is
(wrongly) detected.
For the sake of curiosity, inspecting the binary representation of the
values (as hex bytesequence) and/or the generated asm might give some
more insight.
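 
For example, something like this (assuming a 64-bit IEEE double) would
show whether the two values really share a bit pattern:
 
#include <cstdint>
#include <cstdio>
#include <cstring>
 
void dump_bits(double d)
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits); // well-defined, unlike pointer casts
    std::printf("%016llx\n", (unsigned long long)bits);
}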
 
That is the theory; in practice, what is the goal of the code? The
question is about the best usage of FP arithmetic; IME this kind of
problem is usually solved by revising the rationale at the basis of
the code - more below.
 
 
>> Nevertheless, comparing with std::numeric_limits<double>::max() seems
>> pretty fragile
 
> Why then?
 
In general, /any/ comparison for exact equality between FP values is
inherently brittle.
As Paavo pointed out, in principle exact representation of any FP value
is lost after any math operation - and that covers practically any FP
value actually used in real-world code.
 
>> needed a special double value
 
> Well, max() has other advantages, like being able to be used with other
> operators besides ==.
 
Back to the "in practice" point, I wonder what is the use of max() in
the actual code.
Usually when I need a large value for initialization I use HUGE_VAL
(which translates to infinity() in numeric_limits), which is guaranteed
to yield the correct result for </> comparisons with any other value.
When I need to test for finiteness, std::isfinite() (or the finite()
function where it is available) provides the desired result.
If the problem domain has some specific bounds, then I would use such
specific limits instead.
 
In short, it looks like it would be advisable to modify the code to
avoid the == comparison in the first place.
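 
A sketch of that approach (the names are made up; std::isfinite is
C++11, from <cmath>):
 
#include <cmath>
#include <limits>
 
const double NO_BOUND = std::numeric_limits<double>::infinity(); // or HUGE_VAL
 
// x < NO_BOUND holds for every finite x, and finiteness can be
// tested directly instead of comparing against a sentinel with ==.
bool has_bound(double a) { return std::isfinite(a); }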
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 27 01:57PM +0200

On 04/27/18 13:43, Manfred wrote:
 
> As Paavo pointed out, in principle exact representation of any FP value
> is lost after any math operation - and that covers practically any FP
> value actually used in real-world code.
 
Agree on that.
Except there are no math operations here: just assignment/function
return/comparison.
 
 
 
 
 
> Back to the "in practice" point, I wonder what is the use of max() in
> the actual code.
 
Return a value V such that, for each reasonable value of x, x<V will hold.
 
 
 
> If the problem domain has some specific bounds, then I would use such
> specific limits instead.
 
Agree.
That's how I've modified the code now (luckily I was able to bound the
domain).
Still... you know curiosity... :)
 
 
 
bye & Thanks
av.
Manfred <noname@invalid.add>: Apr 27 02:02PM +0200

On 4/27/2018 1:57 PM, Andrea Venturoli wrote:
>> the actual code.
 
> Return a value V such that, for each reasonable value of x, x<V will
> hold.
 
In general, infinity() is what has been designed for the purpose.
 
Ralf Goertz <me@myprovider.invalid>: Apr 27 02:10PM +0200

Am Fri, 27 Apr 2018 13:57:59 +0200
 
> Agree on that.
> Except there are no math operations here: just assignment/function
> return/comparison.
 
Do I understand correctly, you assign a floating-point value to a
variable, do nothing mathematical with it, and compare it later with a
similarly treated floating-point value? If that is the case, can't those
values be translated into something more reliably comparable, like an
int via hashing or a string via decimal representation?
Manfred <noname@invalid.add>: Apr 27 02:23PM +0200

On 4/27/2018 2:02 PM, Manfred wrote:
 
>> Return a value V such that, for each reasonable value of x, x<V will
>> hold.
 
> In general, infinity() is what has been designed for the purpose.
 
Forget this. This was dumb.
Andrea Venturoli <ml.diespammer@netfence.it>: Apr 27 03:15PM +0200

On 04/27/18 14:10, Ralf Goertz wrote:
 
> Do I understand correctly,
 
Partially.
 
 
 
> similarly treated floating-point value? If that is the case, can't those
> values be translated into something more reliably comparable, like an
> int via hashing or a string via decimal representation?
 
I've got:
 
> else
> return [some computation];
> }
 
Then, in several places:
 
> double A=f(...);
> while (x<A) ...
 
or
 
> double A=f(...);
> if (x<A) ...
 
or
 
> double A=f(...),x=min(A,...);
 
etc...
 
 
 
Only in a few places I see:
 
> ...;
> else
> ...
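 
Putting the pieces together, the overall shape is roughly this (a
reconstruction; the function name, its argument and the computation are
placeholders):
 
#include <iostream>
#include <limits>
 
double f(bool no_bound)
{
   if (no_bound)
      return std::numeric_limits<double>::max(); // sentinel: "no upper bound"
   else
      return 42.0;                               // [some computation]
}
 
int main()
{
   double A=f(true);
   double x=1.0;
   if (x<A)                                      // the common usage
      std::cout<<"in range"<<std::endl;
   if (A==std::numeric_limits<double>::max())    // the few places using ==
      std::cout<<"no bound was set"<<std::endl;
}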
 
 
 
bye & Thanks
av.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 27 07:26AM -0700


> This snippet in fact works up to -O3, always outputting 1. The
> original, more complex, code, however, only works without
> optimizations.
 
Looking at the C and C++ standards, I don't see any wiggle room.
The function std::numeric_limits<double>::max() returns a value of type
double, and must return one particular (finite) value. Any
particular value must compare equal to itself.
 
You should post code that shows the problem you're asking about.
By "reducing" the code what you've actually done is change it so
the problem is no longer there. Always post code that does in
fact exhibit the behavior you want to ask about.
Marcel Mueller <news.5.maazl@spamgourmet.org>: Apr 27 04:59PM +0200

On 27.04.18 08.44, Andrea Venturoli wrote:
> Before I start trying different compilers (or versions), is my original
> assumption right or wrong?
 
The assumption is true. In fact, I was not able to reproduce your
problem with different versions of gcc from 3.3.5 up to 6.3.0 on
different platforms (Linux amd64, Linux x86, OS/2, Linux ARMv6) with
optimizations, even with -ffast-math.
 
 
Marcel
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
