Thursday, April 8, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Apr 07 08:33PM -0400

On Wed, 7 Apr 2021 23:20:33 +0100
> method; what you have doesn't do that so you haven't got a ..
> theory, what you have is a fruit loop hypothesis that isn't in the
> slightest bit scientific.
 
It's a theory. It addresses a tremendous number of outstanding
questions about why our solar system is the way it is, why the moons are
the way they are, and why there's such a stark contrast in design
between the inner "rocky" planets and the outer "gas giants."
 
If you'd pay attention to the theory and consider it, and stop living
in a world full of hate-spewing vitriol, you might learn something.
And if we worked together, we both might learn something.
 
Stop being divisive. We are stronger together than we are apart.
 
--
Rick C. Hodgin
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Apr 07 11:40PM -0400

On Wed, 7 Apr 2021 13:46:12 -0400
> Line and manufactured Earths:
 
> My 0037 response: https://www.youtube.com/watch?v=SqezC0PqVu8
> My 0036 response: https://www.youtube.com/watch?v=Okv6YPu6AHQ
 
Here's a perspective showing the age of the sea floor map wrapped
around the Earth. You can see how the lines are in the middle of the
areas they separate, except for places like North and South America,
which were presumably anchored so as to create them in this way.
 
Earth-wrapped age of the sea floor maps:
http://www.3alive.org/images/2008__sea_floor_age__continent_perspectives.png
 
And there's a video which shows that map in 3D going backward through
time. It begins around 3:10 into the video. The author is
Neal Adams and he's not a Christian and believes in natural millions
and billions of years, but the principle he conveys is the same. Just
take his video and apply it to the smaller Earth theory for a
manufactured Earth and a purposeful creation of the Earth, the purpose
of which is YOU and ME. People. God is calling PEOPLE out of the sin
that besets us.
 
Growing Earth
https://www.youtube.com/watch?v=oJfBSc6e7QQ&t=190s
 
See the basic idea at my website: http://www.3alive.org
 
--
Rick C. Hodgin
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Apr 08 05:02PM +0100

On 08/04/2021 01:33, Rick C. Hodgin wrote:
> questions about why our solar system is the way it is, why the moons are
> the way they are, why there's such a stark contrast in design from the
> inner "rocky" planets, and the outer "gas giants."
 
It's not a theory and it doesn't address anything of substance whatsoever.
 
 
> If you'd pay attention to the theory and consider it, and stop living
> in a world full of hate-spewing visceral, you might learn something.
> And if we worked together, we both might learn something.
 
It's not a theory: it is a crackpot hypothesis that deserves nobody's time
except maybe a psychiatrist's.
 
 
> Stop being divisive. We are stronger together than we are apart.
 
I wouldn't work with you if someone paid me 1 million USD to.
 
/Flibble
 
--
😎
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Apr 08 12:20PM -0400

On Thu, 8 Apr 2021 17:02:01 +0100
> On 08/04/2021 01:33, Rick C. Hodgin wrote:
> > Stop being divisive. We are stronger together than we are apart.
> I wouldn't work with you if someone paid me 1 million USD to.
 
I would work with you because you are valuable, smart, capable, and your
insight would prove valuable if you ever stepped down from your
framework of hating all things of God.
 
God created this entire solar system to create Earths that take 100s of
thousands of years each to produce, and He didn't do it for the Earth's
sake. He did it for *YOUR* sake, Leigh. God loves you, and He's trying
to teach you something. Trying to undo the false teachings of the
enemy.
 
Your whole world is crumbling by this theory. It's no wonder you're
lashing out. But try to get past that outrage and seek the truth. It
will lead you to the correct place, and give you peace of mind and
security.
 
--
Rick C. Hodgin
Juha Nieminen <nospam@thanks.invalid>: Apr 08 06:11AM

> This is comp.lang.c++, better send this question to comp.lang.c
 
Why? std::scanf() is a 100% C++ standard library function.
 
What makes you think otherwise?
Juha Nieminen <nospam@thanks.invalid>: Apr 08 06:14AM

> Is there some reason you need to use scanf?
 
> Note that scanf has undefined behavior on numeric input if the value
> can't be represented in the target type.
 
What's the better alternative in standard C++?
Manfred <noname@add.invalid>: Apr 08 01:54PM +0200

On 4/8/2021 8:11 AM, Juha Nieminen wrote:
>> This is comp.lang.c++, better send this question to comp.lang.c
 
> Why? std::scanf() is a 100% C++ standard library function.
 
> What makes you think otherwise?
 
I thought this was trivial, but since you are asking...
 
Because, even though the C standard library (or most of it) has been
included in the C++ standard, this function originates in the C language
and is mostly used in that context.
C++ provides and actually encourages other alternatives to scanf, as
Paavo gave examples of.
 
Thus, there's a good chance that there are more users in comp.lang.c
who are experienced with scanf than there are here.
In fact you can see that all replies about scanf are from regular
posters in comp.lang.c.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 08 04:12PM +0300

08.04.2021 09:11 Juha Nieminen kirjutas:
>> This is comp.lang.c++, better send this question to comp.lang.c
 
> Why? std::scanf() is a 100% C++ standard library function.
 
> What makes you think otherwise?
 
As scanf() can invoke UB (read: incorrect results) very easily if the
input stream does not match the expected format, it is unusable with
any external content (i.e. basically always). On top of that, it can
easily cause buffer overruns if one is not extra careful. Buffer
overruns are a major source of security bugs in C (and undeservedly also
in C++, thanks to the people who claim there is nothing wrong with using
unsafe C functions like scanf() in C++).
 
In C++ we have better means, so scanf() should be considered obsolete in
C++ for 30 years already, and is best left unused. Don't know or care
how they are dealing with it in C.
 
A little demo: should the program read in incorrect numbers happily, or
should it report an error?
 
 
#define _CRT_SECURE_NO_WARNINGS // needed for VS2019 to accept scanf
#include <iostream>
#include <string>
#include <sstream>
#include <stdio.h>
 
int main() {
    const char* buffer = "12345678912345678";
    int x;
    int k = sscanf(buffer, "%d", &x);
    if (k == 1) {
        std::cout << "scanf() succeeded and produced: " << x << "\n";
    } else {
        std::cout << "scanf() failed\n";
    }
    std::istringstream is(buffer);
    if (is >> x) {
        std::cout << "istream succeeded and produced: " << x << "\n";
    } else {
        std::cout << "istream failed.\n";
    }
}
 
Output:
scanf() succeeded and produced: 1578423886
istream failed.
Juha Nieminen <nospam@thanks.invalid>: Apr 08 06:09AM

> Floating Point should ALWAYS be treated is 'imprecise', and sometimes
> imprecise can create errors big enough to cause other issues.
 
That's actually one of the most common misconceptions about floating point
arithmetic.
 
I know that you didn't mean it like this, but nevertheless, quite often
people talk about floating point values as if they were some kind of
nebulous quantum mechanics Heisenberg uncertainty entities that have no
precise well-defined values, as if they were hovering around the
"correct" value, only randomly being exact if you are lucky, but most
of the time being off, and you can never be certain that it will have
a particular value, as if they were affected by quantum fluctuations
and random transistor noise.
 
Or that's the picture one easily gets when people talk about floating
point values.
 
In actuality, floating point values have very precise determined
values, specified by an IEEE standard. They, naturally, suffer from
*rounding errors* when the value to be represented can't fit in the
mantissa, but that's it. That's not uncertainty, that's just well-defined
rounding (which can be predicted and calculated, if you really wanted).
 
Most certainly if two floating point variables have been assigned the
same exact value, you can certainly assume them to be bit-by-bit
identical.
 
double a = 1.25;
double b = 1.25;
assert(a == b); // Will never, ever, ever fail
 
If two floating point values have been calculated using the exact
same operations, the results will also be bit-by-bit identical.
There is no randomness in floating point arithmetic. There are no
quantum fluctuations affecting them.
 
double a = std::sqrt(1.5) / 2.5;
double b = std::sqrt(1.5) / 2.5;
assert(a == b); // Will never, ever, ever fail
 
Also, there are many values that can be represented 100% accurately
with floating point. Granted that them being in base-2 makes it
sometimes slightly unintuitive what these accurately-representable
values are, because we think in base-10, but there are many.
 
double a = 123456.0; // Is stored with 100% accuracy
double b = 0.25; // Likewise.
 
In general, all integers up to 2^53 in magnitude (positive or negative)
can be represented with 100% accuracy in a double-precision floating
point variable. Decimal numbers have to be thought in base-2 when
considering whether they can be represented accurately. For example
0.5 can be represented accurately, 0.1 cannot (because 0.1 has an
infinite representation in base-2, and thus will be rounded to
the nearest value that can be represented with a double.)
"Fred. Zwarts" <F.Zwarts@KVI.nl>: Apr 08 09:30AM +0200

Op 08.apr..2021 om 08:09 schreef Juha Nieminen:
> 0.5 can be represented accurately, 0.1 cannot (because 0.1 has an
> infinite representation in base-2, and thus will be rounded to
> the nearest value that can be represented with a double.)
 
Indeed. It is often claimed that floating point is less precise than
integer, but the rounding errors of integer arithmetic are in many cases
much larger. For example (1/2)*2 gives for floating point the exact
value 1, whereas integer arithmetic results in 0 instead of 1 (due to
the rounding in the division), which is 100% off. 😉
 
--
Paradoxes in the relation between Creator and creature.
<http://www.wirholt.nl/English>.
David Brown <david.brown@hesbynett.no>: Apr 08 09:47AM +0200

On 08/04/2021 08:09, Juha Nieminen wrote:
 
> double a = 1.25;
> double b = 1.25;
> assert(a == b); // Will never, ever, ever fail
 
/Should/ never, ever, ever fail.
 
Not all floating point implementations follow IEEE standards exactly.
In particular, there are implementations which use intermediary formats
with greater precision than the formats used for memory storage. The
most common examples would be floating point coprocessors for the 68K
and x86 worlds, where 80-bit internal formats had greater precision than
64-bit doubles usually used by language implementations. A variable
that happens to be stored on the FPU's stack might then fail to compare
equal to the value of a variable stored in memory, despite being
"obviously" equal. (It's not going to happen with these particular
examples, being values with precise binary representations - I am
referring to general principles.)
 
If you need precisely controlled and replicable floating point
behaviour, make sure you use a compiler and flags that enforce IEEE
standards strictly. Many compilers do that by default, but can generate
significantly faster code if they are more relaxed (like gcc's
"-ffast-math" flag). Such relaxed behaviour is fine for a lot of code
(I believe, without reference or statistics, the great majority of
code). But you should then treat your floating point as being a little
imprecise, and be wary of using equality tests. (There is a reason gcc
has a "-Wfloat-equal" warning available.)
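One common (if simplistic) way to be wary of equality tests is a
tolerance-based comparison; the tolerance values below are illustrative,
not universal:

```cpp
#include <algorithm>
#include <cmath>

// Compare two doubles for approximate equality using a relative
// tolerance, falling back to an absolute tolerance near zero.
bool nearly_equal(double a, double b,
                  double rel_tol = 1e-9, double abs_tol = 1e-12) {
    return std::fabs(a - b) <=
           std::max(abs_tol, rel_tol * std::max(std::fabs(a), std::fabs(b)));
}
```

Appropriate tolerances depend on the magnitudes and the error accumulated
by the preceding computation, so they have to be chosen per use case.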
Juha Nieminen <nospam@thanks.invalid>: Apr 08 10:31AM

>> assert(a == b); // Will never, ever, ever fail
 
> /Should/ never, ever, ever fail.
 
> Not all floating point implementations follow IEEE standards exactly.
 
I can't think of any rational implementation, even if it used some custom
floating point representation and calculations, that could give false to
that particular comparison.
 
The thing that one should be aware of is that if two values have been
calculated differently, they may give values that differ in one or
more of the least-significant digits of the mantissa, even when the
two expressions are mathematically equivalent.
 
For example,
 
a = x / y;
b = x * (1.0/y);
assert(a == b); // will probably fail for many values of x and y
 
Heck, even this:
 
a = (x + y) + z;
b = x + (y + z);
assert(a == b); // may fail
Bart <bc@freeuk.com>: Apr 08 12:05PM +0100

On 08/04/2021 11:31, Juha Nieminen wrote:
> calculated differently, they may give values that differ in one or
> more of the least-significant digits of the mantissa, even when the
> two expressions are mathematically equivalent.
 
In your example, 'a' can be an external variable defined in a different
module and using a different compiler.
 
1.25 is unlikely to be compiled differently, but other constants could
well be. When there is no exact binary representation matching the
decimal, it's not so easy to produce the most accurate (and therefore
consistent) binary value.
 
I had a lot of trouble with this until I used strtod() in my compiler to
convert text to float. If I disable that, then these two programs:
 
===============================
c.c
-------------------------------
#include <stdio.h>
#include "c.h"
 
extern double a;
double b=X;
 
int main(void) {
    if (a==b)
        printf("%20.20e == %20.20e\n",a,b);
    else
        printf("%20.20e != %20.20e\n",a,b);
}
 
-------------------------------
 
===============================
d.c
-------------------------------
#include "c.h"
double a=X;
-------------------------------
 
Where c.h contains:
#define X 0.00000000000000000001
 
Produces this output when c.c is compiled with bcc and d.c with gcc:
 
9.99999999999999950000e-021 != 1.00000000000000010000e-020
 
The 9.999... value is gcc's, and presumably is closer to X.
David Brown <david.brown@hesbynett.no>: Apr 08 01:37PM +0200

On 08/04/2021 12:31, Juha Nieminen wrote:
 
> I can't think of any rational implementation, even if it used some custom
> floating point representation and calculations, that could give false to
> that paticular comparison.
 
As I said, it is hard to imagine those particular values leading to
failure, as they have exact representations in any sane floating point
representation.
 
It's a lot easier to see how you might have failures for other cases
where rounding to different precisions can affect the values.
 
> calculated differently, they may give values that differ in one or
> more of the least-significant digits of the mantissa, even when the
> two expressions are mathematically equivalent.
 
Yes, indeed.
 
 
> a = (x + y) + z;
> b = x + (y + z);
> assert(a == b); // may fail
 
Correct. IEEE-defined floating point operations are definitely not
associative - I don't think they are even always commutative, but I
could be wrong there (I am no expert in this area). One of the points
of the gcc "-ffast-math" flags is to tell the compiler it can assume
floating point arithmetic /is/ associative - either the expressions for
a and b give the same answer, or you don't care which it takes (in both
these examples). That can lead to far more efficient code.
Richard Damon <Richard@Damon-Family.org>: Apr 08 08:01AM -0400

On 4/8/21 2:09 AM, Juha Nieminen wrote:
>> imprecise can create errors big enough to cause other issues.
 
> That's actually one of the most common misconceptions about floating point
> arithmetic.
 
And I would say that THAT is one of the most common misconceptions about
floating point.
 
Yes, there are LIMITED situations where floating point will be exact, or
at least consistent, but unless you absolutely KNOW that you are in one
of those cases, thinking you are is the source of most problems.
 
And, if you are in one of those cases, very often floating point is not
the best answer by many measures (it may be the simplest, but that
doesn't always mean best).
 
Double precision floating point numbers represent somewhat fewer than
2**64 discrete values. The odds that a real number chosen truly at
random from their range is exactly one of those values are minuscule.
 
They do tend to be 'close enough', as few things actually need more
precision than doubles provide. So the way to go is to acknowledge that
they ARE imprecise, but precise enough for most cases, and to stay alert
for the cases where that imprecision becomes important, like the case
talked about previously.
 
> of the time being off, and you can never be certain that it will have
> a particular value, as if they were affected by quantum fluctuations
> and random transistor noise.
 
Yes, 'imprecise' does not mean 'random'; conflating the two shows a
misunderstanding of the terms.
 
> *rounding errors* when the value to be represented can't fit in the
> mantissa, but that's it. That's not uncertainty, that's just well-defined
> rounding (which can be predicted and calculated, if you really wanted).
 
Yes, a particular bit combination has an exact value that it represents,
and the operations on those bits have fairly precise rules for how they
are to be performed.
 
 
 
> double a = 1.25;
> double b = 1.25;
> assert(a == b); // Will never, ever, ever fail
 
That just shows repeatability, not accuracy.
 
It also works for the values of 0.1 and 0.2 even though they are NOT
exactly represented. As can be seen by:

double a = 0.1;
double b = 0.2;
double c = 0.3;
assert(a + b == c); /* Will fail on most binary floating point machines */
 
 
> double a = std::sqrt(1.5) / 2.5;
> double b = std::sqrt(1.5) / 2.5;
> assert(a == b); // Will never, ever, ever fail
 
Again, shows consistency not accuracy.
 
For almost all values of a we will find that
std::sqrt(a) * std::sqrt(a) != a
 
(Yes, this will succeed for some values of a, likely gotten by taking a
value with less than 26 bits of precision and squaring it.)
> 0.5 can be represented accurately, 0.1 cannot (because 0.1 has an
> infinite representation in base-2, and thus will be rounded to
> the nearest value that can be represented with a double.)
 
Yes, and if you really are dealing with just integers, keep them as
integers. If you have to stop to decide if your number happens to be one
that is exact, you probably are doing something wrong and making
problems for yourself, and will likely make a mistake at some point.
Much better to either reformulate so they ALWAYS are exact or you treat
them like they are always possibly inexact.
Bonita Montero <Bonita.Montero@gmail.com>: Apr 08 03:05AM +0200


> I have used "The C++ Standard Library Extensions: A Tutorial And
> Reference" by Pete Becker for reference over the years.
> https://www.amazon.com/Standard-Library-Extensions-Tutorial-Reference/dp/0321412990
 
I found it as an epub: https://easyupload.io/znkjii
Keith Thompson <Keith.S.Thompson+u@gmail.com>: Apr 07 07:12PM -0700

>> Reference" by Pete Becker for reference over the years.
>> https://www.amazon.com/Standard-Library-Extensions-Tutorial-Reference/dp/0321412990
 
> I found it as an epub: https://[DELETED]/[DELETED]
 
Or you could just steal a paper copy from your local library, or break
into the author's house.
 
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Apr 07 07:44PM -0700

On 4/7/2021 6:05 PM, Bonita Montero wrote:
>> Reference" by Pete Becker for reference over the years.
>> https://www.amazon.com/Standard-Library-Extensions-Tutorial-Reference/dp/0321412990
 
> I found it as an epub: [...]
Oh shi%!
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Apr 07 07:46PM -0700

On 4/7/2021 7:12 PM, Keith Thompson wrote:
 
>> I found it as an epub: https://[DELETED]/[DELETED]
 
> Or you could just steal a paper copy from your local library, or break
> into the author's house.
 
Actually, I do not know if Microsoft approves of this, but here is the
Win 4.0 Kernel source code on github:
 
https://github.com/ZoloZiak/WinNT4
Bonita Montero <Bonita.Montero@gmail.com>: Apr 08 04:49AM +0200


>> I found it as an epub: https://[DELETED]/[DELETED]
 
> Or you could just steal a paper copy from your local library, or break
> into the author's house.
 
Copying isn't the same as taking something away.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Apr 07 07:51PM -0700

On 4/7/2021 7:49 PM, Bonita Montero wrote:
 
>> Or you could just steal a paper copy from your local library, or break
>> into the author's house.
 
> Copying isn't the same as taking something away.
 
You can buy a copy for 51.99 on amazon. I have not downloaded from the
link you gave.
Juha Nieminen <nospam@thanks.invalid>: Apr 08 06:16AM

> I found it as an epub:
 
Why am I not surprised that you promote illegal activity?
gazelle@shell.xmission.com (Kenny McCormack): Apr 08 07:04AM

In article <s4m743$6pb$4@gioia.aioe.org>,
>In comp.lang.c++ Bonita Montero <Bonita.Montero@gmail.com> wrote:
>> I found it as an epub:
 
>Why am I not surprised that you promote illegal activity?
 
Why am I not surprised that you are an ardent net cop?
 
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/Noam
David Brown <david.brown@hesbynett.no>: Apr 08 09:27AM +0200

On 08/04/2021 04:49, Bonita Montero wrote:
 
>> Or you could just steal a paper copy from your local library, or break
>> into the author's house.
 
> Copying isn't the same as taking something away.
 
That is correct - copyright violations are not stealing, or pirating.
But they /are/ illegal and immoral in most circumstances. Please do not
do this, or encourage it, or spread links like this. The only reason we
can have books like this is because people pay for them - that's how the
authors and the publishers make their livings. So while such copyright
violations are technically not "stealing", they /are/ a step towards
depriving people of their living, and the public of such books in the
future.
 
(Some people feel there are grey areas, such as buying the paper book
then acquiring an electronic version for convenience, or for reading
while the paper book is still in the post. This might be viewed as
illegal but not immoral, on a very subjective basis. Many publishers
these days have arrangements and licensing that allows you to do this
simply and legally instead of resorting to illegal tactics.)
Juha Nieminen <nospam@thanks.invalid>: Apr 08 10:33AM

>>> I found it as an epub:
 
>>Why am I not surprised that you promote illegal activity?
 
> Why am I not surprised that you are an ardent net cop?
 
Why? Have I given that impression in the past? Where?
 
My comment was more relevant, though, because that person has been a
complete a-hole repeatedly in this newsgroup.