Wednesday, May 2, 2018

Digest for comp.lang.c++@googlegroups.com - 22 updates in 11 topics

boltar@cylonHQ.com: May 02 11:25AM

On Tue, 01 May 2018 20:12:14 +0300
>e.g. from 1/2^32 to 2^32 (i.e. 2E-10 .. 4E+9), which is pretty narrow
>compared to current double range 2E-308 .. 2E+308 (assuming 2x32-bit
>integers here for a fair comparison with a 64-bit double). A lot of
 
The current range might be large, but it has huge gaps in it where the
number simply can't be represented accurately and towards the top and
bottom of the range the current system is virtually useless.
 
>actually needed calculations involve things like gigahertzes or
>picometers which would not be easily representable/convertible in that
>system.
 
Probably not, OTOH how often do such small numbers get used in most
software? The GNU GMP library already handles arbitrarily small values, and
if you really want floating-point accuracy you'd use that anyway.
 
>bothered to implement it. Things like pi or sin(1) would still be
>represented inexactly, as well as seemingly innocent things like
>100000/100001 + 100001/100000.
 
Pi can't be represented accurately on any machine without an infinite
register size anyway. And plenty of division operations will exceed current
FPU limits. However I think trading apparent range for mathematical
accuracy in the CPU would be worth it.
boltar@cylonHQ.com: May 02 11:28AM

On Tue, 1 May 2018 10:19:17 -0700 (PDT)
>infinitely times as many unrepresentable values as representable ones,
>and that fact will always result in some comparisons failing that
>should, mathematically, have been true (and vice versa).
 
Yes, but at least with fixed point you know what the limits are and that
within those you will always get accurate results. With the current system
you never know whether a == comparison will work or not which means that you
can never use it if you want consistent code behaviour.
jameskuyper@alumni.caltech.edu: May 02 06:22AM -0700

> >integers here for a fair comparison with a 64-bit double). A lot of
 
> The current range might be large, but it has huge gaps in it where the
> number simply can't be represented accurately
 
Huge gaps? Throughout the entire range between DBL_MIN and DBL_MAX, the
gaps bracketing x are never larger than x*DBL_EPSILON, and can be
smaller than that by a factor as large as FLT_RADIX. DBL_EPSILON is
defined in <cfloat>, with specifications incorporated by reference from
the C standard (C++ 21.3.6p1), which requires that DBL_EPSILON be no
larger than 1E-9 (C 5.2.4.2.2p13); if
std::numeric_limits<double>::is_iec559 is true, then DBL_EPSILON is
2.2204460492503131E-16. Those are pretty small gaps, as far as I'm
concerned. The fact that they get larger for larger values of x matches
the way numbers are typically used in real life: less absolute precision
is needed when working with large numbers than with small ones; the
relative precision needed tends to be roughly constant over the entire
range of representable numbers.
 
> ... and towards the top and
> bottom of the range the current system is virtually useless.
 
Yes - and with floating point representations, unlike the one you're
proposing, the top and bottom of the range are well outside the range of
normal use. The top and bottom of the range of your system both fall
well within the range of many ordinary scientific and engineering
calculations - it doesn't have enough precision to handle calculations
with very small numbers well, and it overflows too easily for
calculations involving very large numbers.
 
 
> Probably not, OTOH how often do such small numbers get used in most
> software? The GNU GMP library handles arbitrarily small values already and
> if you really want floating point accuracy you'd use that anyway.
 
No, I wouldn't - floating point is much faster than GMP, and can handle
such cases with accuracy that is more than sufficient for typical uses
of such numbers.
 
> register size anyway. And plenty of division operations will exceed current
> FPU limits. However I think trading apparent range for mathematical
> accuracy in the CPU would be worth it.
 
The fact that you hold that belief suggests that you don't do number-
crunching for a living. Those who do tend to have a strong preference
for constant relative precision over a large range, rather than a
constant absolute precision over a very limited range. That's the reason
why floating point representations are popular.
jameskuyper@alumni.caltech.edu: May 02 06:44AM -0700


> Yes, but at least with fixed point you know what the limits are and that
> within those you will always get accurate results. With the current system
> you never know whether a == comparison will work or not ...
 
You might not know - but I do, and so do most people who do serious
number crunching for a living. The answer, of course, is that it almost
never makes sense to compare floating point values for exact equality,
for reasons that have more to do with the finite precision of
measurements than with the finite precision of floating point
representations. The main exceptions are flag values (with the case
under discussion in this thread being a prime example).
 
Fixed point and decimal floating point can make a lot of sense when most
of the numbers you're working with can be represented exactly as decimal
fractions, and have a small fixed maximum number of digits after the
decimal point - a situation that applies to many financial calculations.
 
However, fixed point has too limited a range, and too little precision
when working with small numbers, to be useful in most contexts where
you're doing calculations based upon measurements of physical quantities,
as is commonplace in scientific and engineering applications. Decimal
floating point has no advantage over binary floating point in such
contexts, and will generally make (marginally) less efficient use of
memory and/or time than binary floating point.
Paavo Helde <myfirstname@osa.pri.ee>: May 02 10:01PM +0300

> within those you will always get accurate results. With the current system
> you never know whether a == comparison will work or not which means that you
> can never use it if you want consistent code behaviour.
 
Ah, good to see you are starting to understand the matter. Make that
last sentence "you can almost never use it" and I believe most people
would agree.
 
You know, there are a lot of things in C++ which one should almost never
use, starting from trigraphs and strtok() and up to std::list and
multiple inheritance. Adding a floating-point == comparison to this list
is no big deal.
Manfred <noname@invalid.add>: May 02 11:53PM +0200

>> { ::std::cout <<( 0.1 + 0.2 == 0.3 )<< '\n'; }
 
>> transcript
 
>> 0
 
[...]
> compare numbers then that rather goes against its whole raison d'etre. Yes
> I know there are ways around it but frankly one shouldn't have to bugger about
> doing conversions just to carry out such a basic operation.
 
This is obvious for integer arithmetic, but for floating point math it is a
totally different matter.
 
This behavior is the direct consequence of the finiteness of binary
number representation, combined with floating point technology, which is
one of the major features of computers.
Besides, this is in fact a non-problem in the main application field
where floating point math is required, which is scientific/engineering
computing where FP numbers typically represent physical quantities.
Another major computing application field is finance, but, as someone
else correctly pointed out, currencies are better handled in fixed point
anyway (and yet elsethread someone reported that IBM saw a business
opportunity in decimal number representation).
 
In physics and engineering the following makes no sense:
if (this_apple weighs /exactly/ 0.2kg) then { do something; }
 
This is to say that floating point arithmetic is not designed to yield
/exact/ results, but this is not a problem for the application domains
for which it is targeted.
A consequence of such approximate computing is that FP math requires
some specific skills.
 
From the gcc manpage:
-Wfloat-equal
Warn if floating-point values are used in equality
comparisons.
 
The idea behind this is that sometimes it is convenient (for
the programmer) to consider floating-point values as
approximations to infinitely precise real numbers. If you
are doing this, then you need to compute (by analyzing the
code, or in some other way) the maximum or likely maximum
error that the computation introduces, and allow for it when
performing comparisons (and when producing output, but that's
a different problem). In particular, instead of testing for
equality, you should check to see whether the two values have
ranges that overlap; and this is done with the relational
operators, so equality comparisons are probably mistaken.
Sky89 <Sky89@sky68.com>: May 02 04:15PM -0400

Hello..
 
 
Is reference counting slower than GC?
 
I think that Java and other garbage-collected interpreters and
compilers are not general purpose; read for example this:
 
 
"Total memory matters
 
GC tends to result in more memory being used in total. The trigger for
scanning is often low memory conditions, meaning that a lot of memory
needs to be allocated prior to a scanning operation. This has
significant OS level implications: memory used by the app must be
swapped in/out, and also prevents other apps, and file caches, from
using the memory.
 
It's my feeling, from experience, that it's this aspect of GC'd
applications that cause the biggest performance impact. The overall
system just goes slower due to the large amount of memory involved. It
doesn't seem like it needs to be a fundamental problem of a GC though.
The structure of a standard library, and typical programming paradigms,
contributes a lot to this effect."
 
 
Read more here:
 
https://mortoray.com/2016/05/24/is-reference-counting-slower-than-gc/
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Siri Cruise <chine.bleu@yahoo.com>: May 02 02:06PM -0700


> Is reference counting slower than GC?
 
Reference counting distributes the cost over the entire execution, while GC
tends to split time between program execution and collection.
 
> I think that this Java and other garbage collected interpreters and
> compilers are not general purpose, read for example this:
 
GC is more general. It can manage any kind of cyclic or acyclic graph.
Reference counting only works on cyclic graphs if the programmer distinguishes
forward edges from back edges as they are made, which can put a considerable burden on the
programmer. With GC you just add edges as needed and then you can use something
like Tarjan to distinguish forward and back edges if needed.
 
> GC tends to result in more memory being used in total. The trigger for
 
Typical modern computers have lots of memory. I've been using Boehm for years
without a problem, while Safari Web Content, presumed to be reference counted
because It Came From Apple, regularly crashes on a memory leak, sometimes taking out the
whole system.
 
Apple went with reference counting despite the cyclic graph problem because they
were more concerned about an interface freezing in a collection phase than that they
will have to deal with cyclic graphs. My code generally has to be prepared for
cyclic graphs.
 
--
:-<> Siri Seal of Disavowal #000-001. Disavowed. Denied. Deleted. @
'I desire mercy, not sacrifice.' /|\
I'm saving up to buy the Donald a blue stone This post / \
from Metebelis 3. All praise the Great Don! insults Islam. Mohammed
Sky89 <Sky89@sky68.com>: May 02 05:33PM -0400

On 5/2/2018 5:06 PM, Siri Cruise wrote:
> more concerned about an interface freezing in a collection phase than that they
> will have to deal with cyclic graphs. My code generally has to be prepared for
> cyclic graphs.
 
Hello,
 
"Perhaps the most significant problem is that programs that rely on
garbage collectors often exhibit poor locality (interacting badly with
cache and virtual memory systems), occupy more address space than the
program actually uses at any one time, and touch otherwise idle pages.
These may combine in a phenomenon called thrashing, in which a program
spends more time copying data between various grades of storage than
performing useful work. They may make it impossible for a programmer to
reason about the performance effects of design choices, making
performance tuning difficult. They can lead garbage-collecting programs
to interfere with other programs competing for resources"
 
 
Thank you,
Amine Moulay Ramdane.
Lynn McGuire <lynnmcguire5@gmail.com>: May 02 04:21PM -0500

"C Is Not a Low-level Language"
https://queue.acm.org/detail.cfm?id=3212479
 
"Your computer is not a fast PDP-11."
 
Sigh, another proponent of "C sucks".
 
Lynn
rick.c.hodgin.rick.c.hodgin@gmail.com: May 02 11:09AM -0700

Jesus confirmed we all have sin
rick.c.hodgin.rick.c.hodgin@gmail.com: May 02 11:08AM -0700

Jesus is a c++ compiler
 
I will be installing a confirmation system today that I will post through
in moving forward. It will be a link to my personal website (http://www.libsf.org)
which will hold an exact copy of the posted content I write on Usenet.
 
Anyone clicking the link can verify the content came from me by seeing some
confirmation information, and an exact copy of my post's content, and nobody
will be able to add content that appears to be from me with a valid link.
 
It will separate the true-Rick posts from the false ones. I will actually
begin writing my posts there, have it generate the format for me, and copy-
and-paste its generated content.
 
--
Rick C. Hodgin
rick.c.hodgin.rick.c.hodgin@gmail.com: May 02 11:06AM -0700

This post describes the life timeline of each person on Earth. People are
divided into two camps: saved, unsaved. This relates whether or not a
person receives forgiveness from Jesus Christ for their sin ... or not.
 
When a person is born:
 
You are ALREADY dead in sin. You ONLY know what your flesh feeds you. We
are more than our flesh. Eternal life is life of the spirit, not the flesh,
and because of sin we have ALREADY been judged and are no longer alive in
our spirit.
 
Below is the life cycle of each of us, for we are more than this flesh, and
we continue on after we leave this world:
 
Eventual Believer Non-believer
--------------------------------------- ---------------------------------
FLESH SPIRIT == FLESH SPIRIT
===== ====== == ===== ======
| (dead) == | (dead)
| || == | ||
(born) | (dead) == (born) | (dead)
|| | || == || | ||
(grow) | (dead) == (grow) | (dead)
|| | || == || | ||
+-------------------------------------+ == || | (dead)
| (accepts Christ as (born again) | == || | ||
| Savior and Lord) (alive) | == || | (dead)
| || || | == || | ||
| (live, work) (alive) | == (live, work) | (dead)
| || || | == || | ||
| (age) (alive) | == (age) | (dead)
| || || | == || | ||
| (die) (alive) | == (die) | (dead)
| || | == || | ||
| Note: Once the (rewards) | == (sleep) | (dead)
| believer accepts || | == | ||
| forgiveness of (alive with | == | (judgment)
| their sin, then God forever)| == | ||
| the flesh and || | == | (cast into
| the spirit are (Heaven) | == | Hell forever)
| one in the new || | == | ||
| believer's life. (alive) | == | (torment)
| || | == | ||
| God's Holy Spirit (joy) | == | (death)
| guides us in our || | == | ||
| flesh / natural (alive) | == | (torment)
| life, as a down || | == | ||
| payment of the (peace) | == | (death)
| full live we || | == | ||
| receive in His (alive) | == | (torment)
| Kingdom in all || | == | ||
| of eternity. (happiness) | == | (death)
| || | == | ||
| (alive) | == | (torment)
| || | == | ||
| (learning) | == | (death)
| || | == | ||
| (alive) | == | (torment)
| || | == | ||
| (growing) | == | (death)
| || | == | ||
| (alive) | == | (torment)
| || | == | ||
+-------------------------------------+ ---------------------------------

And it goes on. For those alive with Christ, exploring the universe,
learning ever more about His Kingdom. How much is there to know? How
deep is an infinite God? How vast is the Kingdom He's created here for
us to explore?
 
I would save people from that quick end of being cast into Hell and being
tormented forever, by teaching them to come to Jesus Christ and ask for
forgiveness for their sin.
 
You do have sin.
You do need forgiveness.
Jesus is ready to forgive you.
Ask Him to forgive you and make your spirit alive forever.
 
For those who are only born once (of the flesh), you will die twice
(once of the flesh, once of the spirit). But for those who are born
twice (once of the flesh, once of the spirit), you will only die once.
 
Both will receive an eternal body after we leave this world. But the
one who dies with their sin remaining charged to them will be cast into
Hell, because they never came to accept the truth, never came to accept
that Jesus Christ is truth, that He cleanses us from all unrighteousness.
 
Your eternal fate depends ENTIRELY upon what YOU do with Jesus Christ.
Ignore Him or accept His free offer of salvation ... the choice is yours.
 
--
Rick C. Hodgin
Sky89 <Sky89@sky68.com>: May 02 02:04PM -0400

Hello..
 
 
More precision..
 
Again about Lockfree algorithms
 
I wrote in my previous post about Lockfree algorithms that
I think Lockfree algorithms without an appropriate Backoff
algorithm and a Jitter are much slower than Lock based algorithms (read
my previous posts to understand why), so you have to add the following
Backoff+Jitter, and Lockfree algorithms will become faster than Lock
based algorithms; read the following carefully:
 
https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
 
 
So now I will soon finish my new FIFO queue that is node based and
"Waitfree" on the producer side and Lockfree on the consumer side;
it will use my new "scalable" reference counting and an efficient
Backoff algorithm with an efficient Jitter to be faster than Lock
based algorithms.
 
And I will soon finish my new Lockfree LIFO stack that will use
an efficient Backoff algorithm with an efficient Jitter to be
faster than Lock based algorithms.
 
They will work with C++, Delphi and FreePascal, so stay tuned!
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 02 01:24PM -0400

Hello..
 
 
Again about Lockfree algorithms
 
I wrote in my previous post about Lockfree algorithms that
I think Lockfree algorithms without an appropriate Backoff
algorithm and a Jitter are not efficient, so you have to add the
following Backoff+Jitter, and Lockfree algorithms will become more
efficient than Lock based algorithms; read the following carefully:
 
https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
 
 
Thank you,
Amine Moulay Ramdane.
Jorgen Grahn <grahn+nntp@snipabacken.se>: May 02 01:25PM

On Tue, 2018-05-01, Daniel wrote:
> };
 
> Now, suppose it's desirable for our users that including A.hpp also brings
> in B, and we particularly want to keep the name A for that header file.
 
Sounds a bit speculative. Any real-life examples?
 
You have people using A, and you introduce B which depends on A.
Seems to me if people are explicitly interested in B, they might as
well #include B.h.
 
I also believe it's not an error to define both A and B in A.h,
if B is some kind of utility closely related to A.
 
 
> #include <detail/A.hpp>
> #include <B.hpp>
 
> Comments?
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Daniel <danielaparker@gmail.com>: May 02 08:32AM -0700

On Wednesday, May 2, 2018 at 9:26:07 AM UTC-4, Jorgen Grahn wrote:
 
> > Now, suppose it's desirable for our users that including A.hpp also brings
> > in B, and we particularly want to keep the name A for that header file.
 
> Sounds a bit speculative. Any real-life examples?
 
Sure.
 
"A.hpp" is https://github.com/danielaparker/jsoncons/blob/master/include/jsoncons/json.hpp
 
B.hpp is https://github.com/danielaparker/jsoncons/blob/master/include/jsoncons/json_convert_traits.hpp
 
Examples:
 
https://github.com/danielaparker/jsoncons/blob/master/doc/ref/encode_json.md
 
I'm currently including B.hpp (json_convert_traits.hpp) at the bottom of A.hpp (json.hpp).
 
Thanks for commenting.
 
Daniel
Sky89 <Sky89@sky68.com>: May 01 07:52PM -0400

Hello...
 
 
Be careful of Lockfree algorithms

Look at how they can be "much" slower than Lock based algorithms;
read the following from acmqueue to see it:
 
==
 
In practice, however, lock-free algorithms may not live up to these
performance expectations. Consider, for example, Michael and Scott's
lock-free queue algorithm.11 This algorithm implements a queue using a
linked list, with items enqueued to the tail and removed from the head
using CAS loops. (The exact details are not as important as the basic
idea, which is similar in spirit to the example in figure 4.) Despite
this, as figure 6a shows, the lock-free algorithm fails to scale beyond
four threads and eventually performs worse than the two-lock queue
algorithm.
 
The reason for this poor performance is CAS failure: as the amount of
concurrency increases, so does the chance that a conflicting CAS gets
interleaved in the middle of a core's read-compute-update CAS region,
causing its CAS to fail. CAS operations that fail in this way pile
useless work on the critical path. Although these failing CASes do not
modify memory, executing them still requires obtaining exclusive access
to the variable's cache line. This delays the time at which later
operations obtain the cache line and complete successfully (see figure
5b, in which only two operations complete in the same time that three
operations completed in figure 5a).
 
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=2991130
 
==
 
 
 
Thank you,
Amine Moulay Ramdane.
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: May 01 06:27PM -0700

On 5/1/2018 4:52 PM, Sky89 wrote:
 
> Becareful of Lockfree algorithms
 
> Look at how they are "much" slower than lock based algorithms,
> read the following from acmqueue to notice it:
 
Efficient, well-designed lock-free is much slower than its mutex-based
equivalent? Really, on what planet? Wow.
 
[...]
cross@spitfire.i.gajendra.net (Dan Cross): May 02 02:35AM

>Hello...
 
>[snip]
 
What does this have to do with C++?
 
- Dan C.
red floyd <no.spam@its.invalid>: May 02 07:19AM -0700

On 05/01/2018 07:35 PM, Dan Cross wrote:
 
>> [snip]
 
> What does ths have to do with C++?
 
> - Dan C.
 
The idiot's a spammer and a crank. Just killfile him.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
