Friday, May 4, 2018

Digest for comp.lang.c++@googlegroups.com - 14 updates in 9 topics

"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 04 01:54AM +0200

On 03.05.2018 22:57, Vir Campestris wrote:
> neatly solves the cyclic graph problem Siri Cruise mentioned.
 
> A lot of small programs terminate before they need to run GC - and for
> them it's probably faster.
 
It might be a tragedy of the commons. For each individual program it can
be faster to just leak memory, provided it terminates before running out
of memory. But when most or all programs do that, I can imagine that
things will run much more slowly on the whole.
 
So that brings in the idea of government (OS) regulation and compliance
enforcement.
 
Everything is politics. :-p
 
 
Cheers!,
 
- Alf
Jorgen Grahn <grahn+nntp@snipabacken.se>: May 04 11:21AM

On Thu, 2018-05-03, Vir Campestris wrote:
> On 02/05/2018 21:15, Sky89 wrote:
>> Is reference counting slower than GC?
 
> That depends. On lots of things. This is pretty widely studied.
 
It's worth noting that C++ code typically uses neither. (Specifically,
a few std::shared_ptr here and there, versus Python's "everything is a
refcounted object".)
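 
For illustration, a minimal sketch of that typical style (the names are
made up here; the point is just where the ownership usually lives):
 
#include <memory>
#include <string>
#include <vector>
 
struct Node { std::string name; };
 
int main()
{
    // Most objects: plain values with scoped lifetimes, no counting at all.
    std::vector<Node> nodes{{"a"}, {"b"}};
 
    // Single-owner heap objects: unique_ptr, still no reference count.
    auto owned = std::make_unique<Node>(Node{"c"});
 
    // Only genuinely shared ownership pays for a count.
    auto shared = std::make_shared<Node>(Node{"d"});
    auto alias  = shared;          // use count goes to 2
}                                  // everything reclaimed here, deterministically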
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
"Chris M. Thomasson" <invalid_chris_thomasson@invalid.invalid>: May 04 01:05PM -0700

On 5/2/2018 1:15 PM, Sky89 wrote:
> Hello..
 
> Is reference counting slower than GC?
 
[...]
 
What type of reference counting? Proxy reference counting can get around
cycles because the objects do not hold counts on one another: they are
covered by a proxy count, so each object does not need its own counter.
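 
A rough sketch of the idea (my own illustration with made-up names, and
it uses a mutex rather than the lock-free machinery a real proxy
collector would use; the point is just that one shared count covers a
whole batch of nodes instead of one count per node):
 
#include <mutex>
#include <vector>
 
// One shared count stands in for per-object counts. Readers bump the
// proxy on entry; nodes retired while readers are active are deferred
// and freed only once the reader count drops back to zero.
template<class Node>
class Proxy {
    std::mutex m_;
    long readers_ = 0;
    std::vector<Node*> deferred_;
public:
    void enter() { std::lock_guard<std::mutex> g(m_); ++readers_; }
 
    void leave() {
        std::vector<Node*> to_free;
        {
            std::lock_guard<std::mutex> g(m_);
            if (--readers_ == 0) to_free.swap(deferred_);
        }
        for (Node* n : to_free) delete n;   // no reader can still see these
    }
 
    // Called after a node has been unlinked from the shared structure.
    void retire(Node* n) {
        std::lock_guard<std::mutex> g(m_);
        if (readers_ == 0) delete n;        // nobody could be holding it
        else deferred_.push_back(n);        // defer until the readers drain
    }
};
 
Real proxy collectors replace the mutex with atomic counters and rotate
proxy objects so readers never block, but the counting structure is the
same idea.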
boltar@cylonHQ.com: May 04 08:34AM

On Thu, 3 May 2018 12:46:30 -0700 (PDT)
 
>It's hard to represent really small quantities with acceptable accuracy
>using only 20 decimal places - for any number less than 1e-20 (and
>engineering often involves numbers that small or smaller), it's
 
Does it? Give a real world engineering example that requires accuracy to
20 decimal places. I suspect even space probe navigation isn't that accurate.
 
>Matrix operations, in particular, involve so many multiplies and adds
>that single precision roundoff errors would make the results unusable
>for even relatively small matrices, such as 10x10.
 
If you're going to suggest that floating point somehow alleviates the problem
of chaos in calculations that would otherwise occur with fixed point then
I think you're on to a bit of a loser.
 
>you were doing with integers is essentially equivalent to fixed-point
>math, but more complicated than would be possible with language-level
>support for fixed-point math.
 
And that's my whole point about having fixed-point types be native to
a language instead of having to do the calculations using integers.
 
>contradict that claim. Do you? However, I suspect that it reflects a
>lack of familiarity with the scientific and engineering communities on
>your part. Some of the biggest and most powerful machines in the world
 
I'll grant you that I don't work in that sphere, but I do know that C & C++
are a long way from being the most popular languages there.
 
>crunch numbers 24 hours a day to perform tasks like weather prediction
>and quantum field theory calculations, and a lot of that code is written
>in C nowadays.
 
And a lot isn't.
 
 
>A 64 bit floating point type has all the precision I need for most of
>the work I do. However, a 64 bit fixed point format like the one you
>advocated wouldn't even come close to being adequate.
 
So a 64 bit int + 64 bit fractional part isn't enough for you? Wtf are you
calculating? A 64 bit u_long has a max of 18446744073709551615, which is 20
digits, of which 19 would be usable as a fractional part. Are you seriously
suggesting 19 decimal places isn't enough for your job??
 
>> so much of it? You think they use floats?
 
>No, the banks don't do very much of it. Creating all of the financial
>reports needed by anyone in the world on all of the financial
 
Financial reports?? That's the tip of the iceberg, my friend. How the hell do
you think automated trading works, not to mention all the other investment
instrument calculations done all the time?
 
>transactions performed annually world wide requires a much smaller
>number of mathematical operations than a single day's worth of weather
>forecasting simulations.
 
Utter BS.
 
>that way by most implementations.
>If the language had been designed by physical scientists, it would have
>had complex math from the beginning, rather than waiting until C99.
 
I wasn't talking about C per se; I was talking about the standard
representation of floating point numbers.
Robert Wessel <robertwessel2@yahoo.com>: May 04 11:24AM -0500

>>engineering often involves numbers that small or smaller), it's
 
>Does it? Give a real world engineering example that requires accuracy to
>20 decimal places. I suspect even space probe navigation isn't that accurate.
 
...
 
>calculating? A 64 bit u_long has a max of 18446744073709551615 which is 20
>digits , of which 19 would be usable as a fractional part. Are you seriously
>suggesting 19 decimal places isn't enough for your job??
 
 
He didn't say that. He said that having to deal with inputs that are
20 orders of magnitude apart is common. The precision provided by ~64
bits of whatever format is usually adequate, but the range is not.
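 
To put rough numbers on that (a quick illustrative snippet, not anything
from the posts above):
 
#include <cfloat>
#include <cmath>
#include <cstdio>
 
int main()
{
    // Unsigned 64.64 fixed point: 64 integer bits, 64 fraction bits.
    const double fixed_max  = std::ldexp(1.0, 64);   // ~1.8e19, largest magnitude
    const double fixed_step = std::ldexp(1.0, -64);  // ~5.4e-20, smallest nonzero
 
    std::printf("64.64 fixed point spans roughly %.3e .. %.3e\n", fixed_step, fixed_max);
    std::printf("double spans roughly            %.3e .. %.3e\n", DBL_MIN, DBL_MAX);
    std::printf("double relative precision:      %.3e (~16 significant digits)\n",
                DBL_EPSILON);
    // Planck's constant, 6.6e-34, already underflows the fixed-point format,
    // while double still carries ~16 good digits at that magnitude.
}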
Dombo <dombo@disposable.invalid>: May 04 08:30PM +0200

>> engineering often involves numbers that small or smaller), it's
 
> Does it? Give a real world engineering example that requires accuracy to
> 20 decimal places. I suspect even space probe navigation isn't that accurate.
 
The reason for using floating point is to deal with a large dynamic
range, without having to sacrifice relative accuracy at one end of the
range.
 
<snip>
> calculating? A 64 bit u_long has a max of 18446744073709551615 which is 20
> digits , of which 19 would be usable as a fractional part. Are you seriously
> suggesting 19 decimal places isn't enough for your job??
 
Here is a piece of code used in a real product for you:
 
const double h = 6.62607004e-34;   // Planck constant [J*s]
const double e = 1.60217662e-19;   // elementary charge [C]
const double m = 9.10938356e-31;   // electron rest mass [kg]
const double c = 2.99792458e8;     // speed of light [m/s]
 
// Relativistic de Broglie wavelength of an electron accelerated through
// 'voltage' volts (needs <cmath> for std::sqrt).
return h / std::sqrt(e * voltage * m * (e * voltage / (m * c*c) + 2.0));
 
I leave it as an exercise for you to figure out how many bits would be
required to do the same thing with fixed-point numbers without losing
accuracy compared to the floating point implementation.
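 
For a rough sense of the scale involved (a back-of-the-envelope sketch,
not a full answer):
 
#include <cmath>
#include <cstdio>
 
int main()
{
    // Rough size of a single fixed-point format that could hold both
    // extremes appearing in that expression.
    const double h  = 6.62607004e-34;               // smallest magnitude used
    const double c2 = 2.99792458e8 * 2.99792458e8;  // ~9e16, a large intermediate
 
    const int frac_bits = static_cast<int>(std::ceil(-std::log2(h)));  // ~111
    const int int_bits  = static_cast<int>(std::ceil( std::log2(c2))); // ~57
 
    std::printf("Fraction bits just to reach h: %d\n", frac_bits);
    std::printf("Integer bits to hold c^2:      %d\n", int_bits);
    std::printf("Total, before any precision:   %d\n", frac_bits + int_bits);
}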
Sky89 <Sky89@sky68.com>: May 04 02:01AM -0400

Hello..
 
I have corrected some typos because I write fast, so please read again.
 
We have to be smart and gain more precision by using logic and measurement.
 
You have seen my previous post, but to be more precise I have to define
what I mean by "general" purpose. As in philosophy, if we are pure
rationalists we will neglect empiricism; the point is that we have to
know how to be both rationalist and empiricist, and this is inherent to
my definition of general purpose. For example, when I say that lock-free
algorithms are not general purpose, I mean first that understanding the
mechanism constrains the definition: "general purpose" here means
general purpose of what is lock-free, so we must not think of the "set"
as unconstrained by the definition of lock-free. So when I say that
lock-free algorithms are not general purpose, I mean that since
lock-free algorithms have significant disadvantages, we cannot call them
general purpose; and the same applies to compilers or interpreters: if
we notice that they come with significant disadvantages, we can start to
call them not general purpose. This is my meaning and definition of
general purpose. This is also why you notice that today we learn many
programming languages: because we are coming to understand the
disadvantages of this or that language, and this greater "complexity" of
learning many languages and tools is like returning to the complexity of
programming in assembler. So now you are able to understand my previous
post; here it is:
 
About computer programming..
 
Since we are not capable of designing a compiler, interpreter or the
like that is suitable for everything in programming, that is, one that
is general purpose, and since for example lock-free algorithms are not
general because they have their advantages and disadvantages, computer
programming today is more like a return to assembler; I mean it is like
returning to much more "complexity". For example, you have to be "aware"
of the "inside" of lock-free algorithms to be able to say that lock-free
algorithms are faster than lock-based algorithms when there are more and
more context switches between the threads that use them, and by
understanding them you have to be capable of applying them to reality
more efficiently, because they are not general purpose. So now you
understand my main point; this is why I wrote before that:
 
Here is the main very important point in computer programming:
 
I think the main point is that you must not think of working with just
Java, or with just lock-free algorithms and never lock-based algorithms.
Now you are able to see my point of view that Java and lock-free
algorithms are not "general" purpose, so you have to start to enlarge
your conception of our world and be more diversified in computer
programming; this is what we call responsibility today.
 
 
Thank you,
Amine Moulay Ramdane.
 
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: May 04 03:06PM +0100

On Fri, 4 May 2018 02:01:46 -0400
 
> Hello..
 
> I correct some typos because i write fast, please read again..
[crap snipped]
 
You write fast because you are ill, and are going through a manic phase.
 
As I have previously pointed out, you are also a fuckwit. You post
pointless corrections to pointless corrections to posts which are always
drivel and usually off topic. Your views are worthless junk, you know
very little about C++ and no one takes you seriously.
 
You said a month or so ago, when you were making a similar idiot of
yourself, that you would stop posting to this newsgroup. You need to be
true to your word, at least until you gain a minimum understanding of C++
and restrict your posting to C++ issues here.
 
Go away.
Real Troll <real.troll@trolls.com>: May 03 10:10PM -0400

Has anybody noticed that the guy from Morocco has stopped taking his
meds and has now become paranoid, posting hundreds of messages a day?
He is posting on the C++ newsgroup <comp.lang.c++> as well as on the
Delphi newsgroup <alt.comp.lang.borland-delphi>.
 
Can somebody arrange to send him his meds? It looks like he has run out
of them and his country can't afford to import them from the developed
countries.
Sky89 <Sky89@sky68.com>: May 04 01:55AM -0400

Hello...
 
We have to be smart and gain more precision by using logic and measurement.
 
You have seen my previous post, but to be more precise I have to define
what I mean by "general" purpose. As in philosophy, if we are pure
rationalists we will neglect empiricism; the point is that we have to
know how to be both rationalist and empiricist, and this is inherent to
my definition of general purpose. For example, when I say that lock-free
algorithms are not general purpose, I mean first that understanding the
mechanism constrains the definition: "general purpose" here means
general purpose of what is lock-free, so we must not think of the "set"
as unconstrained by the definition of lock-free. So when I say that
lock-free algorithms are not general purpose, I mean that since
lock-free algorithms have significant disadvantages, we cannot call them
general purpose; and the same applies to compilers or interpreters: if
we notice that they come with significant disadvantages, we can start to
call them not general purpose. This is my meaning and definition of
general purpose. This is also why you notice that today we learn many
programming languages: because we are coming to understand the
disadvantages of this or that language, and this greater "complexity" of
learning many languages and tools is like returning to the complexity of
programming in assembler. So now you are able to understand my previous
post; here it is:
 
About computer programming..
 
Since we are not capable of designing a compiler, interpreter or the
like that is suitable for everything in programming, that is, one that
is general purpose, and since for example lock-free algorithms are not
general because they have their advantages and disadvantages, computer
programming today is more like a return to assembler; I mean it is like
returning to much more "complexity". For example, you have to be "aware"
of the "inside" of lock-free algorithms to be able to say that lock-free
algorithms are faster than lock-based algorithms when there are more and
more context switches between the threads that use them, and by
understanding them you have to be capable of applying them to reality
more efficiently, because they are not general purpose. So now you
understand my main point; this is why I wrote before that:
 
Here is the main very important point in computer programming:
 
I think the main point is that you must not think of working with just
Java, or with just lock-free algorithms and never lock-based algorithms.
Now you are able to see my point of view that Java and lock-free
algorithms are not "general" purpose, so you have to start to enlarge
your conception of our world and be more diversified in computer
programming; this is what we call responsibility today.
 
 
Thank you,
Amine Moulay Ramdane.
 
Sky89 <Sky89@sky68.com>: May 04 01:18AM -0400

Hello...
 
 
About computer programming..
 
Since we are not capable of designing a compiler, interpreter or the
like that is suitable for everything in programming, that is, one that
is general purpose, and since for example lock-free algorithms are not
general because they have their advantages and disadvantages, computer
programming today is more like a return to assembler; I mean it is like
returning to much more "complexity". For example, you have to be "aware"
of the "inside" of lock-free algorithms to be able to say that lock-free
algorithms are faster than lock-based algorithms when there are more and
more context switches between the threads that use them, and by
understanding them you have to be capable of applying them to reality
more efficiently, because they are not general purpose. So now you
understand my main point; this is why I wrote before that:
 
Here is the main very important point in computer programming:
 
I think the main point is that you must not think of working with just
Java, or with just lock-free algorithms and never lock-based algorithms.
Now you are able to see my point of view that Java and lock-free
algorithms are not "general" purpose, so you have to start to enlarge
your conception of our world and be more diversified in computer
programming; this is what we call responsibility today.
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 04 12:57AM -0400

Hello..
 
 
Here is the main very important point in computer programming:
 
I think the main point is that you must not think of working with just
Java, or with just lock-free algorithms and never lock-based algorithms.
Now you are able to see my point of view that Java and lock-free
algorithms are not "general" purpose, so you have to start to enlarge
your conception of our world and be more diversified in computer
programming; this is what we call responsibility today.
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 04 12:41AM -0400

Hello...
 
About Lockfree algorithms
 
I am more experienced now, and I will give my point of view on lock-free
algorithms:
 
Lock-free algorithms are problematic because they are prone to
starvation, or to much longer waiting times, for some of the threads.
They are faster than lock-based algorithms when there are more context
switches between threads, but that susceptibility to starvation is not
good, so as you can see, lock-free and lock-based algorithms each have
their advantages and disadvantages.
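 
To make the trade-off concrete, here is a minimal sketch (my own
illustration, a textbook Treiber-style push, not code from any
particular library):
 
#include <atomic>
#include <mutex>
 
struct Node { int value; Node* next; };
 
// Lock-free push: the system as a whole makes progress (some CAS always
// wins), but one unlucky thread can keep losing the CAS race -- that is
// the starvation risk described above.
void push_lockfree(std::atomic<Node*>& head, Node* n)
{
    n->next = head.load(std::memory_order_relaxed);
    while (!head.compare_exchange_weak(n->next, n,
                                       std::memory_order_release,
                                       std::memory_order_relaxed))
    {
        // On failure, n->next is reloaded with the current head; retry.
    }
}
 
// Lock-based push: a thread that holds the mutex can be preempted,
// stalling everyone behind it, but there is no retry loop to lose
// over and over again.
void push_locked(std::mutex& m, Node*& head, Node* n)
{
    std::lock_guard<std::mutex> g(m);
    n->next = head;
    head = n;
}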
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 03 11:27PM -0400

Hello..
 
Here is a very interesting paper:
 
Are Lock-Free Concurrent Algorithms Practically Wait-Free?
 
This paper suggests a simple solution to this problem. We show that, for
a large class of lock-free algorithms, under scheduling conditions
which approximate those found in commercial hardware architectures,
lock-free algorithms behave as if they are wait-free. In other words,
programmers can keep on designing simple lock-free algorithms instead of
complex wait-free ones, and in practice, they will get wait-free progress.
 
Read more here:
 
https://arxiv.org/abs/1311.3200
 
 
Thank you,
Amine Moulay Ramdane.
