Saturday, May 5, 2018

Digest for comp.lang.c++@googlegroups.com - 6 updates in 3 topics

Lynn McGuire <lynnmcguire5@gmail.com>: May 05 05:45PM -0500

"The beets blog: our solution for the hell that is filename encoding,
such as it is."
http://beets.io/blog/paths.html
 
"By far, the worst part of working on beets is dealing with filenames.
And since our job is to keep track of your files in the database, we
have to deal with them all the time. This post describes the filename
problems we discovered in the project's early days, how we address them
now, and some alternatives for the future."
 
Having traveled down this road very recently in our combined C++ and
Fortran code, I find this very interesting.
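 
If I read the beets post right, the core of their approach is to treat file
names as opaque byte sequences rather than decoded text. A rough C++17
analogue (my own sketch, not anything from the post or from our code) is to
keep everything as std::filesystem::path and avoid round-tripping through
std::string:
 
#include <filesystem>
#include <iostream>
 
namespace fs = std::filesystem;
 
int main() {
    // Paths keep the native encoding (effectively raw bytes on POSIX),
    // so nothing is lost even when a name is not valid UTF-8.
    for (const auto& entry : fs::directory_iterator(".")) {
        std::cout << entry.path() << '\n';  // operator<< prints the path quoted
    }
}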
 
Lynn
"Öö Tiib" <ootiib@hot.ee>: May 05 02:37AM -0700

On Thursday, 3 May 2018 00:21:56 UTC+3, Lynn McGuire wrote:
> https://queue.acm.org/detail.cfm?id=3212479
 
> "Your computer is not a fast PDP-11."
 
> Sigh, another proponent of "C sucks".
 
Thanks, I had a strong feeling of becoming stupider as a result of reading
that strange rant. Isn't it irrelevant whether the attacker and victim processes
of the Meltdown/Spectre family of vulnerabilities were written in JavaScript
or in C? The author seemed to imply that it was all somehow the fault of C. :/
 
My understanding so far is that the processor manufacturers have
implemented their branch prediction, speculative execution, and out-of-order
execution in a dirty manner that leaves side effects. If one carefully
digs through that dirt, it is possible to get data of other processes or
of the kernel out of it. Attempts to compensate for such vulnerabilities at the software
level have been controversial.
"As it is, the patches are COMPLETE AND UTTER GARBAGE."
Linus Torvalds https://lkml.org/lkml/2018/1/21/192
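 
To make the "digging in the dirt" concrete, here is a minimal sketch of the
well-known Spectre variant 1 bounds-check-bypass gadget (after Kocher et al.);
the names and array sizes are invented for this sketch, and only the
victim-side pattern is shown, not the cache-timing recovery step:
 
#include <cstddef>
#include <cstdint>
 
std::size_t array1_size = 16;
std::uint8_t array1[16];
std::uint8_t array2[256 * 4096];
 
void victim_function(std::size_t x) {
    if (x < array1_size) {
        // If this branch is mispredicted, the loads below still execute
        // speculatively for an out-of-bounds x. The secret byte array1[x]
        // selects which cache line of array2 is brought in, and that cache
        // state survives the roll-back, so an attacker can later recover
        // the byte by timing accesses to array2.
        volatile std::uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
 
int main() {
    victim_function(1);  // benign call; the interesting behaviour happens speculatively
}
 
Note that the same pattern leaks regardless of whether it was produced from
C, C++ or a JIT-compiled language.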
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: May 05 01:05PM +0200

On 05.05.2018 11:37, Öö Tiib wrote:
> that strange rant. Isn't it irrelevant whether the attacker and victim processes
> of the Meltdown/Spectre family of vulnerabilities were written in JavaScript
> or in C? The author seemed to imply that it was all somehow the fault of C. :/
 
The author's main point seems to be that there is an architectural
monopoly in mainstream processing, similar to (my analogy) DOS, which
was horrible but dominated because for most users it was good enough and
there was a supporting infrastructure of knowledge, tools, and
compatibility with others. He notes (rightly) that C is adapted to
that architecture, and (incorrectly) that current programming practices
are adapted to C. He never mentions the "von Neumann architecture", which
is what this is: the blame for the idea of total centralization of
processing, a conceptual single thread, should be placed on the Moore
school and Johann von Neumann, roughly 1946. Well, if we don't go
further back to Euclid, some 300 BC or thereabouts.
 
 
> level have been controversial.
> "As it is, the patches are COMPLETE AND UTTER GARBAGE."
> Linus Torvalds https://lkml.org/lkml/2018/1/21/192
 
I wonder what became of all the wonderful parallel processing ideas of
the 1980's, like Linda and Lucid. I liked those. :)
 
Cheers!,
 
- Alf
woodbrian77@gmail.com: May 05 12:19PM -0700

On Saturday, May 5, 2018 at 6:05:18 AM UTC-5, Alf P. Steinbach wrote:
> processing, a conceptual single thread, should be placed on the Moore
> school and Johann von Neumann, roughly 1946. Well, if we don't go
> further back to Euclid, some 300 BC or thereabouts.
 
Call me old school, but single threaded applications are still
cool in 2018.
 
 
Brian
Ebenezer Enterprises
http://webEbenezer.net
Christian Gollwitzer <auriocus@gmx.de>: May 05 07:40AM +0200

>> support for fixed-point math.
 
> And that's my whole point about having fixed point floats being native to
> a language instead of having to do calculations using integers.
 
You keep repeating this, but I don't understand why you need fixed point
"native" to the language. In C++, surely you wouldn't do the
application logic using explicit integers; you would write your own
fixed-point class and use that. I.e., instead of
 
int64_t euro = 1000; // have to remember that there is a scale of 1000
int64_t my_balance = 10860;
my_balance += 30 * euro;
std::cout << "My balance is now " << my_balance / euro << "."
<< std::setfill('0') << std::setw(3) << my_balance % euro;
// (needs <iostream>, <iomanip> and <cstdint>)
 
you'd do
 
#include "fpmath.hpp"
 
fp64 my_balance = "10.86";
my_balance += 30;
std::cout << "My balance is now " << my_balance << " €";
 
 
where fpmath.hpp is written once and for all and used throughout the
program.
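 
For illustration, here is a minimal sketch of what such an fpmath.hpp might
contain; the fp64 name is taken from the example above, but the fixed
3-decimal scale, the string parsing and the single operator shown are
assumptions of this sketch, not a finished design:
 
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <iomanip>
 
// Fixed-point value with 3 implied decimal places, stored in an int64_t.
// Rounding policy, overflow checks and the full operator set are omitted.
class fp64 {
    static constexpr std::int64_t scale = 1000; // 3 decimal digits
    std::int64_t raw = 0;                       // value * scale
public:
    fp64(const char* s) {                       // parse e.g. "10.86" exactly
        bool neg = (*s == '-'); if (neg) ++s;
        std::int64_t whole = 0;
        while (*s >= '0' && *s <= '9') whole = whole * 10 + (*s++ - '0');
        std::int64_t frac = 0, f = scale;
        if (*s == '.') {
            ++s;
            while (*s >= '0' && *s <= '9' && f > 1) {
                f /= 10;
                frac += (*s++ - '0') * f;
            }
        }
        raw = whole * scale + frac;
        if (neg) raw = -raw;
    }
    fp64& operator+=(std::int64_t whole) { raw += whole * scale; return *this; }
    friend std::ostream& operator<<(std::ostream& os, const fp64& v) {
        return os << v.raw / scale << '.'
                  << std::setfill('0') << std::setw(3) << std::llabs(v.raw % scale);
    }
};
 
int main() {
    fp64 my_balance = "10.86";
    my_balance += 30;
    std::cout << "My balance is now " << my_balance << " EUR\n"; // prints 40.860
}
 
A real header would of course add the remaining arithmetic operators, a
rounding policy and overflow handling, but the point is that none of this
needs language support.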
 
Christian
Rosario19 <Ros@invalid.invalid>: May 05 11:59AM +0200

On Fri, 4 May 2018 20:30:37 +0200, Dombo wrote:
 
>I leave it as an exercise for you to figure out how many bits would be
>required to do the same thing with fixed point numbers without losing
>accuracy compared to the floating point implementation.
 
you forgot to say:
 
1) what range is considered a correct result, or something reflecting that
2) the voltage value, or more than one voltage value
3) the result from the C++ compiler
 
In Axiom with 200 digits of precision, defining
 
f(voltage)==h/sqrt(e*voltage*m*(e*voltage/(m*c*c)+2.0))
 
and evaluating it for voltage=2.3 gives:
 
(19) -> f 2.3
Compiling function f with type Float -> Float
(19)
0.8086804205 7457036769 1013364305 2562414446 4716385376 9984367043
5205653260 0350183091 3518975795 1432982356 8746726538 0519062259
4038078110 9262350697 2465422215 3892230230 3051688329 3822066680
4248641341 6150608215 E -9
 
Because c is 2.99e8 [and c*c is no better, around 1e16 it seems to me],
and not something like 6.6e-20 as the other values are, I would say that
the C++ IEEE floating-point result ***could be meaningless*** or affected
by error in many digits.
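 
For comparison, here is a small sketch of the same expression evaluated in
plain IEEE double precision (the constants below are my own rounded values,
not taken from the Axiom session), so the digits can be checked against the
output above:
 
#include <cmath>
#include <cstdio>
 
int main() {
    const double h = 6.62607015e-34;    // Planck constant, J*s
    const double e = 1.602176634e-19;   // elementary charge, C
    const double m = 9.1093837015e-31;  // electron mass, kg
    const double c = 2.99792458e8;      // speed of light, m/s
    const double voltage = 2.3;
 
    // Same formula as the Axiom definition above.
    const double f = h / std::sqrt(e * voltage * m * (e * voltage / (m * c * c) + 2.0));
 
    // Print all the digits a double carries (about 15-16 significant decimal
    // digits), to compare with the Axiom value.
    std::printf("%.17e\n", f); // roughly 8.09e-10
}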
 
I'm for fixed-point floats.
