Thursday, January 31, 2019

Digest for comp.lang.c++@googlegroups.com - 21 updates in 3 topics

Bart <bc@freeuk.com>: Jan 30 11:37PM

On 30/01/2019 22:18, Melzzzzz wrote:
 
>> void f(){g("two"); g(&f);}
>> void g(float x){};
 
> No. You don't have type check. I meant no need for prototypes.
 
Which you get with out of order declarations. I solved this long ago in
my own languages, and wouldn't be able to do without the feature now.
 
However, while it was easy to solve within one module (it just needs an
extra pass compared with C or C++), dealing with mutually imported
modules is another matter.
 
>> has modules:
> This problem is solved in other languages. You either parse all library
> files as a single file, or ....
 
My approach [in my language] works when all source files comprising the
program unit (single executable or library file) are compiled at the
same time, sharing a global symbol table (the output is the executable).
 
Parsing is performed across all modules before proceeding to the next
stage, which is resolving names. And that process can now cross between
modules because everything is in memory.
 
(For external libraries it still relies on manually created files of
declarations, necessary anyway as most will be in a different language.)
 
There are still a number of problems, one of which is this: with a
purely hierarchical import structure, it is easy to construct a
load-order for the modules. That means that when modules contain
initialisation routines, all those can automatically be called in the
right order.
 
But with circular imports, that load-order is not determinable. If A
imports B, and B imports A, whose initialisation routine do you call
first? Each may require the other to have been set up already. You have
to sort it out manually.
 
--
bart
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 30 10:19PM

> "make" (or ninja, or whatever) will handle it fine. If you have enough
> modules for the build speed and parallel compilation to matter, there
> will be plenty of opportunity for parallelising.
 
No problems with make. Compilers must accommodate.
 
--
press any key to continue or any other to quit...
Paavo Helde <myfirstname@osa.pri.ee>: Jan 31 08:14AM +0200

On 31.01.2019 1:37, Bart wrote:
> my own languages, and wouldn't be able to do without the feature now.
 
> However, while it was easy to solve within one module (it just needs an
> extra pass compared with C or C++)
 
C++ is perfectly capable of doing the needed extra pass by itself, it
just needs a little syntax sugar:
 
struct mynamespace {
    static void f(double x) { g(x - 1); }
    static void g(float x) { if (x > 0) { f(x); } }
};

int main() {
    mynamespace::f(100);
}
David Brown <david.brown@hesbynett.no>: Jan 31 08:23AM +0100

On 31/01/2019 00:37, Bart wrote:
 
>> No. You don't have type check. I meant no need for prototypes.
 
> Which you get with out of order declarations. I solved this long ago in
> my own languages, and wouldn't be able to do without the feature now.
 
C, C++ and many other languages support declarations (or "prototypes",
or "forward declarations" - the terminology varies by language) before
definitions. That solves any circular dependency issues within a module
quite happily. Co-recursive functions are a rarity in code.
 
Some people like to organise their files "backwards" - they like to
define a function first, then further down in the file they define the
functions that it calls. That is fine, but will mean you need more
separate prototype declarations in C. For people who like that code
arrangement, C's methods are a little inconvenient. (As Paavo points
out, C++ no longer needs that if you feel it is an issue.)
 
 
> However, while it was easy to solve within one module (it just needs an
> extra pass compared with C or C++), dealing with mutually imported
> modules is another matter.
 
Ordering is not a problem in the slightest for modules. Mutually
dependent imports (circular imports) are an issue, but these can almost
always be avoided in your code structuring.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jan 31 06:47AM -0500

On 1/31/19 02:23, David Brown wrote:
...
> C, C++ and many other languages support declarations (or "prototypes",
> or "forward declarations" - the terminology varies by language) before
> definitions.
 
While the terminology does vary by language, neither C nor C++ use the
term "prototype" to make such a distinction. Do you know of some other
language where it is used for that purpose? In C and C++, a prototype
can occur in either a forward declaration or in the declaration that
appears at the start of a function definition. In C, a forward
declaration need not be a prototype, and the same is true of the
declaration which appears at the start of a function definition. "forward
declaration" and "defining declaration" is a distinction that's
completely independent of the distinction between "prototyped
declaration" and "non-prototyped declaration" (with the latter appearing
only in C).
Bart <bc@freeuk.com>: Jan 31 12:06PM

On 31/01/2019 07:23, David Brown wrote:
> or "forward declarations" - the terminology varies by language) before
> definitions.  That solves any circular dependency issues within a module
> quite happily.
 
That doesn't solve it at all. Actually it is /part/ of the problem we're
trying to solve! And sometimes you can't neatly get around it:
 
enum {a,b,c=y,d};
enum {w=b,x,y,z};
 
You can't reorder these. And you can't split them up if a,b,c,d and
w,x,y,z belong together, and you still want to rely on auto-increment.
But it's easy to see that a,b,c,d should be 0,1,3,4 and w,x,y,z is
1,2,3,4 (see sig).
 
Any 'solutions' will spoil the code.
 
> Co-recursive functions are a rarity in code.
 
Co-recursion has some technical meaning which I'm not sure is the one
that is relevant here. If you mean that A() calls B() which calls A(),
then I write such code all the time.
 
> separate prototype declarations in C.  For people who like that code
> arrangement, C's methods are a little inconvenient.  (As Paavo points
> out, C++ no longer needs that if you feel it is an issue.)
 
With an extra construct? Having this natively possible anywhere in a
language, without having to worry about such things at all, is more
desirable:
 
E readexpr(void) { ... return readterm();}
E readterm(void) { ... return readexpr();}
 
 
> Mutually
> dependent imports (circular imports) are an issue, but these can almost
> always be avoided in your code structuring.
 
I've had experience of this and often found it difficult to impossible.
99% of functions would fit into the hierarchy, but there were always odd
ones that didn't.
 
Trying to extract things out usually turned into a can of worms, and you
ended up with obviously contrived code, ruining the tidy structure of
your project.
 
(My solutions tended to use ad hoc manual declarations, circumventing
the module system, but the compiler could not check that my manual
declaration matched the actual definition of a function.)
 
Again, this itself turns into the problem that needs to be fixed. You
should be able to just write things in the most natural manner. If it
makes sense to the person reading or writing the code, it should make
sense to a compiler.
 
--
bart
 
enum (a=0,b,c=y,d) # usually 1-based
enum (w=b,x,y,z)
 
println a,b,c,d, w,x,y,z
 
 
Output:
 
0 1 3 4 1 2 3 4
 
This however doesn't work:
 
enum (a=b, b=a)
"Öö Tiib" <ootiib@hot.ee>: Jan 31 06:42AM -0800

On Thursday, 31 January 2019 13:47:41 UTC+2, James Kuyper wrote:
> completely independent of the distinction between "prototyped
> declaration" and "non-prototyped declaration" (with the latter appearing
> only in C).
 
In OOP languages with dynamic types, "prototypes" typically denote an
alternative to "classes". There, instead of inheritance we extend
objects directly, and instead of instantiating classes we clone
prototype objects. That has not much to do with C or C++; the closest
such language is JavaScript.
David Brown <david.brown@hesbynett.no>: Jan 31 04:38PM +0100

On 31/01/2019 12:47, James Kuyper wrote:
> completely independent of the distinction between "prototyped
> declaration" and "non-prototyped declaration" (with the latter appearing
> only in C).
 
Fair enough - that is a lot more precise than I was (and, I think more
precise than necessary). I referred to "prototypes" merely because some
people like to give a function prototype as a forward declaration, and
then omit the function parameters (if any) later in the definition.
David Brown <david.brown@hesbynett.no>: Jan 31 04:50PM +0100

On 31/01/2019 13:06, Bart wrote:
> trying to solve! And sometimes you can't neatly get around it:
 
>   enum {a,b,c=y,d};
>   enum {w=b,x,y,z};
 
It solves it for functions and types. It has never occurred to me that
anyone would have need of re-ordering enums like that. I think it could
be hard to get any solution here that does not involve multiple rounds
or general equation solving, which is hardly appropriate for a language
like C or C++. (You can do it in metafont or metapost, but those are
very different kinds of language.)
 
 
> Co-recursion has some technical meaning which I'm not sure is the one
> that is relevant here. If you mean that A() calls B() which calls A(),
> then I write such code all the time.
 
Mutual recursion is the term I wanted. There can be cases where this is
useful, in handling recursive data structures, but I think if you are
writing it a lot you have questionable coding structures. (Not
necessarily wrong, merely questionable.) However, the point is that it is
only with such mutual recursion that you need to have forward
declarations for your functions - and it is not a hardship to use this
on occasion. It would be annoying to have to use forward declarations
for a large proportion of functions, but not for just a few.
 
> desirable:
 
>  E readexpr(void) { ... return readterm();}
>  E readterm(void) { ... return readexpr();}
 
I quite agree that it would be convenient to be able to do this without
forward declarations, and I can well understand allowing it in a new
language. I just don't think it is a big issue - like many people, I
have programmed in C (and other languages that need forward
declarations) for many years without feeling this to be a concern.
 
 
>> Mutually dependent imports (circular imports) are an issue, but these
>> can almost always be avoided in your code structuring.
 
> I've had experience of this and often found it difficult to impossible.
 
Maybe you have a fundamentally different way of organising code than I
do. (I know we have quite different types of code). It simply doesn't
occur in my coding - either embedded C, C++ and assembly programming, or
PC programming in Python, Pascal, etc.
 
jameskuyper@alumni.caltech.edu: Jan 31 08:02AM -0800

On Thursday, January 31, 2019 at 10:38:30 AM UTC-5, David Brown wrote:
> precise than necessary). I referred to "prototypes" merely because some
> people like to give a function prototype as a forward declaration, and
> then omit the function parameters (if any) later in the definition.
 
Could you demonstrate the technique you're talking about? I suspect that you're not describing it correctly. If function parameters are present in a forward declaration of a function, but omitted from the corresponding function definition, then:
 
1. In C++ you've defined an overload of the function with a different signature from the one used in the forward declaration - calling the function with arguments that match the forward declaration would result in a search (which would presumably fail) for a different overload.
 
2. In C, you've defined a function that's incompatible with the forward declaration. That's a constraint violation if they have the same scope (6.7p3), and the behavior is undefined (6.2.7p2), regardless of scope.
Bart <bc@freeuk.com>: Jan 31 08:42PM

On 31/01/2019 15:50, David Brown wrote:
 
> Mutual recursion is the term I wanted. There can be cases where this is
> useful, in handling recursive data structures, but I think if you are
> writing it a lot you have questionable coding structures.
 
No it's not questionable. Most of the big programs I've written have
involved recursive data. (CAD systems, windows-based GUIs, compilers and
interpreters. Even file systems are recursive.) Obviously yours don't.
 
> (Not necessarily wrong, merely questionable.) However, it is only with
> such mutual recursion that you need to have forward declarations for
> your functions - and it is not a hardship to use this on occasion. It
> would be annoying to have to use forward declarations for a large
> proportion of functions, but not for just a few.
 
I had to do that for years. I can tell you that things are much sweeter
if you don't have to bother. (It's quite a problem too if you have one
language which may or may not need forward declarations that has to be
expressed in one which has its own rules for them.)
 
> language. I just don't think it is a big issue - like many people, I
> have programmed in C (and other languages that need forward
> declarations) for many years without feeling this to be a concern.
 
I like to add functions in any order. I shouldn't have to consider that
the functions one may call are always defined earlier, or that the
functions that might call this one must be defined later. How do you
even find that spot in a busy module?
 
And you might want to move things around, or copy code between files.
 
 
 
>> I've had experience of this and often found it difficult to impossible.
 
> Maybe you have a fundamentally different way of organising code than I
> do.
 
Let's take one project of mine (not C code obviously as it uses
modules), and look at two modules mc_name and mc_type (two successive
passes of a compiler). mc_name has this at the top:
 
import mc_type
 
And mc_type has this:
 
import mc_name
 
Mutual imports. According to you, I should be able to trivially fix
that. Those modules ought to be independent, but sometimes things aren't
perfect.
 
mc_name calls one function in mc_type, but I can't just extract that
into a third module, as it has numerous connections to the rest of the
module. And it's the same story the other way. And unless I have /two/
extra new modules, I would be mixing up the gubbins from what are
supposed to be two unrelated modules, and exposing even more stuff that
is supposed to be kept private to each.
 
One more project where the two modules are cc_lex and cc_lib (a C
compiler). And it's the same thing where rearranging things so that the
modules are a pure hierarchy would mean tearing the project apart and
having a weird module structure that now needs an explanation.
 
(As I said, I solved this in the past by just adding a few manual
function declarations to cut across module demarcations. Now it is
easier for them just to import each other. It works fine.)
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 31 09:07PM


> No it's not questionable. Most of the big programs I've written have
> involved recursive data. (CAD systems, windows-based GUIs, compilers and
> interpreters. Even file systems are recursive.) Obviously yours don't.
 
A list is a recursive data structure, as are trees and any graph for
that matter... are you sure you are arguing the right thing?
 
--
press any key to continue or any other to quit...
Thomas Koenig <tkoenig@netcologne.de>: Jan 31 09:17PM

>>    https://vector-of-bool.github.io/2019/01/27/modules-doa.html
 
>> Cool, the author compared C++ to Fortran !
 
> Not in a way that said something good about C++ modules. :(
 
Not that there are a lot of good things to be said about the C++
module proposal. The question of which module is defined
in which file (well, you could put it into a Makefile when you
write it) is only one aspect. Another aspect is the question
of how to handle #defines which are in force when a module is
used...
Bart <bc@freeuk.com>: Jan 31 09:42PM

On 31/01/2019 21:07, Melzzzzz wrote:
>> interpreters. Even file systems are recursive.) Obviously yours don't.
 
> A list is a recursive data structure, as are trees and any graph for
> that matter... are you sure you are arguing the right thing?
 
In the sorts of languages we're talking about, you would probably use
iteration for lists and recursion for anything that looks like a tree
because those would be the most practical choices.
 
But if you are going to add lists to the set of recursive data
structures, then that would just increase the number of programs that
use mutual recursion.
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 31 10:33PM


> But if you are going to add lists to the set of recursive data
> structures, then that would just increase the number of programs that
> use mutual recursion.
 
Yeah, a list is a first-class citizen in recursive languages.
 
--
press any key to continue or any other to quit...
"Öö Tiib" <ootiib@hot.ee>: Jan 31 01:34AM -0800

On Wednesday, 30 January 2019 19:52:01 UTC+2, Daniel wrote:
> pedantry and usefulness, yet those decisions make the difference between a
> product that is accepted, and one that is not. Your comments appreciated, as
> always.
 
I forgot to mention that if you want to support a wide variety of texts
in an interface like ...:

auto s = j.as<std::basic_string<AnyChar,AnyTraits,AnyAllocator>>();

... then that might be too ambitiously configurable at the call site. What
it has to do may be very different depending on the requested return
type, on how the type of j was configured, and on platform and
compiler defines. So an interface like ...
 
auto a = j.to_utf8();
auto b = j.to_ucs4();
auto c = j.to_nsstring();
auto d = j.to_qstring();
auto e = j.to_bstr();
 
... can be extended piece by piece, on a need or feature-request
basis, and can even refuse to compile on a given platform or with a given
set of switches. Say, NSString makes sense only on OSX/iOS, _bstr_t
only on Windows, and QString only when Qt is used.
That might simplify the issue for all of the writers, testers, profilers and
callers of those, and it will help to concentrate on those interested
and provide value to them faster.
Daniel <danielaparker@gmail.com>: Jan 31 07:06AM -0800

On Thursday, January 31, 2019 at 4:34:14 AM UTC-5, Öö Tiib wrote:
 
> ... can be extended piece by piece and on need or feature request
> basis
 
Good advice, generally.
 
Anyway, I've decided not to continue with the experimental version (**). I've concluded that the experimental version would benefit no users, would annoy some, and would introduce the possibility of encoding exceptions everywhere instead of just at the endpoints.
 
Daniel
Vir Campestris <vir.campestris@invalid.invalid>: Jan 31 09:46PM

On 28/01/2019 15:24, Scott Lurndal wrote:
> of the hardware and test software. No spinning rust can provide those datarates; you have
> to go to high end NVMe SSD setups. Do include measurements of the system overhead
> when compared with using lower-overhead I/O mechanisms like mmap or O_DIRECT.
 
Scott, if his speeds are unbelievably fast for his hardware - maybe he
is running out of cache. But so what? It means he isn't bound by the
hardware, and the figures are reflecting his I/O overheads.
 
Andy
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jan 31 02:07AM +0100

On 30.01.2019 17:59, Richard wrote:
>> through the odd spammer's comment, but not so much that it's a problem.
 
> I've had very good results rejecting spam through wordpress.com, so it
> is possible to configure wordpress to be very good at rejecting spam.
 
Oh. I didn't even know one could do that.
 
Re the code I posted: it was ugly, just the first thing that worked,
unreasonably half-way between two reasonable solutions. For consecutive
enum values it would be simpler and more efficient to generate a
`switch`. And for the case of wanting two-way conversion or ability to
iterate over value/string pairs, the collection of value/string pairs
should be provided on its own and not just a local static in a function.
 
The OP has still not turned on commentary, but has posted summaries and
one full quote of commentary he's received on Reddit. That approach did
/not/ work out well. I mentioned, in the posting up-thread, that lack of
commentary function on the blog could lead to
 
> incorrect stuff will not be corrected, improvements and alternatives
> will not be available
 
and just that has happened. :(
 
In particular, the January 27 posting "shrink_to_fit that doesn't suck"
still provides only solutions (to a non-existing problem) that do suck,
and the January 30 posting "What happens when we "new"" incorrectly and
misleadingly states that when one writes
 
T* t1 = new T;
 
the compiler allegedly translates that to (really, it does NOT force use
of the global allocation function)
 
    T* t2 = (T*)::operator new(sizeof(T));
    try
    {
        new (t2) T;
    }
    catch(...)
    {
        ::operator delete(t2);
        throw;
    }
 
The incorrect belief in that correspondence was once the basis of an
infamous bug in Microsoft's MFC framework, where one could get a memory
leak that manifested only in debug builds.
 
So, it seems we have a new C++ blog that provides some misleading
disinformation, sort of like Herb Schildt, with commentary turned off. :(
 
 
Cheers!,
 
- Alf
mvorbrodt@gmail.com: Jan 30 06:00PM -0800

On Wednesday, January 30, 2019 at 8:08:11 PM UTC-5, Alf P. Steinbach wrote:
> dis-information, sort of like Herb Schildt, with commentary turned off. :(
 
> Cheers!,
 
> - Alf
 
The comments are open on the blog. Please post corrections. If I put misleading information there, I want to learn and correct it.
mvorbrodt@gmail.com: Jan 30 06:01PM -0800

On Tuesday, January 29, 2019 at 10:23:53 PM UTC-5, Alf P. Steinbach wrote:
> --------------------------------------------------------------------------
 
> Cheers!,
 
> - Alf
 
The comments are open on the blog. Please post corrections. If I put misleading information there, I want to learn and correct it.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Digest for comp.programming.threads@googlegroups.com - 4 updates in 3 topics

Elephant Man <conanospamic@gmail.com>: Jan 31 08:15AM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 31 08:15AM

Cancellation article issued by a JNTP moderator via Nemo.
Horizon68 <horizon@horizon.com>: Jan 30 12:44PM -0800

Hello,


Read again; I corrected some last typos:
 
About some of my scalable algorithms..
 
As you have noticed, I am a white arab who has invented many scalable
algorithms and their implementations, and two of my interesting scalable
algorithms are the following:

LW_Asym_RWLockX is a lightweight scalable Asymmetric Reader-Writer
Mutex that uses a technique that looks like Seqlock, but without looping
on the reader side as Seqlock does, which has permitted the reader side
to be costless. It is FIFO fair on the writer side and on the reader
side, it is of course starvation-free, and it does spin-wait.

Asym_RWLockX is likewise a lightweight scalable Asymmetric Reader-Writer
Mutex using the same Seqlock-like technique with a costless reader side,
FIFO fair on both the writer and reader sides and starvation-free, but
it does not spin-wait: it waits on my SemaMonitor, so it is energy
efficient.

And I am using the Windows FlushProcessWriteBuffers for those two
atomic-free "highly asymmetric synchronizations"; this greatly increases
read-side speed and scalability.
 
You can download the C++ and Delphi and FreePascal implementations of my
scalable algorithms above from:
 
The C++ implementation is inside my C++ synchronization objects library
here:
 
https://sites.google.com/site/scalable68/c-synchronization-objects-library
 
And the Delphi and FreePascal implementations are here:
 
https://sites.google.com/site/scalable68/scalable-rwlock
 
 
Here is also why my scalable algorithms above are useful:
 
Based on Intel and Micron's claim, 3D Xpoint is 1000x faster than NAND
and 10x higher density than conventional memory (assume DRAM here). So
latency of PCIe NAND is about 100us, and 1000x faster 3D Xpoint gives
100ns, which is 2 times slower than DRAM's speed of 50ns, so this makes
my scalable RWLocks very useful for 3D Xpoint, so my scalable RWLocks
are for example very useful for Optane SSD 900P that uses 3D Xpoint and
thus they are very useful for such SSDs that use 3D XPoint and that are
used in a "scalable" RAID manner.
 
Read about Intel Optane SSD 900P Review: 3D XPoint Unleashed
 
https://www.tomshardware.co.uk/intel-optane-ssd-900p-3d-xpoint,review-34076.html
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.

Wednesday, January 30, 2019

Digest for comp.lang.c++@googlegroups.com - 8 updates in 2 topics

David Brown <david.brown@hesbynett.no>: Jan 30 09:58PM +0100

On 30/01/2019 20:39, Melzzzzz wrote:
>> to use a single new C++ feature.
 
> Build tools have to change for sure. Also, having modules makes
> .h files unnecessary.
 
I can't see why common build tools like "make" would have any problems here.
 
Sure, you might not be able to compile everything in parallel - but
"make" (or ninja, or whatever) will handle it fine. If you have enough
modules for the build speed and parallel compilation to matter, there
will be plenty of opportunity for parallelising.
David Brown <david.brown@hesbynett.no>: Jan 30 10:00PM +0100

On 30/01/2019 18:45, Bart wrote:
> not having a module system, and having to construct the headers - what
> are effectively the export files of the proposed scheme (I don't know
> how it works either) - by hand.
 
It is almost always possible to break your circular dependencies by
using a third module that contains the parts needed by both A and B. If
C++ modules can be made in a way that allows circular dependencies,
great. If not, it's certainly not the end of the module idea.
scott@slp53.sl.home (Scott Lurndal): Jan 30 09:33PM

>using a third module that contains the parts needed by both A and B. If
>C++ modules can be made in a way that allows circular dependencies,
>great. If not, it's certainly not the end of the module idea.
 
Back in the late 70's/early 80's, the Burroughs MCP/IX operating system
had reached a point where it needed to be re-written (being fifteen
years old at that point, and hardware had gotten much more capable in
the intervening years). The MCP at that point was written in
assembler language, some of it over fifteen years old; to write a new,
modern operating system capable of managing the greatly expanded hardware
resources (amount of memory, new MMU, number of processors, I/O controllers, devices)
in assembler would require a great deal of labor, be prone to error and
difficult to maintain.
 
So, a new language was proposed by the Languages department to use for the
Operating system. This language, based very roughly on Modula, supported
independently compiled modules using a centralized interface definition (called
a MID - Module Interface Definition). Sort of a master include file, if you
will. The MID needed to be compiled _before_ any module that used it; much
like the proposed C++ Modules capability. The new language was called
SPRITE and the new operating system (MCP/VS2.0) was written -mostly- in that
language (with portions of the prior MCP living on as assembler files linked
in with the SPRITE ICMs (Independently Compiled Modules)).
 
In practice, while the concept worked, the MID quickly became a bottleneck
for the developers - every interface change between modules (and early in
the development process, this occurred very frequently) needed a MID change.
 
Now, source code control systems at the time were uncommon (SCCS became
available for Unix systems about then, RCS came a bit later on BSD systems);
the developers used a Master + patch approach; However, most changes required
patching the MID as well which required careful coordination when the patches
were "checked in" (eventually consolidated into the single master file for
each module) because at the time (the end of the punched card era), each source
code line required a unique sequence number (typically in columns 72-80, but
dependent upon the language); making sure the patches being merged didn't
overlap was a time-consuming process (later, partially automated as part of
the resequencer that needed to be run periodically to expand the gaps between
sequence numbers for future changes).
 
That experience soured me somewhat on the idea of the type of interface
definition proposed by the C++ Modules proposal.
 
As one who routinely builds a 1.5+ million line C++ application (most of that
in header files), I don't see compile time as a problem (full builds take
a few minutes, typical incremental builds during development take a few seconds);
nor do I believe that preventing ODR violations is valuable enough to
justify adding Modules TS or P0947R0(Atom) to C++.
 
YMMV.
 
 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0986r0.html
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 30 10:18PM

> compilers from accepting code like this (albeit with warnings):
 
> void f(){g("two"); g(&f);}
> void g(float x){};
 
No. You don't have type checking. I meant no need for prototypes.
 
 
> The problem I mean is illustrated here, with a fantasy version of C that
> has modules:
This problem is solved in other languages. You either parse all library
files as single file, or ....
 
--
press any key to continue or any other to quit...
Melzzzzz <Melzzzzz@zzzzz.com>: Jan 30 10:19PM

> "make" (or ninja, or whatever) will handle it fine. If you have enough
> modules for the build speed and parallel compilation to matter, there
> will be plenty of opportunity for parallelising.
 
No problems with make. Compilers must accommodate.
 
--
press any key to continue or any other to quit...
David Brown <david.brown@hesbynett.no>: Jan 31 12:04AM +0100

On 30/01/2019 23:19, Melzzzzz wrote:
>> modules for the build speed and parallel compilation to matter, there
>> will be plenty of opportunity for parallelising.
 
> No problems with make. Compilers must accomodate.
 
Ah, when you said "Build tools have to change for sure", I thought you
meant tools like "make". Obviously compilers have to change to support
the new feature!
Daniel <danielaparker@gmail.com>: Jan 30 12:50PM -0800

On Wednesday, January 30, 2019 at 2:48:16 PM UTC-5, Mr Flibble wrote:
 
> I see you made basic_json template in November last year; you been copying
> my JSON library that also does that?
 
jsoncons has had basic_json template since Feb 2014 :-)
 
I think you're referring to the file named basic_json.hpp, earlier it was in json.hpp.
 
Best regards,
Daniel
Daniel <danielaparker@gmail.com>: Jan 30 01:11PM -0800

On Wednesday, January 30, 2019 at 11:57:58 AM UTC-5, Richard wrote:
 
> >or
 
> >json j = parse<json>(source);
 
> Not this.
 
Thanks for the feedback, it's appreciated.
 
Daniel
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Tuesday, January 29, 2019

Digest for comp.programming.threads@googlegroups.com - 6 updates in 4 topics

Elephant Man <conanospamic@gmail.com>: Jan 28 08:19PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 28 08:19PM

Cancellation article issued by a JNTP moderator via Nemo.
Elephant Man <conanospamic@gmail.com>: Jan 28 08:19PM

Cancellation article issued by a JNTP moderator via Nemo.
Horizon68 <horizon@horizon.com>: Jan 28 10:33AM -0800

Hello..
 
 
About Energy efficiency..
 
Energy efficiency isn't just a hardware problem. Your programming
language choices can have serious effects on the efficiency of your
energy consumption. We dive deep into what makes a programming language
energy efficient.
 
As the researchers discovered, the CPU-based energy consumption always
represents the majority of the energy consumed.
 
What Pereira et al. found wasn't entirely surprising: speed does not
always equate to energy efficiency. Compiled languages like C, C++, Rust,
and Ada ranked as some of the most energy efficient languages out there,
and Java and FreePascal also rank well for energy efficiency.
 
Read more here:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
RAM is still expensive and slow relative to CPUs.
 
And "memory" usage efficiency is important for mobile devices.
 
So the Delphi and FreePascal compilers are also still "useful" for mobile
devices, because Delphi and FreePascal rank well if you are considering
time and memory, or energy and memory, together. The following Pascal
benchmark was done with FreePascal, and it shows that C, Go and Pascal
do rather better if you're ranking languages on time and memory or
energy and memory.
 
Read again here to notice it:
 
https://jaxenter.com/energy-efficient-programming-languages-137264.html
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jan 28 10:23AM -0800

Hello,
 
 
About implicit type conversions..
 
The more implicit type conversions a language supports the weaker its
type system is said to be. C++ supports more implicit conversions than
Ada or Delphi. Implicit conversions allow the compiler to silently
change types, especially in parameters to yet another function call -
for example automatically converting an int into some other object type.
If you accidentally pass an int into that parameter the compiler will
"helpfully" silently create the temporary for you, leaving you perplexed
when things don't work right. Sure we can all say "oh, I'll never make
that mistake", but it only takes one time debugging for hours before one
starts thinking maybe having the compiler tell you about those
conversions is a good idea.
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jan 28 09:34AM -0800

Hello,
 
 
Integer Computation Efficiency Comparisons Between Modern Compilers –
Case Study – PI Computation (Delphi, Java, C, C++)
 
Read more here:
 
https://helloacm.com/integer-computation-efficiency-comparisons-between-modern-compilers-case-study-pi-computation-delphi-java-c-c/
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.

Monday, January 28, 2019

Digest for comp.lang.c++@googlegroups.com - 6 updates in 4 topics

JiiPee <no@notvalid.com>: Jan 28 11:23PM

On 28/01/2019 10:29, Öö Tiib wrote:
 
> }
 
> Personally to me such whole logic feels messed up, regardless if it is
> enum or struct.
 
 
Thanks, looks promising, I'll check that.
 
How else could it be solved? If you have a parent class and you want to
store some kind of name there, how would you store it so that the name can
be used via inherited class objects?
Daniel <danielaparker@gmail.com>: Jan 28 02:28PM -0800

On Monday, January 28, 2019 at 4:27:50 PM UTC-5, Mr Flibble wrote:
 
> libraries as possible. I have removed most of the wchar_t from "neolib"
> and only have one instance of wchar_t in "neoGFX" (which is much larger
> than "neolib").
 
I note that CharT is a template parameter in your
basic_json_value, hopefully everything will work as expected if
someone plugs in a wchar_t, because char and wchar_t are probably
the only things that anybody will plug into it.
 
I believe RapidJSON supports templated encoding, and jsoncons has a
CharT parameter, but I don't think any of the other libraries do.
nlohmann doesn't, and its absence certainly hasn't hurt its
popularity. In jsoncons, the assumption is that char is UTF-8,
char16_t is UTF-16, char32_t is UTF-32, and wchar_t is UTF-16 if
sizeof(wchar_t) is 2 and UTF-32 if sizeof(wchar_t) is 4.
Unicode validation is performed based on those
assumptions. In any case, I wouldn't expect anyone to specialize
with anything other than char and wchar_t.
 
In retrospect, I think that having a CharT template parameter in
jsoncons may have been a mistake, and an approach more similar to
std::filesystem might have been better. It would, after all, be
relatively straightforward to hold text in the variant as UTF8,
and have templated accessor as<String> to return whatever
encoding the user wants as deduced from String::char_type,
performing the conversion from UTF8 on demand. Similarly with
stream operators.
 
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 11:00PM

On 28/01/2019 22:28, Daniel wrote:
> char16_t is UTF16, char32_t is UTF32, and wchar_t is UTF16 if
> sizeof(wchar_t) is 2 chars, and UTF32 if sizeof(wchar_t) is 4
> chars.
 
Is that UTF16BE or UTF16LE? In my JSON lib the whole point of the CharT
template parameter is that it matches the document text format, which
allows string objects to be created without performing an allocation (they
just point to the document text).
 
/Flibble
 
--
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
Vir Campestris <vir.campestris@invalid.invalid>: Jan 28 09:52PM

On 27/01/2019 20:54, Alf P. Steinbach wrote:
> conventions that they use in a social way: those who don't use that
> solution are not in the in-group and can be criticized and ridiculed.
> Ha, you're not using "always auto"! So it's your own fault mate!
 
I'm "almost never auto".
 
I don't see how code clarity can be improved by hiding the type of a
variable.
 
Andy
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 09:58PM

On 28/01/2019 21:52, Vir Campestris wrote:
 
> I'm "almost never auto".
 
> I don't see how code clarity can be improved by hiding the type of a
> variable.
 
Because most of the time you don't care what the type is so the type just
adds noise: auto is very easy on the eyes.
 
I blogged about this subject in 2014:
https://leighjohnston.wordpress.com/2014/10/13/new-c-auto-keyword-condsidered-awesome-not-harmful/
 
/Flibble
 
--
Daniel <danielaparker@gmail.com>: Jan 28 01:50PM -0800

On Monday, January 28, 2019 at 4:22:18 PM UTC-5, Paavo Helde wrote:
 
 
> These tests are single-threaded so the global locale dependencies
> probably do not play a great role; things might be different in a
> heavily multithreaded regime.
 
Appreciate you posting numbers, when I have time, I'll see if
I can replicate. Just to note, my 40x number was comparison to
David Gay's netlib code, which is much faster than the C
functions. Those are old benchmarks, too, probably using vs2010.
 
Do you happen to have an implementation of the C++2017
from_chars/to_chars for floating point? I would be very
interested in how those compare. I believe vs2017 versions still
only support integer.
 
I understand that compiler vendors would be very cautious
and conservative in introducing any changes to floating point
algorithms, as "improvements" could have huge impacts on real
world validation suites. But hopefully new functions with new
specifications will allow incorporation of new research.
 
Daniel

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Daniel <danielaparker@gmail.com>: Jan 28 10:05AM -0800

Suppose we have some classes,
 
struct A
{
    typedef char char_type;
};
 
struct B
{
    typedef wchar_t char_type;
};
 
and some functions that take instances of these classes along with a
string. Would you be inclined to write those functions like this:
 
// (*)
template <class T>
void f(const T& t, const std::basic_string<typename T::char_type>& s)
{}
 
or like this
 
// (**)
template <class T, class CharT>
typename std::enable_if<std::is_same<typename T::char_type,CharT>::value,void>::type
g(const T& t, const std::basic_string<CharT>& s)
{}
 
or some other way? Criteria are technical reasons and ease of documentation.
 
(In this construction, it is ruled out that A and B can be
specializations of a common class templated on CharT.)
 
Thanks,
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 06:13PM

On 28/01/2019 18:05, Daniel wrote:
> {
> typedef wchar_t char_type;
> };
 
Don't use wchar_t as it isn't portable so is next to useless. We have
Unicode now.
 
/Flibble
 
--
Daniel <danielaparker@gmail.com>: Jan 28 11:03AM -0800

On Monday, January 28, 2019 at 1:14:08 PM UTC-5, Mr Flibble wrote:
> > };
 
> Don't use wchar_t as it isn't portable so is next to useless. We have
> Unicode now.
 
In practice, char everywhere and 16 bit wchar_t on Windows only
are the only character types that currently matter for a library
writer that aspires to having users. You may regret that wchar_t
exists, but there is a substantial amount of code that has it. In
any case, feel free to substitute char16_t or char32_t, or
another type altogether, the question is the same.
 
Daniel
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 09:27PM

On 28/01/2019 19:03, Daniel wrote:
> exists, but there is a substantial amount of code that has it. In
> any case, feel free to substitute char16_t or char32_t, or
> another type altogether, the question is the same.
 
In practice, if you know what you are doing, a library writer that aspires
to having users will have as little OS specific (Windows) code in their
libraries as possible. I have removed most of the wchar_t from "neolib"
and only have one instance of wchar_t in "neoGFX" (which is much larger
than "neolib").
 
/Flibble
 
--
Manfred <invalid@add.invalid>: Jan 28 12:34AM +0100

On 1/27/19 10:33 PM, Daniel wrote:
> In the meantime I think it's fair to say that authors of accounting and portfolio management software have by and large abandoned C++ as their language of choice.
 
Bjarne Stroustrup is a Managing Director in the technology division of
Morgan Stanley.
I am pretty sure they use a lot of C++ in there.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jan 28 12:19AM

On 27/01/2019 21:33, Daniel wrote:
> In the meantime I think it's fair to say that authors of accounting and portfolio management software have by and large abandoned C++ as their language of choice.
 
Again with the negative vibes. Where do you get this anti-C++ stuff from?
Are you just making it up as you go along? Backup your assertions or they
can be dismissed with no further thought.
 
/Flibble
 
--
Daniel <danielaparker@gmail.com>: Jan 27 05:28PM -0800

On Sunday, January 27, 2019 at 6:34:55 PM UTC-5, Manfred wrote:
 
> Bjarne Stroustrup is a Managing Director in the technology division of
> Morgan Stanley.
> I am pretty sure they use a lot of C++ in there.
 
Retail or wholesale banking? I doubt they use C++ in retail for
accounting or client account management, which is the subject of
my last post. Retail is less complicated but bigger than wholesale. In
wholesale, I doubt they use C++ for account management either.
 
In wholesale, in quantitative modelling and risk, there are many valuation
models which are analogous to the heat equation in physics, and Monte
Carlo simulations of risk factors that are heavily compute intensive and
rely on massively parallel grid calculation. These do not use fixed
decimal arithmetic, as pennies are irrelevant for these purposes, and
happily get by with doubles. The major UK-based commercial product in this
space is written in C#, but does use some C++ for simulation. Internal
development in this space tends to be a mix of C# and C++.
 
Daniel
Robert Wessel <robertwessel2@yahoo.com>: Jan 28 12:18AM -0600

On Sun, 27 Jan 2019 21:48:44 +0100, "Alf P. Steinbach"
>to represent the foreign debt of the USA to the penny. It's right at the
>limit today. Next year, or in a few years, it will not suffice.
 
>Is this like a year-2000 problem or the C time problem coming up on us?
 
 
As a first order approximation, no one should be using float for
currency.
 
As for databases, any serious relational database supports "DECIMAL"
datatypes (the internal format is not relevant, and is often binary,
but the precision is specified as a number of decimal digits together
with a decimal scaling factor).
 
https://dev.mysql.com/doc/refman/8.0/en/fixed-point-types.html
 
Languages like Java and C# have support for similar datatypes. And
that's one reason they see heavy business use - they actually can
reasonably accommodate currency, without the programmer jumping
through hoops.
Daniel <danielaparker@gmail.com>: Jan 28 06:16AM -0800

On Sunday, January 27, 2019 at 12:27:10 PM UTC-5, Manfred wrote:
> dominant: that's where they make money. So if the market wants a Decimal
> type, they'll provide such a type with /some/ internal representation,
> it doesn't matter /which/ specific representation.
 
DBMS vendors wouldn't make any money at all if they expected users to
use their vendor-specific APIs, let alone vendor-specific types.
That's why, when new DBMS products come on the market, they generally
implement C drivers for standard or de facto standard APIs, such as JDBC or
ODBC, so users can use these databases through broadly supported interfaces.
In Java and C#, these interfaces are quite pleasant to use, and there is an
enormous amount of tooling available around them, both open source and
commercial. In C++, they are not.
 
In C++, there are open source APIs over the de facto standard ODBC,
such as nanodbc https://github.com/nanodbc/nanodbc/issues but they
are not as pleasant to use, partly because C++ doesn't have big integer,
big decimal, or date-time types. (Also because C++ doesn't have
introspection, but that's a separate matter.)
 
There are some very specialized and very expensive DBMS products where
people might build with their libraries and use their own API's, but
I believe that's quite rare. When I've used Netezza in C# or C++, for
example, I've used ODBC bindings, and I think most people do.
> representation of a native type: such representation is key to
> performance and efficiency, which is the drive of the language, unlike
> for commercial vendors.
 
Of course, a language that has C++ streams cannot be said to be all about
"performance and efficiency". It is remarkable that C++ introduced an
abstraction that appears to be so inherently slow, especially for
text/number conversions. If a language does not have good abstractions that
map into user space, the things that users build will be neither performant
nor efficient.
 
> hardware level, and anyway there is no common such representation among
> architectures, the language does not provide such a type as a native
> type.
 
No architectures provide anything that maps directly into C++ streams or
filesystem directory_entry either. There is conversion and abstraction into
a form that is more convenient for the user.
 
> In this context, choosing one representation would possibly be
> efficient for one architecture, but inefficient for another one or for a
> future one.
 
If a user needs, for example, to convert a floating point number into
a text representation with the fewest decimal digits after the dot, that's
not handled in hardware either, but C++ is going to do that (with to_chars).
C++ already does a lot of things that take you away from the hardware.
I don't think big integer, big decimal and date time are any different.
 
Daniel
scott@slp53.sl.home (Scott Lurndal): Jan 28 02:26PM

>> at the application or library level.
 
 
>It raises the question of what representation common database APIs in
>the financial world use.
 
They don't use binary floating point, as a rule. On the Z-series,
it's BCD (packed decimal).
scott@slp53.sl.home (Scott Lurndal): Jan 28 02:29PM

>accounting or client account management in retail, which is the subject of
>my last post. Retail is less complicated but bigger than wholesale. In
>wholesale, I doubt if they use C++ for account management either.
 
Citi, Morgan, Credit Suisse, etc. use C++ for BS modelling amongst other things.
Very little effort is needed for simple account management, and many still
use COBOL on Z-series for that. The bulk of software development in the big
houses is now aimed at internet, apps and modelling.
 
https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model
"Öö Tiib" <ootiib@hot.ee>: Jan 28 07:08AM -0800

On Monday, 28 January 2019 16:16:45 UTC+2, Daniel wrote:
> text/number conversions. If a language does not have good abstraction that
> map into user space, the things that users build will be neither performant
> nor efficient.
 
Can you illustrate that claim of slowness with code? Perhaps you do
something odd in it.
 
C++ streams usually read and write as fast as the hardware lets them.
For example, on this MacBook I happen to have at hand,
it is about 3 GB/s read and 2 GB/s write. The only thing that seems
crappy is that std::filesystem is not supported on it. But Apple has
always been unfriendly to developers, so no biggie.
scott@slp53.sl.home (Scott Lurndal): Jan 28 03:24PM


>Can you illustrate that claim of slowness with code? Perhaps you do
>something odd in it.
 
>C++ streams usually read and write how fast hardware lets these to read
 
That may, perhaps, under certain conditions, be true. However, the
overhead required by C++ streams (and/or stdio) takes cycles away from
other jobs on the system.
 
>and to write. For example on this macbook I happen to have under hand
>it is about 3 GB/s read and 2 GB/s write.
 
Provide the methodology used for these tests, including a description
of the hardware and test software. No spinning rust can provide those data
rates; you have to go to high-end NVMe SSD setups. Do include measurements
of the system overhead compared with lower-overhead I/O mechanisms like
mmap or O_DIRECT.
 
C++ streams, aside from the silliness of the syntax, have significant
run-time overhead (especially when sitting on top of stdio streams), and
that is leaving aside the kernel buffers into which the data is first read.
Daniel <danielaparker@gmail.com>: Jan 28 07:28AM -0800

On Monday, January 28, 2019 at 9:29:49 AM UTC-5, Scott Lurndal wrote:
 
> Citi, Morgan, Credit Suiesse, etc. use C++ for BS modelling amongst other
> things.
 
I don't think they use Black-Scholes for valuing trades anymore. The closed
form Black-Scholes model is simple and inexpensive to calculate, but doesn't
adapt to changing volatility surfaces. Most quant desks use iterative finite
difference methods that model price movements and volatility movements
simultaneously, or Monte Carlo methods. All of it is somewhat problematic,
as it reduces to asset prices and volatility assumed to be driven by
parameterized stochastic processes. In physics, the parameters of a model
can be computed once and for all, while in finance they're recalibrated
daily, and tend to jump around. The models tend to work quite well in the
good times, basically relying on arbitrage arguments in the presence of
perfect liquidity, and fall apart in the bad, when there is no liquidity, as
seen in 2006.
 
Black-Scholes is still used in risk, though, especially for Monte Carlo
methods, and credit risk, where individual trades may have to be valued
10,000 times for each of say 250 time buckets. Less so in historical VaR,
though.
 
While desk quants traditionally developed in C++, much of it is now written
in C#. The vendor products tend to be written in C# or Java, with some C++
in the most compute intensive components.
 
> Very little effort is needed for simple account management, and many still
> use COBOL on Z-series for that. The bulk of software development in the
> big houses is now aimed at internet, apps and modelling.
 
There's a lot of effort in deep learning and deep neural nets, some of it
targeting fraud, and some of it the chat person that you meet when you log
onto a retail account. In the fraud area, much of the effort uses Python,
which affords a rich interface to deep neural net components. Here the
effort is more on accessing data and training the net than actual
programming.
 
And yes, internet and apps. The banks are afraid of future competition from
the technology companies, such as we see from Alibaba in China, and are
investing heavily to try to ward that off.
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 08:36AM -0800

On Monday, January 28, 2019 at 10:08:26 AM UTC-5, Öö Tiib wrote:
> it is about 3 GB/s read and 2 GB/s write. Only thing that seems
> crappy is that std::filesystem is not supported on it. But Apple has
> always been unfriendly to developers so no biggie.
 
Reading and writing chunks of text from a C++ stream is quite fast, and
comparable to fread and fwrite. But
 
double x;
uint64_t m;
is >> x;
is >> m;
 
or
 
double y;
uint64_t n;
os << y;
os << n;
 
is inordinately slow. That's why, in the category of JSON parsers, for
example, none of the reasonably performing ones use streams for that
purpose, even though streams are more convenient because they can be
initialized with the 'C' locale. Many use the C functions sprintf for output
(jsoncpp) and sscanf or strtold for input (jsoncpp, nlohmann, jsoncons), and
go through the extra step of undoing the effect of whatever locale is in
effect (unfortunately the _l versions of the C functions are not available
on all platforms). But the C functions, even with the format parsing and
backing out of the locale, are measurably much faster than the stream
alternatives.
 
nlohmann recently dropped sprintf for fp output (for platforms that support
IEEE fp) and changed to a slightly modified version of Florian Loitsch's
implementation of Grisu2 (https://florian.loitsch.com/publications). This
improved their performance by about 2.5 times over what they were doing
previously with sprintf, by my benchmarks. Other JSON libraries have also
moved to Grisu3, for platforms that support IEEE fp, with a backup
implementation using sprintf and strtod in the rare cases that Grisu3
rejects.
 
Note that Grisu2 and Grisu3 have an additional advantage over the stream
and C functions, in that they guarantee the shortest decimal digits after the
dot almost always (Grisu2) or always unless rejected (Grisu3). Careful JSON
libraries (such as cJSON) convert doubles to string with 15 digits of
precision, then convert back to text, check if that round trips, and if not
convert to string with 17 digits of precision.
 
JSON libraries that are not cognisant of these things tend to end up with
unhappy users that express their unhappiness in the issues logs.
 
Everybody is hopeful for to_chars, but it's not widely available yet.
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 08:41AM -0800

On Monday, January 28, 2019 at 11:36:58 AM UTC-5, Daniel wrote:
 
> Careful JSON libraries (such as cJSON) convert doubles to string with 15
> digits of precision, than convert back to text,
 
Errata: convert back to double
 
"Öö Tiib" <ootiib@hot.ee>: Jan 28 10:00AM -0800

On Monday, 28 January 2019 18:36:58 UTC+2, Daniel wrote:
> uint64_t n;
> os << y;
> os << n;
 
Oh that. That is misuse. It was perhaps meant for human interfaces.
One side serializes with a German locale and the other side blows up.
 
> on all platforms.) But the C functions, even with the format parsing and
> backing out the locale, are measurably much faster than the stream
> alternatives.
 
Typical "good case" oriented thinking. AFAIK the "number" in JSON is full
precision and does not have INFs or NaNs. So the stuff overlaps only
partially and so the standard library tools can't handle it anyway.

 
> JSON libraries that are not cognisant of these things tend to end up with
> unhappy users that express their unhappiness in the issues logs.
 
> Everybody is hopeful for to_chars, but it's not widely available yet.
 
Perhaps things like GMP or MPIR can be used with JSON. The doubles
and ints will eventually cause the JSON discussion with some Python
script to break. Hopefully nothing melts down then as a result.
Paavo Helde <myfirstname@osa.pri.ee>: Jan 28 08:50PM +0200

On 28.01.2019 18:36, Daniel wrote:
> go through the extra step of undoing the effect of whatever locale is in
> effect (unfortunately the _l versions of the C functions are not available
> on all platforms.)
 
When comparing with the C *_l versions, C++ streams need to be imbued
with an explicit locale, otherwise it's not a fair comparison. I suspect
they would still be slower, but maybe not by as much.
Daniel <danielaparker@gmail.com>: Jan 28 10:54AM -0800

On Monday, January 28, 2019 at 1:01:01 PM UTC-5, Öö Tiib wrote:
 
> Oh that. That is misuse. It was perhaps meant for human
> interface.
> One side serializes with German locale and other side blows up.
 
stringstream provides exactly the same capabilities as the C
functions, including precision and formatting, except it's easier
to take control of the locale. If you go through the issues logs
of the libraries that use the C functions, you'll find that
most of them have had issues with the German/Spanish/whatever
locale and had to fix it. Sometimes they've regressed. It's just
that the streams are so slow. I've benchmarked "os << d" against
a C implementation by David Gay on netlib which is accepted as
being correct, and the stream implementation was 40 times slower.
I tried implementing my own versions of istringstream and
ostringstream, with a buffer that could be reused and fewer
conditionals, but it only increased performance by a factor of 2,
so it was not worth it.
 
 
> Typical "good case" oriented thinking. AFAIK the "number" in JSON is full
> precision
 
The JSON specification is ambiguous about that, but quality of
implementation considerations suggest it should be full precision,
and also the number with the fewest decimal digits after the dot
that represents the value. Nobody wants to see 17 digits after
the decimal point when a shorter version represents the same
number. Using streams or the C functions, finding the shortest
representation requires a round-trip test: start with less
precision and move to full precision when that fails. Most
libraries that use the C functions go with 15 digits of precision
and skip the round-trip test, but this will occasionally fail to
round-trip. cJSON is careful in this respect, but it comes at a
performance cost.
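 
That round-trip test can be sketched like this (assuming the global "C" locale;
shortest_roundtrip is a hypothetical helper, not taken from any of the libraries named):
 
```cpp
#include <cstdio>
#include <cstdlib>
#include <cstddef>

// Find the fewest significant digits that still round-trip the value:
// try %.1g .. %.17g, parse the result back, and stop at the first match.
// 17 significant digits always round-trip an IEEE 754 double.
int shortest_roundtrip(double d, char* buf, std::size_t size) {
    for (int prec = 1; prec <= 17; ++prec) {
        std::snprintf(buf, size, "%.*g", prec, d);
        if (std::strtod(buf, nullptr) == d)
            return prec;
    }
    return 17;
}
```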
 
The libraries that have gone with Grisu2 and Grisu3
implementations (RapidJSON, nlohmann, jsoncons) can, when the
floating point architecture is IEEE 754, achieve what JSON
requires and what users want, but at the cost of introducing
more non-standard "stuff" into the libraries. How big do you
want your JSON library to be?
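 
For comparison, where <charconv> is implemented, std::to_chars with no precision
argument is specified to produce the shortest string that round-trips, with no
locale dependence. A sketch; to_shortest is a made-up wrapper:
 
```cpp
#include <charconv>
#include <string>

// std::to_chars (C++17) writes the shortest representation that
// parses back to exactly the same double.
std::string to_shortest(double d) {
    char buf[32];
    auto result = std::to_chars(buf, buf + sizeof buf, d);
    return std::string(buf, result.ptr);
}
```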
 
> and does not have INFs or NaNs. So the stuff overlaps only
> partially and so the standard library tools can't handle it anyway.
 
All JSON libraries test the double value for nan or inf before
writing it, and most output a JSON null value if it is nan or
inf. Some make the output a user option: some users want to
substitute a text string "NaN", "Inf" or "-Inf" when encoding
JSON, and recover the nan or inf when decoding again. This is
irrespective of whether they are using stringstream or C
functions or custom code.
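 
A minimal sketch of that pre-write check (the names are mine; a real library
would replace std::to_string with a faster, locale-safe conversion):
 
```cpp
#include <cmath>
#include <string>

// Emit "null" for non-finite values, as many JSON libraries do;
// otherwise fall back to an ordinary conversion (placeholder only).
std::string json_number(double d) {
    if (std::isnan(d) || std::isinf(d))
        return "null";
    return std::to_string(d);
}
```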
 
Daniel
Daniel <danielaparker@gmail.com>: Jan 28 11:56AM -0800

On Monday, January 28, 2019 at 1:50:37 PM UTC-5, Paavo Helde wrote:
 
> When comparing with the C *_l versions C++ streams need to be imbued
> with an explicit locale, otherwise it's not a fair comparison. I suspect
> they would still be slower, but maybe not so much.
 
When comparing with a standard number format like JSON, streams
need to be imbued with the 'C' locale. Most JSON libraries that
use the C functions use the regular versions and reverse the
effects of the global locale as a second step. They don't use the
C *_l versions, because they're not standard and are unavailable
on some platforms, and any benefits from #ifdef's are too small
to be worth it. But the C functions still handily beat
stringstream, or any custom stream class with a more efficient
streambuf.
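 
That "reverse the effects of the global locale" step can be sketched like so.
A hypothetical helper, assuming the locale's decimal separator is a single
character (true for the common locales):
 
```cpp
#include <clocale>
#include <cstdio>
#include <cstring>
#include <cstddef>

// Format under whatever global locale is active, then replace a
// locale-specific decimal separator (e.g. ',') with JSON's '.'.
void format_json_double(double d, char* buf, std::size_t size) {
    std::snprintf(buf, size, "%.15g", d);
    const char* dp = std::localeconv()->decimal_point;
    if (dp[0] != '.') {
        if (char* p = std::strchr(buf, dp[0]))
            *p = '.';
    }
}
```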
 
From a user's perspective, in many cases the performance of
streams would be good enough; "good enough for government work",
as the expression goes. But in a language that's supposed to be
fast, it's an embarrassment to the language that it has something
this slow. How is it even possible to design something that slow?
Maybe it's gotten better since I last benchmarked, maybe my
benchmarks on VS and Windows x64 architecture aren't
representative of what could be done with gnu or clang on linux
x64, but it's factual that library writers everywhere have
abandoned streams.
 
Daniel
Paavo Helde <myfirstname@osa.pri.ee>: Jan 28 11:22PM +0200

On 28.01.2019 21:56, Daniel wrote:
> representative of what could be done with gnu or clang on linux
> x64, but it's factual that library writers everywhere have
> abandoned streams.
 
Same here, using sprintf() and friends instead of streams. Though to be
honest, I have never seen 40x speed differences, at most ca 3x IIRC.
 
For curiosity, I just did some benchmarks (Windows x64 VS2017 Release).
With an actual output file fprintf() won over std::ofstream by a factor
of 1.7x:
 
C++ stream: 0.999657 s
fprintf() : 0.595898 s
 
(test code below).
 
With just numeric conversions the difference was a bit larger, 2.1x
 
stringstream: 1.00417 s
sprintf() : 0.481967 s
 
These tests are single-threaded so the global locale dependencies
probably do not play a great role; things might be different in a
heavily multithreaded regime.
 
-----------------------------
 
// first test
#include <iostream>
#include <fstream>
#include <iomanip>
#include <chrono>
#include <stdexcept>
#include <stdio.h>
#include <string.h>
#include <errno.h>
 
const int N = 1000000;
 
void test1(std::ostream& os) {
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        os << f << "\n";
        f += 0.5;
    }
}
 
void test2(FILE* os) {
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        if (fprintf(os, "%.15g\n", f)<0) {
            throw std::runtime_error(strerror(errno));
        }
        f += 0.5;
    }
}
 
int main() {
    try {
        std::ofstream os1;
        os1.exceptions(std::ofstream::failbit | std::ofstream::badbit);
        os1.open("C:\\tmp\\tmp1.txt", std::ios_base::binary);
        os1 << std::setprecision(15);
 
        auto start1 = std::chrono::steady_clock::now();
        test1(os1);
        auto finish1 = std::chrono::steady_clock::now();
 
        FILE* os2 = fopen("C:\\tmp\\tmp2.txt", "wb");
        if (!os2) {
            throw std::runtime_error(strerror(errno));
        }
        auto start2 = std::chrono::steady_clock::now();
        test2(os2);
        auto finish2 = std::chrono::steady_clock::now();
        fclose(os2);
 
        std::chrono::duration<double> diff1 = finish1-start1;
        std::chrono::duration<double> diff2 = finish2-start2;
 
        std::cout << "C++ stream: " << diff1.count() << " s\n";
        std::cout << "fprintf() : " << diff2.count() << " s\n";
    } catch (const std::exception& e) {
        std::cerr << "ERROR: " << e.what() << "\n";
    }
}
 
------------------------------
// second test
#include <iostream>
#include <sstream>
#include <iomanip>
#include <chrono>
#include <stdexcept>
#include <stdio.h>
 
const int N = 1000000;
 
unsigned int test1() {
    // x is mostly for keeping the compiler
    // from optimizing the whole code away.
    unsigned int x = 0;
    std::ostringstream os;
    os << std::setprecision(15);
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        os << f;
        f += 0.5;
        x += os.str()[0];
        os.str(std::string());
    }
    return x;
}
 
unsigned int test2() {
    unsigned int x = 0;
    char buff[32];
    double f = 3.14159265358979323846264;
    for (int i = 0; i<N; ++i) {
        sprintf(buff, "%.15g", f);
        f += 0.5;
        x += buff[0];
    }
    return x;
}
 
int main() {
    try {
        auto start1 = std::chrono::steady_clock::now();
        auto res1 = test1();
        auto finish1 = std::chrono::steady_clock::now();
 
        auto start2 = std::chrono::steady_clock::now();
        auto res2 = test2();
        auto finish2 = std::chrono::steady_clock::now();
 
        if (res1!=res2) {
            throw std::runtime_error("results differ");
        }
 
        std::chrono::duration<double> diff1 = finish1-start1;
        std::chrono::duration<double> diff2 = finish2-start2;
 
        std::cout << "stringstream: " << diff1.count() << " s\n";
        std::cout << "sprintf()   : " << diff2.count() << " s\n";
    } catch (const std::exception& e) {
        std::cerr << "ERROR: " << e.what() << "\n";
    }
}
Lynn McGuire <lynnmcguire5@gmail.com>: Jan 28 01:44PM -0600

"The State of C++ on Windows"
https://kennykerr.ca/2019/01/25/the-state-of-cpp-on-windows/
 
According to Microsoft.
 
Lynn
JiiPee <no@notvalid.com>: Jan 28 07:09AM

On 27/01/2019 14:09, Mr Flibble wrote:
 
> IDs for objects or object types? Use uint32_t or uint64_t for object
> ID and UUID for object type. Enums are hard coded and brittle.
 
> /Flibble
 
I am talking about naming objects, like: PLAYER, BALL....
"Öö Tiib" <ootiib@hot.ee>: Jan 28 02:29AM -0800

On Saturday, 26 January 2019 16:49:34 UTC+2, JiiPee wrote:
 
> {
 
>     // draw walk up
 
> }
 
If you need to inherit types then use classes (or structs or
class template instantiations) for such types. Unions or enums
can't be inherited from. Your code, translated to usage of struct:
 
#include <iostream>
 
struct AnimationNameBase
{
    enum Value {NoName};
    AnimationNameBase(int v) : v_(v) {}
    AnimationNameBase& operator=(int v) {v_ = v; return *this;}
    bool operator==(int v) const {return v_ == v;}
    int v_{NoName};
};
 
struct Animation
{
    AnimationNameBase m_name{AnimationNameBase::NoName};
 
    void doAnimation(AnimationNameBase name)
    {
        m_name = name;
    }
};
 
struct PlayerAnimationNames : AnimationNameBase
{
    enum Value {WalkUp = NoName + 1, WalkDown, WalkRight};
};
 
int main()
{
    Animation playerAnimation;
 
    playerAnimation.doAnimation(PlayerAnimationNames::WalkUp);
 
    // code.....then in some function:
 
    if (playerAnimation.m_name == PlayerAnimationNames::WalkUp)
    {
        // draw walk up
    }
    std::cout << "kind of works™\n";
}
 
Personally to me such whole logic feels messed up, regardless if it is
enum or struct.
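 
For what it's worth, a plainer alternative sketch (mine, not from the post):
give each concrete thing its own scoped enum and skip simulating inheritance
altogether.
 
```cpp
// One scoped enum per animated thing; no base class, no shadowed
// enum values, and no implicit conversions from int.
enum class PlayerAnimation { None, WalkUp, WalkDown, WalkRight };

struct Player {
    PlayerAnimation anim = PlayerAnimation::None;
    void doAnimation(PlayerAnimation a) { anim = a; }
};
```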
"Öö Tiib" <ootiib@hot.ee>: Jan 28 01:33AM -0800

On Friday, 25 January 2019 21:26:05 UTC+2, JiiPee wrote:
> > animation. The vector will also allow you to update position too.
 
> its not about the direction or moving... its about having enum in a
> class. how can I use an enum in class...
 
You put it in a sort of messy way. Start from basics.
What is your program data structure and relations?
You have data types "texture", "player", "direction" and "animation"?
What other data there is? What is possible state of objects of
those types?
Is some of these types property of other type? How?
Can there be none? many? Always fixed amount or changing?
What of those relations can change during program run?
Then what are the actions with objects of those data types?
What can do what using what?
The "texture" can perhaps be "drawn"?
The "animation" can perhaps be "animated"?
The "player" can perhaps "move" to "direction"?
What other actions are there?
What action is part of what action?
Write that all down in plain prose English first.
Do not make templates before it is clearly compile-
time fixed relation, value or property.