Monday, May 24, 2021

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

MrSpook_tt7xetn_st@d3zd4ilmme.gov: May 17 02:56PM

On Mon, 17 May 2021 16:06:48 +0200
>> - "= 0" nonsense for pure virtuals, t
 
>In most OOP-languages you've to declare pure virtual functions.
 
I'm talking about the syntax, not the concept. Wtf is "= 0"? It's an ugly
syntactic hack. They already created the new keyword "virtual", so why didn't
they create another one called "pure"?
 
>> such as "class" for templates until they finally saw sense and brought in
>> "typename" and implicit precedence insanities such as cout << 1 << 2;
 
>Minor cosmetics ...
 
Syntax matters.
 
 
>> - duplicate functionality instead of extending, eg using vs typedef
 
>That's not a problem.
 
Syntax matters.
 
 
>> - context sensitive grammar, eg blah(123) could be an object definition or a
>> function call
 
>That's not a problem.
 
Syntax matters.
 
>> [reasons] but static const? Go right ahead.
 
>You can declare them as inline so you won't need a declaration in
>a .cpp-file.
 
Oh FFS.
MrSpook_yMgsa7x5@3ifr32ivepk32ia.org: May 17 11:29AM

On Mon, 17 May 2021 12:52:26 +0200
 
>> Or you could just iterate through the set and do the comparison
>> manually. 1 line of code.
 
>That's not related to what I said and not even a joke.
 
No, it's not a joke, it's a fact. If you want to find something in a std::set
that's not based on the value used for sorting, then just iterate; there's
no need for yet another hack added to the language to do it.
Bonita Montero <Bonita.Montero@gmail.com>: May 16 08:01PM +0200

> Now user has to indicate that he wants other operator()
> overloads in Compare to be used instead, otherwise the
> change can surprise break our code silently.
 
How could is_transparent prevent that? If there's a conversion operator
from a key type different from the map's native key to the native key,
that conversion is done before any lookup functions are called.
MrSpook_3l@rsdtj2x7.gov: May 24 08:22AM

On Fri, 21 May 2021 17:19:38 +0100
 
>Specialist hardware? LOL. You appear to have no clue as to how modern
>CPUs work or how operating systems are designed. Modern CPUs have
>MULTIPLE CORES with SHARED CACHES.
 
Nooooo!!! Say it ain't so!!!
 
I'm talking about separate CPUs, gimp boy.
 
>> A) Stop using that idiotic made up word "performant". It doesn't make
>> you sound clever, more like a dumb sheep following the herd.
 
>"Performant" is a perfectly cromulent word, dear.
 
No, it's just another nonsense word invented by tech bros when there are
perfectly suitable other words or phrases to use.
 
 
>> involves encrypted data or every user being kicked off if 1 thread
>> crashes you fucking mouth breathing moron!
 
>Word salad, dear.
 
Oh bless, too complicated a concept?
MrSpook_o_4Ge4wpt@2iwg_s7sq47equu9a.edu: May 24 08:25AM

On Fri, 21 May 2021 18:18:02 +0200
 
>The necessities of synchronization are the _same_ for fork()ing and
>MT, i.e. you need to share memory at the same places in the code and
>you need to synchronize also at the same places. But MT is more effi-
 
Yes, congratulations. That's why a few posts back - which naturally you
didn't read or understand - I said that for applications that DON'T require much
or any synchronisation between code paths, multi-process is better. E.g. user
connections.
 
>cient because of the shared address-space - a lot of synchronization
>then doesn't need any kernel-aid - and easier to write.
>Your knowledge is very superficial when it comes to parallelization !
 
It's amusing when people impart a chapter from compsci 101 as some profound
piece of knowledge they're handing down from on high. :)
MrSpook_wBhaki@w1dhf2q0pnagn57.eu: May 24 08:26AM

On Fri, 21 May 2021 16:25:34 GMT
>>Your knowledge is very superficial when it comes to parallelization !
 
>You are both arguing past each other.
 
>Both forking and multithreading are useful in the right situation.
 
That's all I was saying. But for people who've only ever developed on Windows,
all they know is multithreading and they see it as the answer to everything.
MrSpook_801o@21xmy7.gov.uk: May 24 08:31AM

On Sat, 22 May 2021 16:25:25 +0200
>> whether trains or planes are best.
 
>Mostly you have a lot of coupling when you have parallel code.
>Then fork()ing is a lot of work and less efficient.
 
That depends on what you need to do. It's fairly simple to have multiple
processes using shared memory with mutexes controlling access. Posix
threads have the PTHREAD_PROCESS_SHARED flag which allows mutexes to work
between separate processes. Naturally the C++ threading implementation doesn't
(among a lot of other useful features it's missing) because it has to be lowest
common denominator functionality - ie Windows.
MrSpook_8_g@ef863gl0.eu: May 24 08:34AM

On Mon, 24 May 2021 08:31:25 +0000 (UTC)
>That depends on what you need to do. Its fairly simple to have multiple
>processes using shared memory with mutexes controlling access. Posix
>threads have the PTHREAD_PROCESS_SHARED flag which allows mutexes to work
 
That should read posix mutexes, not threads.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 11:49AM +0200

> That depends on what you need to do. Its fairly simple to have multiple
> processes using shared memory with mutexes controlling access.
 
It's less simple than with MT because the memory-contents have to be
position-independent since you can't guarantee that mappings are placed
at the same address in all participating processes.
 
> Posix threads have the PTHREAD_PROCESS_SHARED flag which allows mutexes
> to work between seperate processes. ...
 
But robust mutexes are significantly slower.
 
> Naturally the C++ threading implementation doesn't (among a log of other
> useful features its missing) because it has to be lowest common denominator
> functionality - ie Windows.
 
C++ mutexes implement everything that is needed.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 11:51AM +0200

>> Both forking and multithreading are useful in the right situation.
 
> Thats all I was saying. But for people who've only ever developed on Windows
> all they know is multithreading and see it as the answer to everything.
 
But that's wrong. You can deduce that from the fact that almost
no parallel application is implemented using fork()ing today.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 11:52AM +0200

> Yes, congratulations. Thats why a few posts back - which naturally you
> didn't read or understand - I said for applications that DON'T require much
> or any synchronisation between code paths multi process is better. ...
 
It's always better with threads, but it's only _feasible_
with fork()ing when you have simple patterns.
MrSpook_9v@0v6uq5zonj66dteaq9.ac.uk: May 24 10:15AM

On Mon, 24 May 2021 11:49:36 +0200
 
>It's less simple than with MT because the memory-contents have to be
>position-independent since you can't guarantee that mappings are placed
>at the same address in all participating processes.
 
You absolutely can guarantee it, otherwise it wouldn't be any use. Clearly you
don't understand how shared memory works on unix.
 
>> Posix threads have the PTHREAD_PROCESS_SHARED flag which allows mutexes
>> to work between seperate processes. ...
 
>But robust mutexes are siginficantly slower.
 
Probably.
 
>> useful features its missing) because it has to be lowest common denominator
>> functionality - ie Windows.
 
>C++ mutexes implement everything that is needed.
 
No they don't. The C++ threading model is threading-lite. It doesn't even
implement 3-level locking, FFS, which means 1-to-N publish/subscribe models
can't be used efficiently.
MrSpook_rfg_f9lmv@xechfnrqs23y.com: May 24 10:17AM

On Mon, 24 May 2021 11:51:00 +0200
>> all they know is multithreading and see it as the answer to everything.
 
>But that's wrong. You can deduce that from the fact that almost
>no parallel application is implemented using fork()ing today.
 
So you keep saying but you've yet to provide anything to back up your
assertion.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 12:44PM +0200

>> at the same address in all participating processes.
 
> You absolutely can guarantee it otherwise it wouldn't be any use.
> Clearly you don't understand how shared memory works on unix.
 
Of course shared memory is also useful when it isn't mapped to the same
address.
 
>> But robust mutexes are siginficantly slower.
 
> Probably.
 
No, really.
 
 
> No they don't. The C++ threading model is threading-lite. It doesn't even
> implement 3 level locking FFS which means 1 -> N publish subscribe models
> can't be used efficiently.
 
Mutexes alone aren't meant for producer-consumer relationships.
But when a CV is also employed, the mutex is usually taken without
contention. A mutex plus a CV is almost the fastest producer-consumer
mechanism (monitor objects are the fastest).
Bonita Montero <Bonita.Montero@gmail.com>: May 24 12:45PM +0200

>> no parallel application is implemented using fork()ing today.
 
> So you keep saying but you've yet to provide anything to back up your
> assertion.
 
All open-source database servers, for example, employ MT.
Juha Nieminen <nospam@thanks.invalid>: May 24 10:54AM


>> https://www.thezorklibrary.com/history/cruel_puppet.html
 
>> Fair enough?
 
> Ok, I'll stop.
 
They say "don't feed the troll". There seems to be plenty of feeding the
troll in this thread.
 
At least I restricted myself to merely insulting the troll.
David Brown <david.brown@hesbynett.no>: May 24 01:42PM +0200

On 24/05/2021 12:45, Bonita Montero wrote:
 
>> So you keep saying but you've yet to provide anything to back up your
>> assertion.
 
> All open source database servers f.e. employ MT.
 
PostgreSQL is the biggest, most serious and feature-rich open source
database, AFAIK - and it forks (and - again AFAIK - it does not use
threads at all). It has a strong *nix heritage, where forking is cheap
and threads were late to the party. From the brief things I have read
about its model, the developers said that starting from scratch they
might have used threads, but they would not expect significant
performance benefits.
 
And of course "open source database servers" is a rather small
proportion of "all parallel applications".
 
I'd agree that multi-threading is common today, and many applications
that previously would have been written using forking will now use
threads. In particular, anything that is supposed to work on Windows
will prefer threads because Windows can't do forking, but on *nix the
cost of forking is not usually much different from threading.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 01:54PM +0200

> database, AFAIK - and it forks (and - again AFAIK - it does not use
> threads at all). It has a strong *nix heritage, where forking is cheap
> and threads were late to the party. ...
 
Ok, you're right up to here. But fork()ing is never cheap compared to
creating a thread. Creating a thread on my Linux Ryzen 7 1800X takes about
17,000 clock cycles, and I bet fork()ing is magnitudes higher if you
include the costs of making copy-on-writes of every written page.
 
> threads. In particular, anything that is supposed to work on Windows
> will prefer threads because Window's can't do forking, but on *nix the
> cost of forking is not usually much different from threading.
 
Wrong, fork()ing is a pain in the ass when writing parallel
applications. Threading is a magnitude easier to maintain.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 02:19PM +0200

> creating a thread. Creating a thread on my Linux Ryzen 7 1800X is about
> 17.000 clock cycles and I bet fork()ing is magnitudes higher if you
> include the costs of making copy-on-writes of every written page.
 
I just wrote a little program:
 
#include <unistd.h>
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <cctype>
#include <cstring>
#include <cstdint>

using namespace std;
using namespace chrono;

int main( int argc, char **argv )
{
    using sc_tp = time_point<steady_clock>;
    if( argc < 2 )
        return EXIT_FAILURE;
    uint64_t k = 1'000 * (unsigned)atoi( argv[1] );
    sc_tp start = steady_clock::now();
    char buf[0x10000];
    // each child continues the loop and forks again; each parent
    // leaves the loop, so the chain performs k forks in total
    for( uint64_t n = 0; fork() == 0; ++n )
    {
        memset( buf, 0, sizeof buf );   // touch 64kB to trigger CoW
        if( n == k )
        {
            double ns = (double)(int64_t)duration_cast<nanoseconds>(
                steady_clock::now() - start ).count() / (int64_t)k;
            cout << ns << endl;
            break;
        }
    }
}
 
On my Ryzen 7 1800X Linux PC it gives me a fork cost of about
180,000 to 222,000 clock cycles per fork. Any questions? I'm
doing a minimal write into a 64kB block (hopefully this isn't
optimized away) to simulate some copy-on-write. As you can see,
forking is magnitudes slower than creating a thread (17,000
cycles on the same machine).
Bonita Montero <Bonita.Montero@gmail.com>: May 24 03:02PM +0200

> ...(hopefully this isn't optimized away) ...
It was! And if I use this code:
 
#include <unistd.h>
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <cctype>
#include <cstring>
#include <cstdint>

using namespace std;
using namespace chrono;

int main( int argc, char **argv )
{
    using sc_tp = time_point<steady_clock>;
    if( argc < 2 )
        return EXIT_FAILURE;
    uint64_t k = 1'000 * (unsigned)atoi( argv[1] );
    sc_tp start = steady_clock::now();
    char volatile buf[0x10000]; // new: volatile so the writes can't be elided
    for( uint64_t n = 0; fork() == 0; ++n )
    {
        for( char volatile &c : buf ) // new: touch every byte to force CoW
            c = 0;                    // new
        if( n == k )
        {
            double ns = (double)(int64_t)duration_cast<nanoseconds>(
                steady_clock::now() - start ).count() / (int64_t)k;
            cout << ns << endl;
            break;
        }
    }
}
 
The fork()ing cost rises by about 50,000 clock cycles.
Now imagine the cost of CoW in a large application!
David Brown <david.brown@hesbynett.no>: May 24 03:21PM +0200

On 24/05/2021 13:54, Bonita Montero wrote:
> creating a thread. Creating a thread on my Linux Ryzen 7 1800X is about
> 17.000 clock cycles and I bet fork()ing is magnitudes higher if you
> include the costs of making copy-on-writes of every written page.
 
Copy-on-write takes cycles, of course - but you only need to do the copy
on pages that are written. And you'd need to write the same data in a
multi-threaded implementation.
 
I would expect forking to have slightly more overhead than threading in
a use-case that suits either. I expect threading to be more efficient
if you have a lot of shared data or state (since sharing is the default
in threading), and I expect forking to be more efficient if you need
separation for reliability or security (since separation is the default
for forking).
 
The PostgreSQL folk are pretty good at this kind of thing, and they
reckon moving to threading would not be worth the effort or the risk in
terms of stability or security. I assume they know what they are
talking about - but I don't assume it applies to all applications.
 
>> cost of forking is not usually much different from threading.
 
> Wrong, fork()ing is a pain in the ass when writing parallel
> applications. Threading is a mangnitude easier to maintain.
 
You come from a Windows world, where forking is not supported. In the
*nix world, it's a different matter.
 
There are pros and cons of both systems. Apache, for example, uses both
on supported systems - it has multiple processes, each with multiple
threads.
 
I am not disagreeing with the idea that multi-threading is often more
efficient than multi-processing, or that it is more common - I am merely
disagreeing with your blanket generalisations about multi-threading
/always/ being better and /always/ being used.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 03:37PM +0200

> Copy-on-write takes cycles, of course - but you only need to do the copy
> on pages that are written. ...
 
Unfortunately you inherit the original fragmented heap of the parent
process, so there might be a lot of CoWs.
 
> in threading), and I expect forking to be more efficient if you need
> separation for reliability or security (since separation is the default
> for forking).
 
Why should forking ever be faster? There's no reason for this. If
you have decoupled threads which don't synchronize or simply
read-share memory, they could perform slightly faster since a
context switch doesn't always include a TLB flush.
 
> You come from a Windows world, where forking is not supported.
> In the *nix world, it's a different matter.
 
My statement was a general statement - threaded applications are easier
to write and fork()ing is at best equally performant, but usually less.
 
> efficient than multi-processing, or that it is more common - I am merely
> disagreeing with your blanket generalisations about multi-threading
> /always/ being better and /always/ being used.
 
It's better almost every time.
MrSpook_61Z@avwe8ulg.co.uk: May 24 01:49PM

On Mon, 24 May 2021 15:37:11 +0200
>> In the *nix world, it's a different matter.
 
>My statement was a general statement - threaded applications are easier
>to write and fork()ing is at best equally performant, but usually less.
 
How can threading be easier to write when you have to worry about locking
and race conditions, vs fork() which is fire and forget if you don't need
inter process synchronisation?
 
>> disagreeing with your blanket generalisations about multi-threading
>> /always/ being better and /always/ being used.
 
>It's better almost every time.
 
You're the best example I've seen in a long time of someone whose only tool
is a hammer, so to you everything looks like a nail.
MrSpook_1shmhc5pEg@qedprf.biz: May 24 01:57PM

On Mon, 24 May 2021 14:19:57 +0200
>optimized away) to simulate some copy-on-write. As you can see
>forking is magnitudes slower than creating a thread (17.000
>cylces on the same machine).
 
So what? Do you ever manage to understand the point for anything? Is it an
English comprehension problem?
Bonita Montero <Bonita.Montero@gmail.com>: May 17 03:30PM +0200

I don't like reverse iterators because when converting to a
forward iterator with .base() you have to adjust it with prev()
so that it points to the same object as the reverse iterator.
So I found a nice alternative:
 
for( itQueue = m_taskQueue.end(); ; itQueue = itQPrev )
    if( itQueue == m_taskQueue.begin() )
        return;
    else if( (itQPrev = prev( itQueue ))->id == queueId )
        break;
 
I need itQueue after the loop, and with what I did I don't
need any additional conversions.