Sunday, August 4, 2019

Digest for comp.lang.c++@googlegroups.com - 8 updates in 3 topics

"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Aug 04 02:03PM -0700

On 8/4/2019 12:30 PM, Chris Vine wrote:
> summary wrong (and if so, what's the point of completion ports in the
> first place)? I find it hard to believe microsoft would have gone for
> the one thread per connection option favoured by my respondent.
 
You treat each connection as a small state machine. The state holds a
socket or file handle, along with a memory buffer (or a pointer into
some memory) for I/O, and it gets associated with the IOCP. Create
several worker threads, one or two per CPU, and set the concurrency
level to the number of CPUs. The worker threads then wait on the IOCP
for events. Each event carries the state of its connection: if a read
event fires, the memory buffer holds the data, or an error is reported.
 
Actually, imvho, creating servers with IOCP is fairly straightforward.
Sam <sam@email-scan.com>: Aug 04 05:36PM -0400

Chris M. Thomasson writes:
 
 
> I remember using IOCP back on WinNT 4. The idea of a thread per connection
> simply does not scale. However, creating a thread or two per CPU, and using
> a queue, like IOCP can scale.
 
I agree that "a thread per connection simply does not scale" on Microsoft
Windows. However, that's only true of Microsoft Windows and its crap
multithreading.
 
A thread per connection scales perfectly fine, on Linux. Even with hundreds
of connections. Thread overhead is quite negligible on Linux, and you end up
writing clean, orderly code that runs logically from start to finish,
instead of getting torn into shreds in order to conform to event loop or
callback-based design patterns.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 04 10:43PM +0100

On 04/08/2019 22:36, Sam wrote:
> you end up writing clean, orderly code that runs logically from start to
> finish, instead of getting torn into shreds in order to conform to event
> loop or callback-based design patterns.
 
Nonsense. You should have no more than one logical thread per physical
thread for a particular task type.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 10:52PM +0100

On Sun, 04 Aug 2019 17:36:12 -0400
> writing clean, orderly code that runs logically from start to finish,
> instead of getting torn into shreds in order to conform to event loop or
> callback-based design patterns.
 
I disagree. And "hundreds of connections" is not good enough.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 04 11:10PM +0100

On 04/08/2019 22:52, Chris Vine wrote:
>> instead of getting torn into shreds in order to conform to event loop or
>> callback-based design patterns.
 
> I disagree. And "hundreds of connections" is not good enough.
 
You are correct to disagree. Sam has obviously never heard of "epoll".
 
/Flibble
 
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Aug 05 01:11AM +0200

On 04.08.2019 11:37, Juha Nieminen wrote:
> that are completely mysterious even to me, I have *absolutely no idea*
> what they are. I just don't get it.
 
> I go to the Wikipedia page "Coroutine"... and it tells me nothing.
 
[snip]
 
I'll address the conceptual only.
 
When I saw this posting earlier today I thought I'd whip up a concrete
example, much as I implemented coroutines in the mid-1990s. However, I
discovered that I would then be late for a dinner, so I put it on hold.
 
Coroutines are just cooperative multitasking, multitasking with a single
thread of execution but multiple call stacks. When you CALL a coroutine
you create a new instance, with its own stack. There is nothing like
that in the standard library. When a coroutine TRANSFERs to another
coroutine it's like a `longjmp`, except that `longjmp` was designed to
jump up to an earlier point in a call chain, while a coroutine transfer
jumps to a separate call chain.
 
As I recall you're familiar with 16-bit Windows programming.
 
In 16-bit Windows (Windows 3.x in the early 1990s) each program
execution was a coroutine. When a program was launched, a stack area was
allocated for it. That's a call of a coroutine. When the program called
`GetMessage` or `Yield`, some other program instance would get a chance
to continue to run (after waiting for /its/ call to `GetMessage` or
`Yield` to return). That's a coroutine transfer.
 
The 16-bit Windows program executions were /huge/, heavy coroutines.
 
However, the main advantage of coroutines in ordinary programming is as
a very lightweight multitasking solution. Since there's only one thread
of execution, there are fewer synchronization issues. In particular, one
doesn't have to worry about whether one coroutine sees the memory
changes effected by some other coroutine.
 
 
Cheers & hht.,
 
- Alf
aminer68@gmail.com: Aug 04 02:47PM -0700

Hello,
 
 
 
About the memory visibility problem and the solution..
 
As you have noticed, I have just spoken about the memory visibility
problem, which also gives rise to the race condition problem that is
NP-hard.

I think that to solve it you have to manage the shared global variables
with a layer of message passing; for example, the queues of the message
passing will issue a memory barrier so as to force visibility. But there
is still a problem: this layer of message passing must detect the shared
variables in order to manage them, and I think that this detection
problem is exponential. So there is still a problem.
 
 
Read the rest of my previous thoughts:
 
 
More about race conditions and memory visibility:
 
 
I previously said the following about the memory visibility problem:
 
===============================================================
 
I have come to an interesting subject about memory visibility..
 
As you know, in parallel programming you have to take care not only of
memory ordering, but also of memory visibility; read this to see what I
mean:
 
 
Store Barrier
 
A store barrier, the "sfence" instruction on x86, forces all store instructions prior to the barrier to complete before it, and has the store buffers flushed to cache on the CPU that issued it. This makes the program state "visible" to other CPUs so they can act on it if necessary.
 
I think that this is also the case in ARM CPUs and other CPUs..
 
So, as you can see, I think this memory visibility problem makes
parallel programming more "difficult" and more "dangerous".
 
 
What do you think about it ?
 
==============================================================
 
 
 
I think this memory visibility problem can give rise to race conditions,
and that is a problem; read this about race conditions:
 
 
An NP-hard problem is one for which no known algorithm finds a solution in polynomial time, so the time to find a solution grows exponentially with problem size. Although it has not been definitively proven that there is no polynomial algorithm for solving NP-hard problems, many eminent mathematicians have tried and failed.
 
Race condition detection is NP-hard
 
Read more here:
 
https://pages.mtu.edu/~shene/NSF-3/e-Book/RACE/difficult.html
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Aug 04 02:24PM -0700

Hello,
 
 
 
More about race conditions and memory visibility:
 
 
I previously said the following about the memory visibility problem:
 
===============================================================
 
I have come to an interesting subject about memory visibility..
 
As you know, in parallel programming you have to take care not only of
memory ordering, but also of memory visibility; read this to see what I
mean:
 
 
Store Barrier
 
A store barrier, the "sfence" instruction on x86, forces all store instructions prior to the barrier to complete before it, and has the store buffers flushed to cache on the CPU that issued it. This makes the program state "visible" to other CPUs so they can act on it if necessary.
 
I think that this is also the case in ARM CPUs and other CPUs..
 
So, as you can see, I think this memory visibility problem makes
parallel programming more "difficult" and more "dangerous".
 
 
What do you think about it ?
 
==============================================================
 
 
 
I think this memory visibility problem can give rise to race conditions,
and that is a problem; read this about race conditions:
 
 
An NP-hard problem is one for which no known algorithm finds a solution in polynomial time, so the time to find a solution grows exponentially with problem size. Although it has not been definitively proven that there is no polynomial algorithm for solving NP-hard problems, many eminent mathematicians have tried and failed.
 
Race condition detection is NP-hard
 
Read more here:
 
https://pages.mtu.edu/~shene/NSF-3/e-Book/RACE/difficult.html
 
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
