Sunday, August 4, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

Juha Nieminen <nospam@thanks.invalid>: Aug 04 09:37AM

C++20 will introduce support for coroutines. To this day, for reasons
that are completely mysterious even to me, I have *absolutely no idea*
what they are. I just don't get it.
 
I go to the Wikipedia page "Coroutine"... and it tells me nothing.
 
"Coroutines are computer program components that generalize subroutines
for non-preemptive multitasking, by allowing execution to be suspended
and resumed. Coroutines are well-suited for implementing familiar program
components such as cooperative tasks, exceptions, event loops, iterators,
infinite lists and pipes."
 
This tells me nothing. I don't get it.
 
"By contrast, coroutines can exit by calling other coroutines, which may
later return to the point where they were invoked in the original
coroutine; from the coroutine's point of view, it is not exiting but
calling another coroutine."
 
Can exit by calling other coroutines... And then the execution later
resumes from that calling point. This is not "exiting" but "yielding",
whatever that's supposed to mean.
 
Sounds like a normal function call to me, but what do I know? I can't
understand what this is saying. It's like it's speaking in legible
English, yet it somehow is complete gibberish at the same time. I don't
get it.
 
It then gives a pseudocode example of coroutines... which looks exactly
like one function calling another (which then just returns to the
original caller).
 
==========================================
var q := new queue

coroutine produce
    loop
        while q is not full
            create some new items
            add the items to q
        yield to consume

coroutine consume
    loop
        while q is not empty
            remove some items from q
            use the items
        yield to produce
==========================================
 
This looks to me exactly like
 
==========================================
var q := new queue

coroutine produce
    loop
        while q is not full
            create some new items
            add the items to q
        call consume

coroutine consume
    while q is not empty
        remove some items from q
        use the items
    return
==========================================
 
But what do I know? I don't get it.
 
Ok, Wikipedia is just a generic article about coroutines. Maybe
https://en.cppreference.com/w/cpp/language/coroutines has a better
explanation. It's done from the perspective of C++ in particular,
after all.
 
"A coroutine is a function that can suspend execution to be resumed
later. Coroutines are stackless: they suspend execution by returning
to the caller. This allows for sequential code that executes
asynchronously (e.g. to handle non-blocking I/O without explicit
callbacks), and also supports algorithms on lazy-computed infinite
sequences and other uses."
 
A function that can suspend execution to be resumed later...
Yeah, I don't get it. Allows sequential code that executes
asynchronously... What? Supports algorithms on lazy-computed infinite
sequences... Ok, but what does this all mean? I don't get it.
 
The examples given are complete gibberish. I have *absolutely no
idea* what something like this is supposed to mean:
 
co_await async_write(socket, buffer(data, n));
 
co_await? What?
 
It seems that everybody else is awaiting (hah!) and praising coroutines
like the second coming of Christ. Yet I just can't understand what they
are, or how exactly they are used or useful. All web pages I can find
are legible English that still somehow manages to be completely meaningless
gibberish that tells me nothing.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Aug 04 02:58AM -0700

On 8/4/2019 2:37 AM, Juha Nieminen wrote:
> C++20 will introduce support for coroutines. To this day, for reasons
> that are completely mysterious even to me, I have *absolutely no idea*
> what they are. I just don't get it.
[...]
 
They are like Fibers on Windows:
 
https://docs.microsoft.com/en-us/windows/win32/procthread/fibers
 
http://man7.org/linux/man-pages/man3/getcontext.3.html
 
https://docs.microsoft.com/en-us/windows/win32/procthread/user-mode-scheduling
Louis Krupp <lkrupp@nospam.pssw.com.invalid>: Aug 04 04:58AM -0600

On Sun, 4 Aug 2019 09:37:39 -0000 (UTC), Juha Nieminen
 
>C++20 will introduce support for coroutines. To this day, for reasons
>that are completely mysterious even to me, I have *absolutely no idea*
>what they are. I just don't get it.
<snip>
 
It might help to look at coroutines in another language, say Lua.
 
A main program and a coroutine cooperate like this:
 
The main program creates the coroutine object, associating it with a
function to be executed. At some point, the main program hands control
to the coroutine, possibly passing it arguments, and then waits for
the coroutine to either yield (or return) and possibly pass back
results. When the main program gets around to it, it resumes the
coroutine and waits again for it to yield or return.
 
The coroutine wakes up when it's told, possibly accepting arguments,
and then does whatever it wants until it decides it's time to either
yield control back to the main program or return, possibly passing
results back to the main program. A coroutine that yields waits until
it's told to resume by the main program; a coroutine that returns
simply goes away and cannot be told to resume.
 
Here's a simple example of a program that does a couple of trivial
computations and then quits:
 
===
 
-- Function to run in coroutine
--
function cf(n)
   print("coroutine beginning n = ", n)
   k = n * n
   print("coroutine yielding, returning ", k)
   coroutine.yield(k)

   print("coroutine resuming")
   k = k * n
   print("coroutine yielding, returning", k)
   coroutine.yield(k)

   print("coroutine resuming")
   k = -1
   print("coroutine all done, returning", k)
   return(k)
end
 
-- Main program
--
print("main creating coroutine")
co = coroutine.create(cf)
 
print("main telling coroutine to resume (actually, to start)")
print("main waiting for coroutine to yield or return")
status, result = coroutine.resume(co, 7)
print("main result from coroutine = ", result)
 
print("main telling coroutine to resume")
print("main waiting for coroutine to yield or return")
status, result = coroutine.resume(co)
print("main result from coroutine = ", result)
 
print("main telling coroutine to resume")
print("main waiting for coroutine to yield or return")
status, result = coroutine.resume(co)
print("main result from coroutine = ", result)
 
===
 
The output:
 
main creating coroutine
main telling coroutine to resume (actually, to start)
main waiting for coroutine to yield or return
coroutine beginning n = 7
coroutine yielding, returning 49
main result from coroutine = 49
main telling coroutine to resume
main waiting for coroutine to yield or return
coroutine resuming
coroutine yielding, returning 343
main result from coroutine = 343
main telling coroutine to resume
main waiting for coroutine to yield or return
coroutine resuming
coroutine all done, returning -1
main result from coroutine = -1
 
I hope this helps a little.
 
Louis
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 02:08PM +0100

On Sun, 4 Aug 2019 09:37:39 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
[snip]
> are, or how exactly they are used or useful. All web pages I can find
> are legible English that still somehow manages to be completely meaningless
> gibberish that tells me nothing.
 
You have summarized coroutines quite well in your posting, and you
shouldn't get too hung up on one particular implementation (the one
which may be included in C++20).
 
The easiest coroutines to understand, and the ones used to implement
"await" semantics on asynchronous operations, are asymmetrical
coroutines implemented using delimited continuations. These normally
have only two control-flow operations, namely a "yield" or "suspend"
operation, and a "resume" operation. When a function executes a
"yield" or "suspend" operation, it returns control to its caller,
somewhat like a return statement. However, unlike a return statement,
a "resume" operation on the coroutine in question will return program
execution to the point at which the yield previously occurred.
 
Asynchronous programming with callbacks normally works so that when an
i/o operation might block because the resource in question (read or
write) is unavailable, instead of blocking the function returns
immediately and registers a callback with the program's event loop
(usually based on select() or poll()), representing the i/o operation's
continuation when the resource becomes available. Programming this way
using callbacks in the style of, say, Node.js is troublesome because it
gives rise to what is sometimes called inversion of control, aka
"callback hell". In particular, control flow ceases to appear as a
sequential series of i/o operations which can easily be reasoned about.
 
Coroutines can offer a way out because when an i/o resource is
unavailable the i/o operation concerned can (i) register a callback
with the event loop which resumes the operation when the resource
becomes available, and then (ii) suspend to the caller (which
indirectly is the event loop). When the resource becomes available the
event loop resumes the coroutine at the point of suspension, so
"tricking" the control flow so that it appears to the coder to be
sequential (like blocking i/o) even though it is in fact executing
asynchronously via an event loop.
 
The Scheme language has first-class continuations built into the
language, which makes implementing coroutines relatively trivial. Here
is one library which uses asymmetrical coroutines for the purposes
mentioned above, and which may or may not help:
https://github.com/ChrisVine/chez-a-sync/wiki .
ECMAScript and Python also now have somewhat more limited asymmetrical
coroutines.
Sam <sam@email-scan.com>: Aug 04 09:17AM -0400

Juha Nieminen writes:
 
> This tells me nothing. I don't get it.
 
You are not alone. I had the same reaction.
 
After reading all the available documentation on it, I believe that
coroutines are a solution in search of a problem.
Sam <sam@email-scan.com>: Aug 04 09:18AM -0400

Chris M. Thomasson writes:
 
>> what they are. I just don't get it.
> [...]
 
> They are like Fibers on Windows:
 
That's because Windows always sucked at multithreading.
Sam <sam@email-scan.com>: Aug 04 09:23AM -0400

Chris Vine writes:
 
> gives rise to what is sometimes called inversion of control, aka
> "callback hell". In particular, control flow ceases to appear as a
> sequential series of i/o operations which can easily be reasoned about.
 
This problem was solved a long time ago. It is called "threads".
 
The funniest thing about this is that Gnome is the most extreme case of this.
The entire Gnome stack is just one event-dispatching main loop. Everything
is callbacks. But since Gnome is C code, none of this will help it; that's
the funny part.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 02:36PM +0100

On Sun, 04 Aug 2019 09:23:52 -0400
> The entire Gnome stack is just one event dispatching main loop. Everything
> is callbacks. But since Gnome is C code, none of this will help it, that's
> the funny part.
 
Threads are very definitely not the answer. Threads work well with
cpu-bound workloads. They are hopeless with i/o-bound workloads, which
is why co-operative (possibly single-threaded) multi-tasking using
coroutines has become popular.
 
If you use native OS threads for i/o bound workloads, you are doing it
wrong.
Sam <sam@email-scan.com>: Aug 04 10:31AM -0400

Chris Vine writes:
 
> cpu-bound workloads. They are hopeless with i/o-bound workloads, which
> is why co-operative (possibly single-threaded) multi-tasking using
> coroutines has become popular.
 
So instead of simply spawning off a separate thread whose only job is a
laughably simple loop that waits for I/O to happen and then does something
simple with it, like saving some data that's read from the socket into a
file, it's much easier to build another rube-goldbergian event-loop based
contraption, and the entire application now must be dragged, kicking and
screaming, into rebuilding under that framework? That's the easier approach?
 
Coroutines are not going to be better at this, either.
 
> If you use native OS threads for i/o bound workloads, you are doing it
> wrong.
 
Nope. One only needs to understand that I/O-bound workloads are not the only
reason that threads exist.
Robert Wessel <robertwessel2@yahoo.com>: Aug 04 10:12AM -0500

On Sun, 4 Aug 2019 09:37:39 -0000 (UTC), Juha Nieminen
>are, or how exactly they are used or useful. All web pages I can find
>are legible English that still somehow manages to be completely meaningless
>gibberish that tells me nothing.
 
 
The problem is that this is a bit oversimplified:
 
var q := new queue

coroutine produce
    loop
        while q is not full
            create some new items
            add the items to q
        yield to consume

coroutine consume
    loop
        while q is not empty
            remove some items from q
            use the items
        yield to produce
 
It's not wrong, but rather it's been reduced to so simple a form that
coroutines are not a great help. That's one of the classic "problems"
with coroutines - there's no trivial example of why you'd want
them.
 
Consider instead:
var z := ...

coroutine produce
    loop
        ...code...
        loop
            ...code...
            create z
            yield to consume
            ...code...
        endloop
        ...code...
    endloop

coroutine consume
    loop
        ...code...
        loop
            ...code...
            yield to produce
            process z
            ...code...
        endloop
        ...code...
    endloop
 
Now try unwinding those into a simple subroutine call.
 
The point is that the producer can generate the output at any
convenient point (that's not different from having the producer call
the consumer at a point convenient for the producer), but also the
*consumer* can acquire the item at a point convenient for the
*consumer*.
 
Consider a program that processes an input stream - it would normally
call getc() (or whatever) at various places, probably several nested
subroutines and/or loops deep. Consider how much more convoluted the
code would be if instead the OS called main() with every character
(and yes, that's the model many event driven GUIs use).
 
The point of a coroutine is that it can "suspend" at some arbitrary
point in its logic (say a dozen subroutine calls and loops deep, plus
a bunch of state in local variables), and then continue from that
point (still those dozen subroutine calls and loops deep, with state
intact), rather than starting at the beginning of the routine again.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 04:42PM +0100

On Sun, 04 Aug 2019 10:31:02 -0400
> file, it's much easier to build another rube-goldbergian event-loop based
> contraption, and the entire application now must be dragged, kicking and
> screaming, into rebuilding under that framework? That's the easier approach?
 
You have it wrong.
 
For toy programs, or for cases where there are only a few concurrent i/o
operations running at any one time, native threads are fine. But
using native OS threads with blocking i/o on i/o-bound workloads does
not scale. No one writes a server these days with one thread per
connection, which would be required for your blocking i/o: instead you
multiplex connections with poll/epoll/kqueue and tune the number of
threads to the workload and the number of cores available. You can have
millions of C++20-style stackless coroutines running at any one time
on a thread, although you are going to run out of file descriptors
before any such limit is reached (I see that my relatively modest 8 core
machine has a hard maximum of 805038 open descriptors although other
things would probably break before then).
 
> > wrong.
 
> Nope. One only needs to understand that I/O-bound workloads are not the only
> reason that threads exist.
 
Clueless.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 04 05:57PM +0100

On 04/08/2019 10:37, Juha Nieminen wrote:
> are, or how exactly they are used or useful. All web pages I can find
> are legible English that still somehow manages to be completely meaningless
> gibberish that tells me nothing.
 
This is why coroutines should be banned: the average programmer simply
doesn't understand them. This state of affairs is probably partly due to
the pervasive understanding that the abstract machine has a stack. C++ is
complicated enough.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Aug 04 11:15AM -0700

On 8/4/2019 6:18 AM, Sam wrote:
>> [...]
 
>> They are like Fibers on Windows:
 
> That's because Windows always sucked at multithreading.
 
lol. ;^)
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Aug 04 11:20AM -0700

On 8/4/2019 6:36 AM, Chris Vine wrote:
> coroutines has become popular.
 
> If you use native OS threads for i/o bound workloads, you are doing it
> wrong.
 
Threads can work fairly well with IO, take a look at IOCP:
 
https://docs.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Aug 04 08:30PM +0100

On Sun, 4 Aug 2019 11:20:51 -0700
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>
wrote:
> > wrong.
 
> Threads can work fairly well with IO, take a look at IOCP:
 
> https://docs.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports
 
I don't know Microsoft's implementation, but as far as I can make out
the i/o is still done asynchronously (the file operations themselves do
not block) but when an asynchronous operation completes that completion
is handled by the completion port. Although threads can wait on the
completion port for an i/o operation to reach completion, there appears
to be some form of multiplexing so that one thread can handle a number
of completions, and the documentation suggests a number of
threads equal to the number of cores (as opposed to the number of
connections) as a good starting point, which seems reasonable.
 
But I cannot say the documentation you refer to is that clear: is my
summary wrong (and if so, what's the point of completion ports in the
first place)? I find it hard to believe Microsoft would have gone for
the one-thread-per-connection option favoured by my respondent.
scott@slp53.sl.home (Scott Lurndal): Aug 04 07:46PM

>cpu-bound workloads. They are hopeless with i/o-bound workloads, which
>is why co-operative (possibly single-threaded) multi-tasking using
>coroutines has become popular.
 
I disagree. So do the Open Data Plane (ODP) folks. Threads work very
well with I/O-bound workloads when used properly. The last time I found a
coroutine useful was in PDP-11 assembler (JSR PC,@(SP)+).
Sam <sam@email-scan.com>: Aug 04 04:43PM -0400

Chris Vine writes:
 
> operations running at any one time then native threads are fine. But
> using native OS threads with blocking i/o on i/o-bound workloads does
> not scale.
 
Says who?
 
> connection, which would be required for your blocking i/o: instead you
> multiplex connections with poll/epoll/kqueue and tune the number of
> threads to the workload and the number of cores available.
 
Maybe you just don't know many people who write server code.
 
> millions of C++20-style stackless coroutines running at any one time
> on a thread, although you are going to run out of file descriptors
> before any such limit is reached
 
Sure, and since coroutines introduce an additional level of complexity –
something also needs to be aware of and keep track of which coroutine has
something that needs to be done, instead of efficiently encapsulating all
I/O handling logic in a simple, straightforward execution thread where no
other part of the code needs to give a shit – it's still unclear what problem
they're trying to solve.
 
Modern operating systems have no problems scaling to thousands, and maybe
even tens of thousands, of execution threads. Perhaps you don't have much
exposure to them, and are only exposed to technically flawed operating
systems with utterly crappy thread implementations?
 
 
> > Nope. One only needs to understand that I/O-bound workloads are not the only
> > reason that threads exist.
 
> Clueless.
 
As opposed to someone who religiously believes that the only reason
someone would use threads is in CPU-bound code?
 
Maybe you should talk to Chrome and Firefox folks, and teach them how to
write code, since you know so much about it, and explain how stupid they
are, creating dozens of threads even when most of their load is I/O bound.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Aug 04 01:58PM -0700

On 8/4/2019 1:43 PM, Sam wrote:
>> multiplex connections with poll/epoll/kqueue and tune the number of
>> threads to the workload and the number of cores available.
 
> Maybe you just don't know many people who write server code.
 
I remember using IOCP back on WinNT 4. The idea of a thread per
connection simply does not scale. However, creating a thread or two per
CPU and using a queue, like IOCP does, can scale.
 
 
aminer68@gmail.com: Aug 04 01:40PM -0700

Hello,
 
 
About memory visibility..
 
 
I have come across an interesting subject regarding memory visibility.
 
As you know, in parallel programming you have to take care not only of
memory ordering but also of memory visibility; read the following to see
why:
 
 
Store Barrier
 
A store barrier, "sfence" instruction on x86, forces all store instructions prior to the barrier to happen before the barrier and have the store buffers flushed to cache for the CPU on which it is issued. This will make the program state "visible" to other CPUs so they can act on it if necessary.
 
I think this is also the case on ARM CPUs and other CPUs.
 
So, as you can see, I think this memory visibility problem makes
parallel programming more "difficult" and more "dangerous".
 
 
What do you think about it ?
 
 
 
Thank you,
Amine Moulay Ramdane.
Szyk Cech <szykcech@spoko.pl>: Aug 04 07:11PM +0200

Hi!
I want to use the SOCI 3.x library with an ancient C++ compiler. So:
What C++ version is required by SOCI 3.x?!?
 
When I read the docs (from the page
http://soci.sourceforge.net/doc/3.2/installation.html )
I noticed strange compile command examples:
$ mkdir build
$ cd build
$ cmake -G "Unix Makefiles" -DWITH_BOOST=OFF -DWITH_ORACLE=OFF (...)
../soci-X.Y.Z
$ make
$ make install
 
Which all disable Boost. So I wonder:
When is Boost required by SOCI 3.x?!?
 
thanks in advance and best regards
Szyk Cech
M Powell <forumsmp@gmail.com>: Aug 03 07:53PM -0700

For starters I apologize if I'm in the wrong forum.
A colleague is using asio IO to transfer messages between two applications. The message size is 300 bytes, and one of the apps uses a periodic timer based on chrono so the message is sent on localhost at fixed intervals (1 Hz, 10 Hz ... up to 10 kHz).
 
For example:
app 1 is set to transfer the message @ 100 Hz
app 1 @T0 increments a counter within the message then transfers the message
 
app 2 receives the message, verifies the counter was incremented, then increments the counter itself and sends the message back
 
The process repeats periodically at the specified interval. app 1 has metrics that track transfer rate and dropped messages.
 
 
At issue: the 10 kHz rate was flagged as questionable by a few members during a review. The code executes on Windows and Linux, and there was added skepticism WRT Windows. The outputs didn't reveal any performance issues with 10 kHz transfers on either Windows or Linux.
 
My question: is there a limitation in TCP transactions or asio that would preclude 10 kHz transfers? Or in Windows itself?
 
It's unclear to me what the potential issues are or could be, and I can't seem to find anything from online searches about thresholds and metrics on TCP transactions, so I thought I'd throw it out here to get perspectives.
 
Thanks in advance.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Aug 04 06:01AM +0200

On 04.08.2019 04:53, M Powell wrote:
 
> At issue. The 10k hertz was flagged as questionable by a few members during a review. The code executes on Windows and Linux and there was added skepticism WRT windows. The outputs didn't revel any performance issues with 10K Herz transfers on either windows or Linux.
 
> My question. Is there a limitation in TCP transactions or asio that would preclude 10K hertz transfers ? The OS windows ?
 
> It's unclear to me what the potential issues are/could be and I can't seem to find anything from online search about threshold and metrics on tcp transactions so thought I'd throw it out here to get perspectives.
 
You are indeed in the wrong forum.
 
Anyway...
 
If the network has sufficient bandwidth then I don't see what the
problem is.
 
However, the old ADSL connection I'm using to post this is pretty slow,
like 12 Mbps for downloads. That's too slow for your 10 kHz => 10 000 *
300 * ~10 = 30 Mbps minimum requirement. It has nothing to do with the
OS and all to do with the connection.
 
Disclaimer: I haven't done any low-level network things since the 1990s,
so the ~10 factor instead of 8 bits per byte is roughly the overhead of
serial cable communication; the overhead of a modern network
connection, and hence its speed requirement, is probably waaaaaaay higher.
 
 
Cheers!,
 
- Alf
Melzzzzz <Melzzzzz@zzzzz.com>: Aug 04 04:12AM


> For starters I apologize if I'm in the wrong forum. A colleague is
> using asio IO to transfer messages between two applications.
 
What's asio IO?
 
> performance issues with 10K Herz transfers on either windows or Linux.
 
> My question. Is there a limitation in TCP transactions or asio that
> would preclude 10K hertz transfers ? The OS windows ?
 
What's asio?
 
> seem to find anything from online search about threshold and metrics
> on tcp transactions so thought I'd throw it out here to get
> perspectives.
 
On loopback interface or LAN not a problem, but what's asio?
 
 
> Thanks in advance.
You are welcome.
 
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
Cholo Lennon <chololennon@hotmail.com>: Aug 04 01:47AM -0300

On 8/3/19 11:53 PM, M Powell wrote:
 
> For starters I apologize if I'm in the wrong forum.
> A colleague is using asio IO to transfer messages between two applications. The message size is 300 bytes and one of the apps is leveraging a periodic timer predicated on chrono so the message is sent on local host at intervals (1 hertz, 10 hertz ...up to 10K hertz)
 
I assume that you are talking about Boost.Asio. The right forum for
that is news://news.gmane.org/gmane.comp.lib.boost.user
 
More information here: https://www.boost.org/community/groups.html
 
 
> My question. Is there a limitation in TCP transactions or asio that would preclude 10K hertz transfers ? The OS windows ?
 
> It's unclear to me what the potential issues are/could be and I can't seem to find anything from online search about threshold and metrics on tcp transactions so thought I'd throw it out here to get perspectives.
 
> Thanks in advance.
 
Best Regards
 
--
Cholo Lennon
Bs.As.
ARG
Paavo Helde <myfirstname@osa.pri.ee>: Aug 04 03:57PM +0300

On 4.08.2019 5:53, M Powell wrote:
 
> The process repeats periodically at the interval specified. app1 has metrics that tracks transfer rate and dropped messages.
 
> At issue. The 10k hertz was flagged as questionable by a few members during a review. The code executes on Windows and Linux and there was added skepticism WRT windows. The outputs didn't revel any performance issues with 10K Herz transfers on either windows or Linux.
 
> My question. Is there a limitation in TCP transactions or asio that would preclude 10K hertz transfers ? The OS windows ?
 
I'm pretty sure the problematic points would not be TCP or the network
stack, but rather the overall performance of the computer, and
especially the variations in its performance. The 10 kHz number is
nothing special; similar problems arise whenever you require
something to happen within some fixed time.
 
A consumer-grade OS like Windows or Linux does not guarantee your
program gets a timeslice for running during each 0.1 ms (i.e. 10 kHz).
For hard guarantees one should use some real-time OS instead. In
Windows/Linux one must accept the possibility of occasional slowdowns,
and code accordingly.
 
The slowdowns may sometimes be severe. If the computer gets overloaded
with too many tasks or too much memory consumption, it may slow to a
crawl. Your program might not get a timeslice even during a whole
second, not to speak about milliseconds. There are some ways to mitigate
this by playing with thread/process priorities and locking pages in RAM,
which might help to an extent.
 
Experience shows Windows tends to be more prone to this "slowing to a
crawl" behavior than Linux, and what's worse, it appears to recover from
such situations much less well, if ever. After a memory
exhaustion event, often the only way to get Windows to function normally
again is a restart. A typical Windows box might need a restart every
few weeks anyway, to restore its original performance.
 
In Windows, you also do not know when an antivirus or some other snoopy
piece of spyware will decide to install itself in the network loopback
interface and start monitoring all the traffic, causing unpredictable
delays. I recall
in one Windows product we replaced the loopback TCP connection by a
solution using shared memory, because there appeared to be random delays
in TCP which we were not able to explain or eliminate. YMMV, of course.
 
In short, if you have full control over the hardware and installed
software on the machine where your program is supposed to work, and have
verified it by prolonged testing, and you can accept occasional loss of
traffic, and can ensure daily restarts of Windows, then you should
probably be OK.
 
Another way is to accept that there might be functionality loss and tell the
user to fix any problems. Your task is a bit similar to audio playback,
although audio is a bit easier, as the important frequencies are way below 10
kHz, and for audio there is special hardware support as well (which I do
not know much about, though). Say an .mp3 file does not play
well because of computer overload: the user can try to close other apps
or restart the machine. If that would be OK for your users, you should be
fine as well.