Friday, February 12, 2021

Digest for comp.lang.c++@googlegroups.com - 24 updates in 5 topics

David Brown <david.brown@hesbynett.no>: Feb 12 08:18AM +0100

On 11/02/2021 22:51, Chris M. Thomasson wrote:
>> the "threading" library and get your multi-processing that way.
 
> Funny that some Python 2 code does not work on Python 3. Always found
> this to be rather odd.
 
/Most/ Python 2 code does not work directly on Python 3 - amongst other
things, "print" was a statement in Python 2, and a function in Python 3.
There were lots of incompatibilities between Python 1.5 and Python 2 as
well, for those that remember that change (Python was not as popular at
that time). The Python "powers that be" consider it acceptable to make
breaking changes between major versions if the changes are a big enough
improvement or fix to the language. They draw the line in a very
different place from, for example, C - with C++ drawing their line
somewhere between these.
mickspud@potatofield.co.uk: Feb 12 11:45AM

On Thu, 11 Feb 2021 17:37:37 +0000
 
>> Any? Hows it going with declarative languages such as SQL or Prolog then?
 
>Yes, any. Why do you feel the need to ask about declarative languages
>specifically?
 
Because beyond the lexical parser they don't break down into the same
execution structures as procedural languages. If you knew anything about
parsing you'd know that.
 
>> Or better yet forget threads and go multiprocess. If you're developing
>> on a proper OS anyway, on Windows the pain isn't worth it I imagine.
 
>You serious, bruv? Threads are essential.
 
Essential for what precisely? Specifically, what can they do that multiprocess
can't?
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 12 02:52PM


> Because beyond the lexical parser they don't break down into the same
> execution structures as procedural languages. If you knew anything about
> parsing you'd know that.
 
Why do you assume that I have no knowledge regarding declarative languages? Do you often make such assumptions when interacting with people you know fuck all about and their projects that you know fuck all about?
 
 
>> You serious, bruv? Threads are essential.
 
> Essential for what precisely? Specifically, what can they do that multiprocess
> can't?
 
I am not in the habit of teaching ignorant, presumptive, arrogant cockwombles the fundamentals, dear.
 
/Flibble
 
--
😎
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 12 02:53PM

On 12/02/2021 07:18, David Brown wrote:
> improvement or fix to the language. They draw the line in a very
> different place from, for example, C - with C++ drawing their line
> somewhere between these.
 
This is why Python will not transition from "toy" status until it gets an ISO Standard.
 
/Flibble
 
--
😎
mickspud@potatofield.co.uk: Feb 12 03:26PM

On Fri, 12 Feb 2021 14:52:01 +0000
>> execution structures as procedural languages. If you knew anything about
>> parsing you'd know that.
 
>Why do you assume that I have no knowledge regarding declarative languages? Do
 
Because you pretend you have a lot of knowledge about a lot of things but
when pressed you tend to come up short and resort to...
 
>you often make such assumptions when interacting with people you know fuck all
>about and their projects that you know fuck all about?
 
...answering questions with a question just like a 2nd-rate politician.
 
So let's see an example of your wonder compiler compiling some SQL, Prolog or
other declarative language. I mean, since it's a universal compiler you'd have
done that, right?
 
>> can't?
 
>I am not in the habit of teaching ignorant, presumptive, arrogant cockwombles
>the fundamentals, dear.
 
Another of your standard responses when you can't answer. But thanks for
playing son.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 12 04:04PM


>> Why do you assume that I have no knowledge regarding declarative languages? Do
 
> Because you pretend you have a lot of knowledge about a lot of things but
> when pressed you tend to come up short and resort to...
 
Assertions made without evidence can be dismissed with evidence.
 
 
> So lets see an example of your wonder compiler compiling some SQL, Prolog or
> other declarative language. I mean since its a universal compiler you'd have
> done that, right?
 
Since when is SQL ever "compiled"?
 
>> the fundamentals, dear.
 
> Another of your standard responses when you can't answer. But thanks for
> playing son.
 
I don't believe I have ever responded with those exact words before, dear, ergo it cannot be one of my "standard" responses.
 
If you want to learn about threads then I suggest you go back to school, dear.
 
/Flibble
 
--
😎
mickspud@potatofield.co.uk: Feb 12 04:20PM

On Fri, 12 Feb 2021 16:04:05 +0000
 
>> Because you pretend you have a lot of knowledge about a lot of things but
>> when pressed you tend to come up short and resort to...
 
>Assertions made without evidence can be dismissed with evidence.
 
So prove me wrong.
 
>> other declarative language. I mean since its a universal compiler you'd have
>> done that, right?
 
>Since when is SQL ever "compiled"?
 
Oh I dunno, probably since the 1980s. Or do you think RDBMSs store procedures
as plain text and then parse them each time?
 
 
>I don't believe I have ever responded with those exact words before, dear,
>ergo it cannot be one of my "standard" responses.
 
>If you want to learn about threads then I suggest you go back to school, dear.
 
I've been fully versed in threads for two decades, buttercup, and I've yet to
see anything they can do that multiprocess can't, though mileage may vary for
each depending on the use case. However, I'm speaking from a unix POV. I guess
if you've only programmed on a toy OS like Windows that can't even do something
as fundamental as multiplexing network sockets without threading, then you
may well be screwed without them.
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 12 04:33PM

> if you've only programmed on a toy OS like Windows that can even do something
> as fundamental as multiplexing network sockets without threading then I guess
> you may well be screwed without them.
 
It is "UNIX" not "unix", dear. And as far as UNIX-like is concerned: Linux's overcommit/OOM-killer to support fork()ing is a fucking omnishambles; but you wouldn't know this of course as you are a fucking clueless anachronism.
 
If you knew anything useful about threads you would know what advantages they have over processes; I repeat: go back to school, dear.
 
/Flibble
 
--
😎
mickspud@potatofield.co.uk: Feb 12 05:12PM

On Fri, 12 Feb 2021 16:33:09 +0000
>On 12/02/2021 16:20, mickspud@potatofield.co.uk wrote:
 
Still waiting for your proof. Take your time.
 
 
>It is "UNIX" not "unix", dear. And as far as UNIX-like is concerned: Linux's
>overcommit/OOM-killer to support fork()ing is a fucking omnishambles; but you
>wouldn't know this of course as you are fucking clueless anachronism.
 
Oh dear, someone tell the child about copy-on-write.
 
If you're going to google stuff and pretend it's your own knowledge you might
want to have a clue first. FWIW overcommit is pretty standard amongst OSes of
all colours and has been for decades.
 
>If you knew anything useful about threads you would know what advantages they
>have over processes; I repeat: go back to school, dear.
 
Thanks for proving my point. Have a good w/e with your boyfriend cupcake.
scott@slp53.sl.home (Scott Lurndal): Feb 12 06:18PM

>>overcommit/OOM-killer to support fork()ing is a fucking omnishambles; but you
>>wouldn't know this of course as you are fucking clueless anachronism.
 
>Oh dear, someone tell the child about copy-on-write.
 
copy-on-write and overcommit are two orthogonal concepts.
 
That said, linux allows one to disable overcommit quite easily.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 12 01:08PM -0800

> if you've only programmed on a toy OS like Windows that can even do something
> as fundamental as multiplexing network sockets without threading then I guess
> you may well be screwed without them.
 
Are you familiar with IOCP?
Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk>: Feb 12 10:22PM

> Mr Flibble <flibble@i42.REMOVETHISBIT.co.uk> wrote:
>> On 12/02/2021 16:20, mickspud@potatofield.co.uk wrote:
 
> Still waiting for your proof. Take your time.
 
Proof of what, dear?
 
>> overcommit/OOM-killer to support fork()ing is a fucking omnishambles; but you
>> wouldn't know this of course as you are fucking clueless anachronism.
 
> Oh dear, someone tell the child about copy-on-write.
 
The problem, dear, is that when lots of processes start "copy-on-writing", causing over-committed pages to actually be allocated, you run out of memory and Linux starts killing random processes. This is the omnishambles, dear.
 
> If you're going to google stuff and pretend its your own knowledge you might
> want to have a clue first. FWIW overcommit is pretty standard amongst OS's of
> all colours and has been for decades.
 
I haven't googled anything, dear. Perhaps you have and are trying to misdirect? Unsuccessfully?
 
 
>> If you knew anything useful about threads you would know what advantages they
>> have over processes; I repeat: go back to school, dear.
 
> Thanks for proving my point. Have a good w/e with your boyfriend cupcake.
 
You don't appear to have any points of interest, dear.
 
/Flibble
 
--
😎
Marcel Mueller <news.5.maazl@spamgourmet.org>: Feb 12 07:34AM +0100

Am 11.02.21 um 12:17 schrieb David Brown:
> "real" locks - mutexes supplied by the OS - there are priority boosting
> mechanisms to avoid deadlocks or blocks when a high priority thread
> wants a lock that is held by a low priority thread.
 
I worked on similar architectures in the past (inmos T8xx: hardware
scheduler, high priority not preempted by time slice, guaranteed IRQ
response in less than 1µs at 20 MHz!).
 
> takes a lock, and then it is pre-empted (such as by an external
> interrupt from a device) and a high priority thread or interrupt routine
> wants the lock, that high priority thread will spin forever.
 
- Raw spin locks make no sense on /any/ single core.
- Priority inversion applies to /any/ mutex.
 
Spin locks with sleep(0), and when that does not help sleep(1), might be a
compromise. But this is more of a workaround than a sophisticated
solution, since it does not address the origin of the problem at all.
 
> All this means that if you use atomics in your embedded system,
> everything looks fine and all your testing will likely succeed.
 
Indeed, lock-free is one of the first choices IMHO.
But this might not be sufficient either. CAS loops might spin forever in
some cases if the algorithm does not guarantee forward progress.
 
> But
> there is always the chance of a deadlock happening.
 
Use the C++ standard atomic<T> and check for is_always_lock_free with a
static assertion and you are fine.
 
> loop) on single-core devices is to disable interrupts temporarily. It
> won't help for multi-core systems - there the locking must be done with
> OS help.
 
This is, of course, not covered by the C++ standard. It also implies
other impacts, e.g. making IRQ response times unpredictable.
Personally I dislike this old hack from the 70s/80s.
 
On hardware with strict priorities this might not be required at all
because e.g. an interrupt handler never gets preempted. Of course, you
always need to know the interrupt context where your code is running,
which might be difficult in lower level functions like libraries.
 
 
Marcel
David Brown <david.brown@hesbynett.no>: Feb 12 10:09AM +0100

On 12/02/2021 07:34, Marcel Mueller wrote:
> compromise. But this is more like a work around rather than a
> sophisticated solution since it does not address the origin of the
> problem at all.
 
The aim of this implementation appears to be to make it simple and OS
independent. I don't believe that can be done.
 
>> there is always the chance of a deadlock happening.
 
> Use the C++ standard atomic<T> and check for is_always_lock_free with a
> static assertion and you are fine.
 
There is no problem with the types that are always lock free - the
compiler generates code for these directly. But then, for types that
are always lock free, you generally don't need atomics at all - with a
single core system, "volatile" is fine (plus the occasional memory
barrier) for loads and stores. (And if you stick to the sanity rule of
only writing to a given object in /one/ thread, then RMW operations are
fine too.)
 
The fun comes with bigger types. And on a 32-bit processor, that
includes 64-bit types - which are not uncommon. That is when having
atomic types in the language becomes a big benefit - and it is when they
fail completely with this implementation.
 
(Just for fun, the Cortex-M can /read/ 64-bit data atomically, as the
double-read instruction is restartable. But it can't write it
atomically - an interrupt in the middle of the instruction will leave
the object halfway updated.)
 
>> won't help for multi-core systems - there the locking must be done with
>> OS help.
 
> This is, of course, not covered by the C++ standard.
 
This is an implementation library in a compiler - it is not covered by
the C++ standard either. The standard only says what the code should
do, not how it should do it (and in this case, the code does not work).
 
> It also implies
> other impacts, e.g. making IRQ response times unpredictable.
> Personally I dislike this old hack from the 70s/80s.
 
The "old hack" is far and away the easiest, safest and most efficient
method. IRQ response times are /always/ unpredictable - if you think
otherwise, you have misunderstood the system. The best you can do is
figure out a maximum response time, and even then it may only apply to
the highest priority interrupt. Real time systems are not about saying
when things will happen - they are about giving guarantees for the
maximum delays.
 
Remember, during most interrupt handling functions on most systems,
interrupts are disabled - your maximum interrupt response is already
increased by the time it takes to run your timer interrupt, or UART
interrupt, or ADC interrupt, or whatever other interrupts you have.
These will all be much longer than the time for a couple of loads or
stores. Thus disabling interrupts around atomic accesses does not
affect IRQ response times.
 
 
> because e.g. an interrupt handler never gets preempted. Of course, you
> always need to know the interrupt context where your code is running,
> which might be difficult in lower level functions like libraries.
 
Absolutely true. And that is often how these things get handled in
practice.
 
For example, if you have a 64-bit value that is updated only in an
interrupt routine and never read by anything of higher priority, it can
just write the data directly with two write instructions. If an even
higher priority interrupt breaks in in the middle, that's okay. The
lower priority task that reads that value can do the read in a loop - it
keeps reading the value until it gets two consistent reads.
 
There is no single maximally efficient solution that will work in all
cases - any generic method will be overkill for some use-cases. But a
toolchain-provided generic method that doesn't always work - that is,
IMHO, worse than no implementation.
Marcel Mueller <news.5.maazl@spamgourmet.org>: Feb 12 07:35PM +0100

Am 12.02.21 um 10:09 schrieb David Brown:
> are always lock free, you generally don't need atomics at all - with a
> single core system, "volatile" is fine (plus the occasional memory
> barrier) for loads and stores.
 
Strictly speaking this is not true if you take DMA into account. But
this is not a common use case.
 
> (And if you stick to the sanity rule of
> only writing to a given object in /one/ thread, then RMW operations are
> fine too.)
 
Indeed.
 
> includes 64-bit types - which are not uncommon. That is when having
> atomic types in the language becomes a big benefit - and it is when they
> fail completely with this implementation.
 
Sure, when the hardware does not allow lock free access, then there are
no generic, satisfactory solutions.
 
> double-read instruction is restartable. But it can't write it
> atomically - an interrupt in the middle of the instruction will leave
> the object halfway updated.)
 
In this case you cannot use DWCAS on this platform. You need to seek
other solutions, e.g. storing and replacing a 32-bit pointer to the
actual value.
 
 
> This is an implementation library in a compiler - it is not covered by
> the C++ standard either. The standard only says what the code should
> do, not how it should do it (and in this case, the code does not work).
 
So the library needs to be adjusted in a platform-dependent way.
 
> method. IRQ response times are /always/ unpredictable - if you think
> otherwise, you have misunderstood the system. The best you can do is
> figure out a maximum response time,
 
There are systems with guaranteed maximum values.
 
> and even then it may only apply to
> the highest priority interrupt.
 
Of course.
 
> Real time systems are not about saying
> when things will happen - they are about giving guarantees for the
> maximum delays.
 
Exactly. But that is enough.
 
> interrupt, or ADC interrupt, or whatever other interrupts you have.
> These will all be much longer than the time for a couple of loads or
> stores.
 
Context switches can be quite expensive if your hardware has many
registers (including MMU) and no distinct register sets for different
priority levels.
 
> Thus disabling interrupts around atomic accesses does not
> affect IRQ response times.
 
Feel free to do so if it is suitable on your platform. You already
mentioned that this is not sufficient on multi-core systems, which are
becoming quite common for embedded systems nowadays too.
 
 
> cases - any generic method will be overkill for some use-cases. But a
> toolchain-provided generic method that doesn't always work - that is,
> IMHO, worse than no implementation.
 
Obviously your particular case is sensitive to priority inversion.
 
But this is always the case when you use libraries. They cover /common
cases/ not all cases.
If a generic atomic library does not guarantee forward progress when used
with different priorities, it is not suitable for this case.
 
 
Marcel
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 12 01:16PM -0800

On 2/11/2021 10:34 PM, Marcel Mueller wrote:
> Am 11.02.21 um 12:17 schrieb David Brown:
[...]
> Indeed, lock-free is one of the first choices IMHO.
> But this might not be sufficient either. CAS loops might spin forever in
> some cases if the algorithm does not guarantee forward progress.
 
Side note... IIRC, Joe Seigh, who worked over at IBM, told me that there
was actually a mechanism to prevent infinite livelock in their CAS
operation. I need to send him an email to clarify. Also, IIRC, a Windows
kernel guy mentioned something kind of similar wrt the Windows SList,
and even allowed for node deletion using SEH.
 
 
[...]
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 12 01:19PM -0800

On 2/12/2021 1:09 AM, David Brown wrote:
>> problem at all.
 
> The aim of this implementation appears to be to make it simple and OS
> independent. I don't believe that can be done.
[...]
 
Agreed. It's too sensitive.
Brian Wood <woodbrian77@gmail.com>: Feb 11 06:09PM -0800

On Thursday, February 11, 2021 at 4:49:32 PM UTC-6, Chris M. Thomasson wrote:
> Perhaps you can use a function pointer to specific code. Say setting the
> function pointer to qlz_compression_level_1 or something. A pure virtual
> base class where you can implement each level separately?
 
This code is from another developer and, to the best of my knowledge,
he doesn't provide a C++ version. I don't understand his code very well,
so I'm limiting myself to making minor changes and then running it by
anyone interested.
 
 
Brian
David Brown <david.brown@hesbynett.no>: Feb 12 09:25AM +0100

On 11/02/2021 23:34, Brian Wood wrote:
>> handle both as part of one entity and 'extern "C"' all of it.
 
> I decided to go that route and enlarged what's covered by the
> extern "C".
 
extern "C" language linkage does two things. It keeps the external
linkable names simple - no name mangling (for functions), no namespaces.
And it uses the C ABI for calling conventions, instead of the C++ ABI.
(I don't know of any compilers where this makes a difference - thus
function types and function pointer types are not going to be affected
on any platform where the ABIs match.)
 
But be careful that even within an extern "C" block, class member
declarations and member function type declarations are "C++".
 
<https://en.cppreference.com/w/cpp/language/language_linkage>
 
 
The common uses of extern "C" are either as a wrapper for a whole header
file (or at least the part covering declarations of C functions), or
specifically for one function at a time.
 
 