Sunday, February 21, 2021

Digest for comp.lang.c++@googlegroups.com - 10 updates in 3 topics

Bonita Montero <Bonita.Montero@gmail.com>: Feb 21 05:16PM +0100

> I linked you to an algorithm I invented that is loop-free.
> It fairly alternates between batches of readers and writers.
 
It may alternate, but it's surely not fair.
Richard Damon <Richard@Damon-Family.org>: Feb 21 01:15PM -0500

On 2/21/21 11:16 AM, Bonita Montero wrote:
>> I linked you to an algorithm I invented that is loop-free.
>> It fairly alternates between batches of readers and writers.
 
> It may alternate, but it's surely not fair.
 
The problem is that there isn't a uniform definition of 'fair', and deciding
what counts as fair requires making value judgements.
 
Reminds me of a discussion of 'fairness' in scheduling algorithms when I
was in college. This was in the days when you would keypunch your
program, submit the deck to the computer center, and get your results
some time later.
 
The question is: which of these two scheduling algorithms is 'fairer'?
 
60 people come up and submit their jobs at basically the same time. Each
job is going to take 1 minute of CPU time to run.
 
Method 1: the first job in line is run to completion, then the second job,
then the third, and so on. The first person gets their results after a
minute, the next a minute later, and so on, so the last person gets their
results 60 minutes later.
 
Method 2: all the jobs are loaded into memory at once and time-sliced
between, each getting 1 second of time in turn. Every job finishes at the
end of the hour.
 
 
Many people will label the second method as 'fairer' because everyone gets
the same pain, but no one in the second example was any better off than
in the first, and almost everyone was worse off, so by most efficiency
measures it is clearly worse.
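 
[A quick back-of-the-envelope check of that comparison - a sketch added here,
not from the original thread - computing the average turnaround under the two
methods for the 60-job example.]
 
#include <cstdio>
 
// Average turnaround time for 60 one-minute jobs submitted at once.
int main() {
    const int jobs = 60;
 
    // Method 1 (run each job to completion): job k finishes after k minutes.
    double m1 = 0;
    for (int k = 1; k <= jobs; ++k) m1 += k;
    m1 /= jobs;                      // (1 + 2 + ... + 60) / 60 = 30.5 minutes
 
    // Method 2 (1-second round-robin slices): every job finishes at ~60 minutes.
    double m2 = jobs;
 
    std::printf("method 1 average: %.1f min, method 2 average: %.1f min\n",
                m1, m2);
}
 
[This is the "average response time of half the second" figure mentioned later
in the thread: 30.5 minutes versus 60.]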
 
Now, if the tasks don't all take exactly the same length of time to run, or
aren't all submitted at once, the value judgements about what should count
as fair come into play, and 'fairness' becomes hard to define.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 21 07:58PM +0100

> the same pain, but no one in the second example was any better off than
> in the first, and almost everyone was worse off, so by most efficiency
> measures it is clearly worse.
 
You can't compare that to the discussed issue because there's no
opportunity for one task to capture all the CPU time.
Richard Damon <Richard@Damon-Family.org>: Feb 21 03:58PM -0500

On 2/21/21 1:58 PM, Bonita Montero wrote:
>> it is clearly worse.
 
> You can't compare that to the discussed issue because there's no
> opportunity for one task to capture all the CPU time.
 
The key point isn't what resource was being managed, but how we really
define 'fair'. The key part of the example was that many people thought the
second method was fairer, because there was much less difference in
response times across clients, even though it came at the cost of making
almost everyone wait longer than they would have under the first. The first
method has an average response time of half the second's, and no one got
worse performance. Only by a metric on variation in response times can the
second be shown to be superior, but some think that makes it fairer.
 
To define 'fairness' we need to define some metric to measure it.
Normally we can't get 'perfect' fairness except under very special
conditions. Normally we call a system 'fair' if every client can
get some reasonable amount of the resource (even if it isn't a 'fair
share'). Tighter definitions of 'fairness' might try to limit the
inequities in allocation (and unless the clients are identical in their
needs, we need some way to 'fairly' weight those needs before we can
measure inequality). Generally, attempts to make a system 'fairer' often
impact the efficiency of the system. In my example, reducing the variance
in the wait times doubled the average wait time, and by increasing the
number of context switches we might actually have increased the total time
to run all the jobs.
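 
[One concrete metric of the kind being described - an example added here, not
one proposed in the thread - is Jain's fairness index over the per-client
response times: it is 1.0 when everyone gets identical service and falls
toward 1/n as service concentrates on a few clients. By that metric the
round-robin schedule scores "perfectly fair" even though everyone waited
longer.]
 
#include <cstdio>
#include <vector>
 
// Jain's fairness index: (sum x)^2 / (n * sum x^2), applied to the
// turnaround times of the 60-job example from earlier in the thread.
double jain(const std::vector<double>& x) {
    double s = 0, s2 = 0;
    for (double v : x) { s += v; s2 += v * v; }
    return (s * s) / (x.size() * s2);
}
 
int main() {
    std::vector<double> run_to_completion, round_robin;
    for (int k = 1; k <= 60; ++k) {
        run_to_completion.push_back(k);   // job k finishes after k minutes
        round_robin.push_back(60);        // every job finishes at the hour
    }
    std::printf("run-to-completion: %.3f, round-robin: %.3f\n",
                jain(run_to_completion), jain(round_robin));  // ~0.756 vs 1.000
}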
 
My key point is that the first step is to define what we mean by 'fair';
then we can decide whether a given strategy meets that definition (and
perhaps how well).
 
Thus, if you want to call something 'unfair', you need to define what
you mean by fair. It is quite possible to define an RW-lock that guarantees
that, as long as no locker fails to release its lock within a reasonable
time, every request WILL eventually be granted. It just requires that at
some point after a write request gets queued the system stops granting new
read requests (so the writer will eventually get in once all the current
readers finish), and that a reasonably fair method (like oldest-first) is
used for granting requests when the lock is free. Yes, perhaps the simplest
RW-lock will not be able to do this, but then the issue isn't about
RW-locks in general, but with that style of implementation. If you are
using some other definition of 'fair', you need to define it.
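 
[For concreteness, a minimal sketch of that policy - an illustration added
here, not anyone's posted implementation; the name fair_rw_lock is made up.
Once a writer queues up, new readers are held back, the current readers
drain, the writer runs, and readers that queued behind it are released as a
batch.]
 
#include <condition_variable>
#include <mutex>
 
class fair_rw_lock {
    std::mutex m_;
    std::condition_variable readers_cv_, writers_cv_;
    int active_readers_ = 0;
    int waiting_writers_ = 0;
    bool writer_active_ = false;
 
public:
    void lock_shared() {
        std::unique_lock<std::mutex> lk(m_);
        // New readers wait while a writer is active *or* queued,
        // so readers cannot starve writers.
        readers_cv_.wait(lk, [&] { return !writer_active_ && waiting_writers_ == 0; });
        ++active_readers_;
    }
 
    void unlock_shared() {
        std::unique_lock<std::mutex> lk(m_);
        if (--active_readers_ == 0 && waiting_writers_ > 0)
            writers_cv_.notify_one();        // hand off to a queued writer
    }
 
    void lock() {                            // exclusive / write lock
        std::unique_lock<std::mutex> lk(m_);
        ++waiting_writers_;
        writers_cv_.wait(lk, [&] { return !writer_active_ && active_readers_ == 0; });
        --waiting_writers_;
        writer_active_ = true;
    }
 
    void unlock() {
        std::unique_lock<std::mutex> lk(m_);
        writer_active_ = false;
        if (waiting_writers_ > 0)
            writers_cv_.notify_one();        // next writer first
        else
            readers_cv_.notify_all();        // then release the reader batch
    }
};
 
[This gives the "stop admitting readers once a writer is queued" behaviour.
It is not strictly alternating - back-to-back writers can run before a
waiting reader batch - and unlike the loop-free algorithm discussed in this
thread it simply blocks on a mutex and condition variables.]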
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 02:35PM -0800

On 2/21/2021 12:58 PM, Richard Damon wrote:
> issue isn't about RW-locks in general, but with that style of
> implementation. If you are using some other definition of 'fair', you
> need to define it.
 
I agree. Fwiw, my rwmutex does not allow readers to starve out writers
and vice versa. So, if a writer requests the lock while there is a shit
load of concurrent readers, it will wait until all of the readers in the
current "batch" have completed and will automatically own the lock for
write access after that. If a batch of readers comes in while the writer
is doing its thing, they will all be released at once and automatically
own read access once the writer is finished. This way, readers cannot
starve out writers and vice versa. Now, if one does not desire that type
of alternating, basically classic bakery-algorithm-like behavior, then
do NOT use my rwmutex.
 
https://vorbrodt.blog/2019/02/14/read-write-mutex
 
I like how he gave me proper credit. Nice. :^)
 
Actually, my rwmutex is pretty short and sweet with no loops.
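 
[For illustration only - this is not the rwmutex from the link above, and it
assumes the hypothetical fair_rw_lock sketch shown earlier in this digest -
a small driver that exercises that batching behaviour: the writer keeps
getting in even while readers keep arriving.]
 
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>
// assumes the fair_rw_lock sketch from earlier in this digest is visible here
 
fair_rw_lock rw;
int shared_value = 0;
 
void reader(int id) {
    for (int i = 0; i < 5; ++i) {
        rw.lock_shared();
        std::printf("reader %d sees %d\n", id, shared_value);
        rw.unlock_shared();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
 
void writer() {
    for (int i = 0; i < 5; ++i) {
        rw.lock();       // new readers are held back while this is queued
        ++shared_value;
        rw.unlock();     // the waiting reader batch is released here
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
 
int main() {
    std::vector<std::thread> threads;
    for (int id = 0; id < 4; ++id) threads.emplace_back(reader, id);
    threads.emplace_back(writer);
    for (auto& t : threads) t.join();
}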
Manfred <noname@add.invalid>: Feb 17 07:43PM +0100

On 2/17/2021 5:30 PM, David Brown wrote:
> movl 1(%rdi), %eax
> ret
 
> Exactly which cache lines are "evicted" by the use of memcpy here?
 
I think this still does not solve the issue at the language level.
The standard does not mandate the behaviour shown by your example, so even
if compilers do compensate for the inefficiency of the code, this does not
make the language good.
 
Looking at your first and second examples above, there is no reason why one
should prefer getb2 over getb1; in fact getb2, as written, is less
efficient than getb1 because it introduces some unneeded extra storage and
an extra function call - albeit at the level of the abstract machine - with
no benefit in readability or robustness against bugs.
If the language somehow requires getb2 instead of getb1, I see this as an
inefficiency in the language.
The fact that the compiler remedies this by applying some optimization that
is invisible to the language specification does not make the language
itself any more efficient.
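 
[The getb1/getb2 functions being discussed are not quoted in this digest, so
purely as an illustrative reconstruction - the names and exact shapes here
are guesses - the usual cast-versus-memcpy pair in this kind of debate looks
like this; on x86-64 both typically compile at -O2 to the single load quoted
above, movl 1(%rdi), %eax.]
 
#include <cstdint>
#include <cstring>
 
// Hypothetical reconstruction -- the actual getb1/getb2 from the thread
// are not shown here.
 
// getb1: load through a cast pointer.  Unless p really points into a
// uint32_t object, this breaks the aliasing rules.
std::uint32_t getb1(const unsigned char* p) {
    return *reinterpret_cast<const std::uint32_t*>(p + 1);
}
 
// getb2: the memcpy form.  Well-defined for any four readable bytes; the
// "extra storage" and "extra function call" exist only in the abstract
// machine, since mainstream compilers fold the memcpy into one load.
std::uint32_t getb2(const unsigned char* p) {
    std::uint32_t v;
    std::memcpy(&v, p + 1, sizeof v);
    return v;
}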
mickspud@potatofield.co.uk: Feb 17 09:03AM

On Tue, 16 Feb 2021 22:54:27 -0500
>> Memory is memory, it doesn't have a type.
 
>The C standard says otherwise, and so does the C++ standard. That type
>is kept track of by the compiler, not by the hardware, but it does have
 
As I said below.
 
>That would make things very easy - dereferencing a pointer to void has
 
Good luck with that.
 
>> Unions are another matter entirely mainly because endianess issues tend to
>> occur with them regardless of memory alignment.
 
>Unions are very much the same matter - almost every use of a union to
 
Yes I know, do try and understand idiom.
 
>you aren't aware of how to write your code to avoid violating the
>anti-aliasing rules, there's probably many other rules that you also
>don't know how to avoid violating.
 
I'm getting tired of this patronising crap from pedants. I've NEVER seen
type punning fail, so long as pragmas were used correctly, regardless of
optimisation; but if any of you can provide a simple example where it does
fail, feel free to post it instead of a lot of hand-waving vagaries and
appeals to "the standard".
Anton Shepelev <anton.txt@gmail.com>: Feb 21 02:54PM +0300

Chris M. Thomasson:
 
> > Well, I have personally included a .c file.
> ^^^^^^^^^^^^
> NEVER
 
A Freudian slip. Now we know what coding practices Mr.
Thomasson indulges in when in private :-)
 
--
() ascii ribbon campaign -- against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 02:15PM -0800

On 2/21/2021 3:54 AM, Anton Shepelev wrote:
>> NEVER
 
> A Freudian slip. Now we know what coding practices Mr.
> Thomasson indulges in when in private :-)
 
lol! :^D
 
I just never found the need to include a .c file. However, I have had to
work on code that did do that. Shit happens. :^)
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Feb 18 08:47PM -0500

Is it possible to define Standard C++ Library Container Types UM, OM and L such
that:
 
- UM is an std::unordered_map with key_type int and mapped_type iterator to OM
- OM is an std::map with key_type int and mapped_type L
- L is an std::list of iterators to UM
 
?
 
TIA,
-Pavel.
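 
[Not an answer from the thread, but one sketch of how such a type cycle is
sometimes broken: wrap the iterators in small structs that can be
forward-declared (so the mapped/element types are one member away from the
iterators the question literally asks for), and lean on std::list's C++17
support for incomplete element types. The names OMIter and UMIter are made
up for illustration.]
 
#include <list>
#include <map>
#include <unordered_map>
 
// Sketch only: the wrapper structs break the UM -> OM -> L -> UM cycle.
struct OMIter;   // will hold an iterator into OM
struct UMIter;   // will hold an iterator into UM
 
using L  = std::list<UMIter>;                 // std::list allows incomplete elements (C++17)
using OM = std::map<int, L>;                  // key_type int, mapped_type L
using UM = std::unordered_map<int, OMIter>;   // mapped_type wraps an OM iterator
 
struct OMIter { OM::iterator it; };
struct UMIter { UM::iterator it; };
 
int main() {
    UM um;
    OM om;
    auto om_pos = om.emplace(1, L{}).first;             // OM entry with an empty L
    auto um_pos = um.emplace(1, OMIter{om_pos}).first;  // UM entry pointing at it
    om_pos->second.push_back(UMIter{um_pos});           // L entry pointing back into UM
}
 
[Caveat: unordered_map iterators are invalidated by rehashing, so storing
them inside L for any length of time needs care, e.g. reserving enough
buckets up front.]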
