Monday, February 22, 2021

Digest for comp.programming.threads@googlegroups.com - 14 updates in 1 topic

Bonita Montero <Bonita.Montero@gmail.com>: Feb 21 05:16PM +0100

> I linked you to an algorithm I invented that is loop-free.
> It fairly alternates between batches of readers and writers.
 
It may alternate, but that surely isn't fair.
Richard Damon <Richard@Damon-Family.org>: Feb 21 01:15PM -0500

On 2/21/21 11:16 AM, Bonita Montero wrote:
>> I linked you to an algorithm I invented that is loop-free.
>> It fairly alternates between batches of readers and writers.
 
> It may alternate, but that surely isn't fair.
 
The problem is that there isn't a uniform definition of 'fair', and
defining fairness requires making value judgments.
 
Reminds me of a discussion of 'fairness' in scheduling algorithms when I
was in college. This was in the days when you would keypunch your
program, submit the deck to the computer center, and get your results
some time later.
 
The question is, which of these two scheduling algorithms is 'fairer'?
 
60 people come up and submit their jobs at basically the same time, and
each job is going to take 1 minute of CPU time to run.
 
Method 1: The first job in line is run to completion, then the second
job, then the third, and so on. The first person gets their results
after one minute, the next a minute later, and so on, so the last
person gets their results 60 minutes later.
 
Method 2: All the jobs are loaded into memory at once and the CPU is
time-sliced between them, each job getting 1 second, then the next, and
so on. Every job finishes at the end of the hour.
 
 
Many people will label the second method as 'fairer' since everyone
gets the same pain, but no one in the second example is any better off
than in the first, and almost everyone is worse off, so by most
efficiency measures it is clearly worse.
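 
The arithmetic is easy to check with a toy model. A rough C++ sketch
(purely illustrative; no context-switch overhead or real scheduler is
modeled) that computes the completion times under both methods:
 
// Toy model of the two scheduling methods: 60 jobs, each needing
// 60 seconds of CPU, all submitted at time zero.
#include <cstdio>
#include <vector>

int main() {
    const int jobs = 60;
    const int job_seconds = 60;

    // Method 1: run each job to completion, first come first served.
    std::vector<int> fcfs(jobs);
    for (int i = 0; i < jobs; ++i)
        fcfs[i] = (i + 1) * job_seconds;            // job i finishes here

    // Method 2: round-robin with a 1-second slice over all 60 jobs.
    // Each job receives its final slice in the last round, so every
    // job finishes somewhere in the final minute of the hour.
    std::vector<int> rr(jobs);
    for (int i = 0; i < jobs; ++i)
        rr[i] = (job_seconds - 1) * jobs + (i + 1); // job i's last slice

    auto avg = [](const std::vector<int>& v) {
        long sum = 0;
        for (int t : v) sum += t;
        return double(sum) / v.size();
    };

    std::printf("FCFS : first %ds, last %ds, average %.1fs\n",
                fcfs.front(), fcfs.back(), avg(fcfs));
    std::printf("RR   : first %ds, last %ds, average %.1fs\n",
                rr.front(), rr.back(), avg(rr));
}
 
Running it gives an average completion time of 1830s (30.5 minutes) for
the first method versus roughly 3570s (about 59.5 minutes) for the
second, with no job finishing any earlier under round robin.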
 
Now, if the tasks don't all take exactly the same time to run, or
aren't all submitted at once, the value arguments about what should
count as fair come into play, and 'fairness' gets hard to define.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 21 07:58PM +0100

> the same pain, but no one in the second example is any better off
> than in the first, and almost everyone is worse off, so by most
> efficiency measures it is clearly worse.
 
You can't compare that to the issue being discussed, because there is
no opportunity for one task to capture all of the CPU time.
Richard Damon <Richard@Damon-Family.org>: Feb 21 03:58PM -0500

On 2/21/21 1:58 PM, Bonita Montero wrote:
>> it is clearly worse.
 
> You can't compare that to the issue being discussed, because there is
> no opportunity for one task to capture all of the CPU time.
 
The key point isn't which resource was being managed, but how we
really define 'fair'. What the example shows is that many people
thought the second method was fairer because there was much less
difference in response times across clients, even though that came at
the cost of making almost everyone wait longer than they would have
under the first. The first method has an average response time of half
the second's, and no one gets worse performance under it. Only by a
metric based on variation in response time can the second be shown to
be superior, but some think that makes it fairer.
 
To define 'fairness' we need to define some metric to measure it.
Normally we can't get 'perfect' fairness except under very special
conditions. Normally we call a system 'fair' if every client can get
some reasonable amount of the resource (even if it isn't a 'fair
share'). Tighter definitions of 'fairness' might try to limit the
inequities in allocation (and if the clients aren't identical in their
needs, we need some way to weight those needs 'fairly' before we can
measure the inequality). Generally, attempts to make a system 'fairer'
can hurt its efficiency. In my example, reducing the variance in the
wait times doubled the average wait time, and by increasing the number
of context switches we might actually have increased the total time to
run all the jobs.
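 
One way to make that concrete is to pick two candidate metrics, say
mean completion time (an efficiency measure) and standard deviation of
completion time (an equality measure), and evaluate both for the 60-job
example. A small sketch, using the closed-form completion times from
above:
 
// Two candidate fairness metrics for the 60-job thought experiment:
// mean completion time and standard deviation of completion time.
#include <cmath>
#include <cstdio>
#include <vector>

static void report(const char* name, const std::vector<double>& t) {
    double mean = 0;
    for (double x : t) mean += x;
    mean /= t.size();

    double var = 0;
    for (double x : t) var += (x - mean) * (x - mean);
    var /= t.size();

    std::printf("%s: mean %.1f min, std-dev %.2f min\n",
                name, mean, std::sqrt(var));
}

int main() {
    std::vector<double> fcfs, rr;
    for (int i = 1; i <= 60; ++i) {
        fcfs.push_back(i);              // i-th job done after i minutes
        rr.push_back(59.0 + i / 60.0);  // its last slice lands in minute 60
    }
    report("Method 1 (FCFS)       ", fcfs);
    report("Method 2 (round robin)", rr);
}
 
The round-robin method wins only on the spread (a std-dev of about 0.3
minutes versus about 17.3), while its mean is nearly double; which of
those numbers you call 'fairness' is exactly the value judgment above.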
 
My key point is that the first step is to define what we mean by
'fair'; only then can we decide whether a given strategy meets that
definition (and maybe how well).
 
Thus, if you want to call something 'unfair', you need to define what
you mean by fair. It is quite possible to build an RW-lock that
guarantees that, as long as no locker fails to release its lock within
a reasonable time, every request WILL eventually be granted. It just
requires that at some point after a write request gets queued, the
system stops granting new read requests (so the writer eventually gets
in once all the current readers finish), and that a somewhat fair
method (like oldest-first) is used for granting requests when the lock
is free. Yes, perhaps the simplest RW-lock will not be able to do this,
but then the issue isn't with RW-locks in general, only with that style
of implementation. If you are using some other definition of 'fair',
you need to state it.
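 
A minimal sketch of that policy, using a plain mutex and condition
variables (illustrative only: it blocks new readers as soon as a writer
is queued, but it does not try to enforce strict oldest-first ordering
among the waiters):
 
#include <condition_variable>
#include <mutex>

// Once a writer is waiting, no new readers are admitted, so the writer
// gets in as soon as the readers that already hold the lock drain out.
class fair_rw_lock {
    std::mutex m_mtx;
    std::condition_variable m_readers_cv;
    std::condition_variable m_writer_cv;
    int  m_active_readers  = 0;
    int  m_waiting_writers = 0;
    bool m_writer_active   = false;

public:
    void lock_shared() {
        std::unique_lock<std::mutex> lk(m_mtx);
        // New readers wait while a writer is active *or* queued.
        m_readers_cv.wait(lk, [&] {
            return !m_writer_active && m_waiting_writers == 0;
        });
        ++m_active_readers;
    }

    void unlock_shared() {
        std::unique_lock<std::mutex> lk(m_mtx);
        if (--m_active_readers == 0 && m_waiting_writers > 0)
            m_writer_cv.notify_one();   // last reader hands off to a writer
    }

    void lock() {
        std::unique_lock<std::mutex> lk(m_mtx);
        ++m_waiting_writers;
        m_writer_cv.wait(lk, [&] {
            return !m_writer_active && m_active_readers == 0;
        });
        --m_waiting_writers;
        m_writer_active = true;
    }

    void unlock() {
        std::unique_lock<std::mutex> lk(m_mtx);
        m_writer_active = false;
        if (m_waiting_writers > 0)
            m_writer_cv.notify_one();   // prefer the queued writer(s)
        else
            m_readers_cv.notify_all();  // otherwise release the reader batch
    }
};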
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 02:35PM -0800

On 2/21/2021 12:58 PM, Richard Damon wrote:
> issue isn't with RW-locks in general, only with that style of
> implementation. If you are using some other definition of 'fair', you
> need to state it.
 
I agree. Fwiw, my rwmutex does not allow readers to starve out writers,
and vice versa. So, if a writer requests the lock while there is a shit
load of concurrent readers, it will wait until all of the readers in the
current "batch" have completed and will automatically own the lock for
write access after that. If a batch of readers comes in while the writer
is doing its thing, they will all be released at once and automatically
own read access once the writer is finished. This way, readers cannot
starve out writers and vice versa. Now, if one does not want that type
of alternating, basically a classic bakery-algorithm-like behavior, then
do NOT use my rwmutex.
 
https://vorbrodt.blog/2019/02/14/read-write-mutex
 
I like how he gave me proper credit. Nice. :^)
 
Actually, my rwmutex is pretty short and sweet with no loops.
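 
For reference, usage of a shared_mutex-style interface looks like the
sketch below. The rw_mutex alias is just a placeholder (std::shared_mutex
is used here so the example builds standalone); whether a pending writer
actually holds back later readers into alternating batches depends on
the particular implementation behind the alias.
 
#include <shared_mutex>
#include <thread>
#include <vector>

using rw_mutex = std::shared_mutex;   // stand-in for whichever rwmutex you use

static rw_mutex         g_lock;
static std::vector<int> g_data;

void reader() {
    std::shared_lock<rw_mutex> guard(g_lock);   // shared (read) access
    if (!g_data.empty() && g_data.back() == 42) { /* read-only work */ }
}

void writer(int value) {
    std::unique_lock<rw_mutex> guard(g_lock);   // exclusive (write) access
    g_data.push_back(value);                    // mutate under the lock
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 8; ++i) threads.emplace_back(reader);
    threads.emplace_back(writer, 42);
    for (int i = 0; i < 8; ++i) threads.emplace_back(reader);
    for (std::thread& t : threads) t.join();
}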
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 09:47PM -0800

On 2/21/2021 10:58 AM, Bonita Montero wrote:
>> it is clearly worse.
 
> You can't compare that to the issue being discussed, because there is
> no opportunity for one task to capture all of the CPU time.
 
Why not?
Bonita Montero <Bonita.Montero@gmail.com>: Feb 22 06:51AM +0100

>> You can't compare that to the issue being discussed, because there is
>> no opportunity for one task to capture all of the CPU time.
 
> Why not?
 
Not how he defined it.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 22 06:52AM +0100

> The key point isn't which resource was being managed, but how we
> really define 'fair'. ...
 
But with an example that doesn't fit with the discussion.
Chris does know what I mean, you don't.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 22 06:54AM +0100

> I agree. Fwiw, my rwmutex does not allow readers to starve out writers,
> and vice versa. ...
 
Building such an RW-mutex is impossible. You can't prevent a writer or
reader from holding the mutex for as long as it wants.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 10:00PM -0800

On 2/21/2021 9:54 PM, Bonita Montero wrote:
>> and vice versa. ...
 
> Building such an RW-mutex is impossible. You can't prevent a writer or
> reader from holding the mutex for as long as it wants.
 
True, but you have again snipped out the relevant issue of classic
starvation I posted, where frequent heavy read activity can starve out
writers, and vice versa. You are familiar with reader/writer priority
on rwmutexes, right? My impl alternates them in batches.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 10:01PM -0800

On 2/21/2021 9:51 PM, Bonita Montero wrote:
>>> opportunity for one task to capture all of the CPU time.
 
>> Why not?
 
> Not how he defined it.
 
Well, let's say five threads have read access... There is nothing
stopping them from using all of the CPUs!
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 21 10:10PM -0800

On 2/21/2021 9:52 PM, Bonita Montero wrote:
>> really define 'fair'. ...
 
> But with an example that doesn't fit with the discussion.
> Chris does know what I mean, you don't.
 
I do know what you mean; it's just not the classic way of thinking
about starvation. You are 100% correct that a batch of readers, or a
single writer, can hold the lock for hours. It can happen; it's not
advised, but it still can occur.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 22 07:40AM +0100

> starvation I posted, where frequent heavy read activity can starve
> out writers, and vice versa. You are familiar with reader/writer
> priority on rwmutexes, right? My impl alternates them in batches.
 
That doesn't make it fair.
Bonita Montero <Bonita.Montero@gmail.com>: Feb 22 09:20AM +0100

> You are familiar with reader/writer priority
> on rwmutexes, right? My impl alternates them in batches.
 
That doesn't make it fair.
