Wednesday, November 15, 2023

Digest for comp.lang.c++@googlegroups.com - 15 updates in 1 topic

Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 14 08:26PM -0500

Kaz Kylheku wrote:
>> stuffing the unblocked threads to the mutex waiting list in accordance
>> with some scheduling policy.
 
> Have you noticed how no mutex appears in the pthread_cond_signal API?
 
Of course, but so what? Nothing prevents the implementation from listing
the temporarily released mutexes of the waiting pthread_cond_wait
callers in the condvar, where pthread_cond_signal can access them.
 
Or, possibly even better, keeping a pointer to them with the threads
waiting for the condvar (there can be at most one such temporarily
released mutex per waiting thread). And the threads waiting for the
condvar are accessible to pthread_cond_signal according to the spec.
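The bookkeeping is cheap to sketch. Below is a hypothetical wait record of the kind described; the names and the FIFO "policy" are invented for illustration, and real libpthread internals differ:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical: one record per thread blocked in pthread_cond_wait,
 * linked into the condvar's waiter list.  A thread waits on at most
 * one condvar at a time and releases at most one mutex doing so, so a
 * single mutex pointer per waiter suffices. */
struct cond_waiter {
    struct cond_waiter *next;   /* next waiter on this condvar */
    pthread_t           thread; /* the blocked thread */
    pthread_mutex_t    *mutex;  /* the mutex it released on entry */
};

/* The signal path can pick a waiter and requeue it onto
 * waiter->mutex's wait list instead of waking it.  Here the
 * "policy" is simply FIFO: take the list head. */
static struct cond_waiter *pick_waiter(struct cond_waiter **head)
{
    struct cond_waiter *w = *head;
    if (w != NULL)
        *head = w->next;
    return w;
}
```

A priority-aware implementation would scan the list for the highest-priority waiter instead of taking the head.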
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 14 08:35PM -0500

Scott Lurndal wrote:
>> the event.
 
> Clearly the application should have used a queue per priority in
> this case.
This has been an alternative for a while, but traditionally RT is built
on threads of different priorities as opposed to resources of
different priorities.
 
There are reasons for this, too: one I remember is that the ordering
becomes better defined. (Just as an example, imagine that the events
posted on the queue are work permits of some kind. The policy is that
while a higher-prio thread is waiting for a work permit, no lower-prio
thread shall get a permit.)
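That permit policy can be expressed directly with one mutex and one condvar. The sketch below is illustrative only (the fixed priority range and the helper names are made up); the wait predicate blocks a thread while a permit is unavailable or any higher-priority thread is still waiting:

```c
#include <pthread.h>

#define PRIO_LEVELS 8   /* assumed small fixed priority range */

struct permit_pool {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int permits;                 /* permits currently available */
    int waiting[PRIO_LEVELS];    /* waiter count per priority */
};

void pool_init(struct permit_pool *p, int permits)
{
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->cond, NULL);
    p->permits = permits;
    for (int q = 0; q < PRIO_LEVELS; q++)
        p->waiting[q] = 0;
}

static int top_waiting(const struct permit_pool *p)
{
    for (int q = PRIO_LEVELS - 1; q >= 0; q--)
        if (p->waiting[q] > 0)
            return q;
    return -1;
}

void acquire_permit(struct permit_pool *p, int my_prio)
{
    pthread_mutex_lock(&p->lock);
    p->waiting[my_prio]++;
    /* Block while no permit is free OR a higher-prio thread waits. */
    while (p->permits == 0 || my_prio < top_waiting(p))
        pthread_cond_wait(&p->cond, &p->lock);
    p->waiting[my_prio]--;
    p->permits--;
    pthread_cond_broadcast(&p->cond); /* let others re-evaluate */
    pthread_mutex_unlock(&p->lock);
}

void release_permit(struct permit_pool *p)
{
    pthread_mutex_lock(&p->lock);
    p->permits++;
    pthread_cond_broadcast(&p->cond);
    pthread_mutex_unlock(&p->lock);
}
```

The broadcast-and-recheck style is portable but wakes every waiter; the point of the discussion is that a smarter implementation could hand the permit to the highest-priority waiter without the thundering herd.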
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 14 08:42PM -0500

Kaz Kylheku wrote:
 
> It's possible that only one thread is unblocked by a
> pthread_cond_signal. It takes at least two parties to contend for
> something.
 
My point was that the "unblocked" thread could be the thread that "won"
the contention. "As-if" there meant that the threads didn't really need
to all be woken up / unblocked and start contending; rather, the
contention could be resolved within the signal call, according to the
scheduling policy (the same one that would be applied if they were
actually running and contending for the mutex, without any waiting on
the condvar).
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 14 08:47PM -0500

Chris M. Thomasson wrote:
 
>> https://youtu.be/7YvAYIJSSZY
 
> A condvar that cannot signal/broadcast from outside of a held mutex is
> broken by default.
It can; it's just that the desired scheduling policy cannot be
*effectively* applied if a program signals outside of that mutex.
 
E.g. in some scenario a higher-prio thread could stay indefinitely
starved by the lower-prio thread, regardless of the thread library
implementation.
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Nov 14 09:10PM -0500

Bonita Montero wrote:
> for the mutex's binary semaphore and for the condvar's counting
> semaphore in one step with WaitForMultipleObjects. Unfortunately
> there's nothing under Linux like that.
 
Not true. A Linux program can wait for any number of eventfd
semaphores, each usable as either a binary or a counting semaphore.
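A short Linux-specific illustration (EFD_SEMAPHORE makes each successful read decrement the counter by exactly one, so the eventfd behaves as a counting semaphore; with counts of 0/1 it degenerates to a binary one):

```c
#include <sys/eventfd.h>
#include <poll.h>
#include <unistd.h>
#include <stdint.h>

/* Wait on two eventfd semaphores at once via poll().  Returns 1 if
 * exactly one of them was ready and a semaphore unit was consumed. */
int wait_any_demo(void)
{
    int a = eventfd(0, EFD_SEMAPHORE | EFD_NONBLOCK); /* count 0 */
    int b = eventfd(2, EFD_SEMAPHORE | EFD_NONBLOCK); /* count 2 */

    struct pollfd fds[2] = {
        { .fd = a, .events = POLLIN },
        { .fd = b, .events = POLLIN },
    };

    /* poll() returns as soon as ANY counter is nonzero; only b is. */
    int nready = poll(fds, 2, 0);

    uint64_t val = 0;
    if (fds[1].revents & POLLIN)
        read(b, &val, sizeof val);  /* EFD_SEMAPHORE: reads exactly 1 */

    close(a);
    close(b);
    return nready == 1 && val == 1;
}
```

Note this is wait-any, not wait-all — the distinction raised in the follow-up.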
Bonita Montero <Bonita.Montero@gmail.com>: Nov 15 05:54AM +0100

Am 14.11.2023 um 21:27 schrieb Chris M. Thomasson:
 
>>> So, you are finished with any future on the fly corrections, right?
 
>> I'm currently not missing anything with the code.
 
> Okay, can you please make a new post that contains the finished code?
 
I already posted the finished code in two parts with this little
correction (missing "!"). The xhandle.h is still the same.
Bonita Montero <Bonita.Montero@gmail.com>: Nov 15 05:57AM +0100

Am 14.11.2023 um 21:25 schrieb Chris M. Thomasson:
 
> Well, we have cmpxchg8b on 32-bit Intel...
 
That's what I also thought, i.e. you can make a 64 bit store with that.
But that's for sure much slower than the trick with the x87-FPU I've
shown.
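For reference, the cmpxchg8b approach looks like this. It is a sketch, not a benchmark: on i386, a CAS with expected == desired == 0 returns the old 64-bit contents atomically in EDX:EAX, at the cost of a locked RMW cycle even for a pure load (which is why the FILD/FISTP trick can win); on other targets the compiler builtin stands in:

```c
#include <stdint.h>

#if defined(__i386__)
/* 64-bit single-copy-atomic load via LOCK CMPXCHG8B.  Whether or not
 * the compare hits (expected == desired == 0, so a hit rewrites the
 * same zero), EDX:EAX receives the old value in one atomic step. */
static inline uint64_t atomic_load64(volatile uint64_t *p)
{
    uint64_t old = 0;
    __asm__ __volatile__("lock; cmpxchg8b %1"
                         : "+A"(old), "+m"(*p)
                         : "b"(0u), "c"(0u)
                         : "cc");
    return old;
}
#else
/* Elsewhere, let the compiler pick the atomic sequence. */
static inline uint64_t atomic_load64(volatile uint64_t *p)
{
    return __atomic_load_n(p, __ATOMIC_SEQ_CST);
}
#endif
```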
Bonita Montero <Bonita.Montero@gmail.com>: Nov 15 06:07AM +0100

Am 15.11.2023 um 03:10 schrieb Pavel:
>> there's nothing under Linux like that.
 
> Not true. A Linux program can wait for any number of eventfd
> semaphores, each usable as either a binary or a counting semaphore.
 
An eventfd can be a semaphore, but only a single semaphore, not a set
like SysV semaphores. You can wait for multiple semaphores with poll
or select, but a change on any one of them releases the waiting thread,
whereas with Win32 you can wait for all conditions to be met at once.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:07PM -0800

On 11/14/2023 2:50 PM, Scott Lurndal wrote:
 
> As BM pointed out, once SSE et al showed up, there was a way to
> do some 64-bit loads. Not sure they're architecturally
> required to be single-copy atomic, however.
 
Right! I am not sure about that either.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:08PM -0800

On 11/14/2023 8:57 PM, Bonita Montero wrote:
 
> That's what I also thought, i.e. you can make a 64 bit store with that.
> But that's for sure much slower than the trick with the x87-FPU I've
> shown.
 
Are you sure it's 100% atomic for use in multi-threaded sync algorithms?
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:11PM -0800

On 11/14/2023 5:26 PM, Pavel wrote:
> waiting  for the condvar (there can only be at most one such temporarily
> released mutex per the waiting thread). And the threads waiting for the
> condvar are accessible for the pthread_cond_signal according to the spec.
 
Just as long as the implementation works wrt calling
pthread_cond_signal/broadcast outside of the mutex. If not, it is broken.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:18PM -0800

On 11/14/2023 5:47 PM, Pavel wrote:
 
> E.g. in some scenario a higher-prio thread could stay indefinitely
> starved by the lower-prio thread, regardless of the thread library
> implementation.
 
Afaict, it is an implementation detail. Say, SCHED_FIFO, the impl shall
strive to honor this. If it has to use a bakery algorithm to do it, so
be it.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:20PM -0800

On 11/15/2023 12:18 PM, Chris M. Thomasson wrote:
 
> Afaict, it is an implementation detail. Say, SCHED_FIFO, the impl shall
> strive to honor this. If it has to use a bakery algorithm to do it, so
> be it.
 
Also, you as the programmer can choose to signal within a locked region.
No problem.
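A minimal complete example of that choice, using nothing beyond standard pthreads: the producer publishes and signals while still holding the mutex, which is what lets a wait-morphing implementation move the consumer straight onto the mutex's wait queue instead of waking it only to contend.

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int ready = 0, value = 0;

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    while (!ready)                 /* loop guards spurious wakeups */
        pthread_cond_wait(&c, &m);
    long got = value;
    pthread_mutex_unlock(&m);
    return (void *)got;
}

/* Spawns the consumer, publishes 42 under the lock, signals while
 * still holding the mutex, then joins and returns what was consumed. */
long run_demo(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);

    pthread_mutex_lock(&m);
    value = 42;
    ready = 1;
    pthread_cond_signal(&c);       /* signalled within the locked region */
    pthread_mutex_unlock(&m);

    void *res;
    pthread_join(t, &res);
    return (long)res;              /* 42 */
}
```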
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Nov 15 12:21PM -0800

On 11/14/2023 8:54 PM, Bonita Montero wrote:
 
>> Okay, can you please make a new post that contains the finished code?
 
> I already posted the finished code in two parts with this little
> correction (missing "!"). The xhandle.h is still the same.
 
Correction? Just post the 100% finished code in a brand new thread.
Kaz Kylheku <864-117-4973@kylheku.com>: Nov 15 09:26PM


> E.g. in some scenario a higher-prio thread could stay indefinitely
> starved by the lower-prio thread, regardless of the thread library
> implementation.
 
I don't see what difference it can actually make, other than in
situations when a thread receives a priority boost while holding a
mutex.
 
Only one of these is true: (1) either the pthread_cond_signal and
pthread_cond_broadcast functions transfer threads from waiting on the
condition to waiting on the mutex, or (2) they don't: threads are woken,
and have to execute an operation equivalent to pthread_mutex_lock.
 
(Though there is no mutex argument in the API, the thread is waiting
with respect to a particular mutex. Each thread waiting on the same
condition could nominate a different mutex. So for each thread waiting
on a condition, we know which mutex it released in doing so and can
transfer it to wait on the mutex.)
 
If the pthread_cond_signal function transfers a thread from the
condition to the mutex according to (1), then everything is cool with
regard to scheduling.
 
Whether we have this:
 
pthread_mutex_unlock(&m);
pthread_cond_signal(&c);
 
...
 
// re-acquire
pthread_mutex_lock(&m);
 
or this:
 
pthread_cond_signal(&c);
pthread_mutex_unlock(&m);
 
...
 
// re-acquire
pthread_mutex_lock(&m);
 
the most important thing is that the waiting threads are already
transferred to waiting on the mutex before the signaling thread
tries to re-acquire it.
 
I.e. the woken threads always reach the mutex wait first, regardless of
which of these two orders of operations the signaling thread uses.
 
On the other hand, if the pthread_cond_signal operation just wakes up
the threads, such that they themselves have to call the equivalent of
pthread_mutex_lock to reacquire the mutex, we do not gain any special
guarantees in one order or the other. Under either order of the
statements, the signaling thread can snatch the mutex away from the
signaled threads, when reaching the pthread_mutex_lock call,
even if it is a low priority thread and the others are high.
 
Now with real-time priorities and mutexes that prevent priority
inversion, we could have the following situation:
 
While a low-priority thread holds a mutex, it is temporarily boosted to
the highest priority from among the set of threads waiting for that
mutex.
 
There we have a difference between signaling inside or outside, the
reason being that when the thread leaves the mutex to signal outside, it
loses the temporary priority boost. That could lead to an undue delay in
signaling the condition.
 
The statement in the POSIX standard about predictable scheduling
behavior is like recommending sleep calls for obtaining more
predictable behavior in a program riddled with race conditions.
 
If I'm an implementor reading that statement, I cannot infer any
concrete requirements about what it is that I'm supposed to do in
pthread_cond_signal to bring about more predictable scheduling.
 
Specifications should give specific requirements, not vague opinions.
 
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @Kazinator@mstdn.ca
NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.