http://groups.google.com/group/comp.programming.threads?hl=en
comp.programming.threads@googlegroups.com
Today's topics:
* strong atomic reference counting... - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/1b35d9cff58efb83?hl=en
* ftok - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/4080bc1e3e027ae0?hl=en
* amortizing read-access in read/write mutex... - 9 messages, 2 authors
http://groups.google.com/group/comp.programming.threads/t/7b92af0a72a360ba?hl=en
==============================================================================
TOPIC: strong atomic reference counting...
http://groups.google.com/group/comp.programming.threads/t/1b35d9cff58efb83?hl=en
==============================================================================
== 1 of 1 ==
Date: Sat, Feb 6 2010 8:11 am
From: "Chris M. Thomasson"
Ummm, my bastard news server has been fuc%ing up lately!!! GRRRRR.
Sorry about that.
==============================================================================
TOPIC: ftok
http://groups.google.com/group/comp.programming.threads/t/4080bc1e3e027ae0?hl=en
==============================================================================
== 1 of 1 ==
Date: Sun, Feb 7 2010 12:25 am
From: Karthik Balaguru
On Feb 7, 1:23 pm, Karthik Balaguru <karthikbalagur...@gmail.com>
wrote:
> On Feb 2, 2:30 am, Torsten Robitzki <MyFirstn...@robitzki.de> wrote:
>
> > karthikbalaguru wrote:
>
> > > To be clear -
> > > what does each letter 'f', 't', 'o', 'k'
> > > refer to? Any ideas?
>
> > I would guess "File TO Key".
>
> Okay, I will go with "File TO Key".
>
> It is strange that there is no
> mention of an expansion for
> ftok w.r.t. IPC anywhere on the internet!
>
> I only came across things like
> 'Functional Test OK' and so on,
> but nothing for ftok w.r.t. IPC.
>
Thx,
Karthik Balaguru
==============================================================================
TOPIC: amortizing read-access in read/write mutex...
http://groups.google.com/group/comp.programming.threads/t/7b92af0a72a360ba?hl=en
==============================================================================
== 1 of 9 ==
Date: Sun, Feb 7 2010 3:43 am
From: David Schwartz
On Feb 6, 7:32 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> What about a pattern that knows it will be hit with a constant stream of
> read requests and may experience periods of rather "rapid" writer activity
> that happens to occur on a non-deterministic basis? IMVHO, a read/write
> mutex should "allow" readers in during the bursts of writer activity.
I cannot imagine why. I've used reader/writer locks for an awfully
long time and never wanted anything but strict writer priority.
However, I have no problem with any implementation that guarantees
writers aren't starved in the face of large numbers of readers. Phase
alternation is fine with me, though it seems like needless extra
effort.
DS
== 2 of 9 ==
Date: Sun, Feb 7 2010 4:50 am
From: "Chris M. Thomasson"
"David Schwartz" <davids@webmaster.com> wrote in message
news:99680d11-fa70-4a91-b46a-5dfb2b31f133@q2g2000pre.googlegroups.com...
On Feb 6, 7:32 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > What about a pattern that knows it will be hit with a constant stream of
> > read requests and may experience periods of rather "rapid" writer activity
> > that happens to occur on a non-deterministic basis? IMVHO, a read/write
> > mutex should "allow" readers in during the bursts of writer activity.
> I cannot imagine why.
Humm; you got me there. :^)
Well, I can only think of a __very_contrived__ example: what about a system
that has very frequent read activity, and fairly frequent writes as well?
Say you have an average of 100,000 reads and around 6,000-15,000 writes
per second. Now some programmer uses a single rw-mutex to protect the
whole thing! Should his program livelock wrt readers, or actually be
allowed to progress?
> I've used reader/writer locks for an awfully
> long time and never wanted anything but strict writer priority.
> However, I have no problem with any implementation that guarantees
> writers aren't starved in the face of large numbers of readers. Phase
> alternation is fine with me, though it seems like needless extra
> effort.
Actually, the implementation I came up with is fairly simple. Have you taken
a look at it? The funny thing is that strict fairness was not a goal back
when I was designing the rw-mutex; it simply fell out of the synchronization
algorithm I settled on, and I did not consider it to be a flaw. I like the
algorithm because its implementation contains no loops, which is pretty
cool, IMHO of course!
;^)
== 3 of 9 ==
Date: Sun, Feb 7 2010 5:38 am
From: David Schwartz
On Feb 7, 4:50 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> Well, I can only think of a __very_contrived__ example: what about a system
> that has very frequent read activity, and fairly frequent writes as well?
> Say you have an average of 100,000 reads and around 6,000-15,000 writes
> per second. Now some programmer uses a single rw-mutex to protect the
> whole thing! Should his program livelock wrt readers, or actually be
> allowed to progress?
The program will progress as each writer will make progress. If the
writers were not making forward progress, why do they exist? At least
in POSIX-land, the only guarantee any of the typical synchronization
primitives make is that your process as a whole will make forward
progress.
I would argue that if strict writer priority causes you a problem,
then a reader/writer lock is the wrong primitive. i would also argue
that a reader/writer lock is the wrong primitive if you expect the
"read lock" operation to block more than a very small percentage of
the time. The fast paths are the read lock/unlock operation with no
writers.
Perhaps you can make a case for some special-purpose read/write lock
that handles other cases, but this is the typical one. Blocking should
be rare or you're doing it wrong.
DS
== 4 of 9 ==
Date: Sun, Feb 7 2010 6:35 am
From: "Chris M. Thomasson"
"David Schwartz" <davids@webmaster.com> wrote in message
news:305c4fdf-7f5d-4e92-b573-f41e10e04cc9@b9g2000pri.googlegroups.com...
On Feb 7, 4:50 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > Well, I can only think of a __very_contrived__ example: what about a
> > system that has very frequent read activity, and fairly frequent writes
> > as well? Say you have an average of 100,000 reads and around
> > 6,000-15,000 writes per second. Now some programmer uses a single
> > rw-mutex to protect the whole thing! Should his program livelock wrt
> > readers, or actually be allowed to progress?
My answers will be within the context of using a read/write mutex with
strict writer priority in the really contrived example I gave above.
> The program will progress as each writer will make progress. If the
> writers were not making forward progress, why do they exist?
The writers can definitely make forward progress. However, a reader can
stall and make no progress if the "writer count" inside the rw-mutex
implementation does not drop to zero, and stay zero "long enough", for any
readers to actually acquire read access. I think you can get starvation if
readers will not even try to enter the critical section while the write
count is nonzero.
> At least
> in POSIX-land, the only guarantee any of the typical synchronization
> primitives make is that your process as a whole will make forward
> progress.
I think that we are both making explicit scheduling decisions at a "higher
level" than the OS. For instance, my rw-scheduler is 100% fair in that it
strictly alternates between reads and writes, while your rw-scheduler gives
100% writer priority because it blocks readers if there is an active writer
or any pending writers. For example, if 4 threads are waiting for write
access, all new readers will be blocked until all 4 of those threads are
finished. If new writers come in and wait before those 4 threads are
finished, then the readers will need to wait for them as well. And if a
waiter leaves and comes back before the remaining 3 threads finish, the
readers will have to wait for it all over again.
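To make the contrast concrete, here is a minimal condition-variable sketch of
that writer-priority policy (a hypothetical illustration only, not your
implementation or mine; the class name wrprio_rwlock is made up). Readers are
gated whenever a writer is active or pending, which is exactly the window in
which a steady stream of writers can starve them:
____________________________________________________________
#include <mutex>
#include <condition_variable>

class wrprio_rwlock {
    std::mutex m;
    std::condition_variable cv;
    int active_readers = 0;    // readers currently inside
    int waiting_writers = 0;   // writers queued for write access
    bool writer_active = false;

public:
    void rdlock() {
        std::unique_lock<std::mutex> lk(m);
        // readers yield to the active writer *and* to any pending writers;
        // under a steady writer stream this predicate may stay false
        cv.wait(lk, [&] { return !writer_active && waiting_writers == 0; });
        ++active_readers;
    }
    void rdunlock() {
        std::lock_guard<std::mutex> lk(m);
        if (--active_readers == 0)
            cv.notify_all();   // last reader out; let a writer in
    }
    void wrlock() {
        std::unique_lock<std::mutex> lk(m);
        ++waiting_writers;
        cv.wait(lk, [&] { return !writer_active && active_readers == 0; });
        --waiting_writers;
        writer_active = true;
    }
    void wrunlock() {
        std::lock_guard<std::mutex> lk(m);
        writer_active = false;
        cv.notify_all();       // wake queued writers and gated readers
    }
};
____________________________________________________________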
> I would argue that if strict writer priority causes you a problem,
> then a reader/writer lock is the wrong primitive. i would also argue
> that a reader/writer lock is the wrong primitive if you expect the
> "read lock" operation to block more than a very small percentage of
> the time. The fast paths are the read lock/unlock operation with no
> writers.
>
> Perhaps you can make a case for some special-purpose read/write lock
> that handles other cases, but this is the typical one. Blocking should
> be rare or you're doing it wrong.
Would you say that the load generated by the contrived case is way too much
pressure for a general purpose read/write mutex to handle? IMVHO, I want my
rw-mutex to perform great in "sane" usage cases. However, I also want it to
guarantee forward progress for both reads and writes if a programmer happens
to use it to solve a high contention scenario with a high read-to-write
ratio.
== 5 of 9 ==
Date: Sun, Feb 7 2010 7:10 am
From: David Schwartz
On Feb 7, 6:35 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> Would you say that the load generated by the contrived case is way too much
> pressure for a general purpose read/write mutex to handle?
No, I'd say it's fine. And I'd also say, in that context, that the
writers making forward progress means the process is making forward
progress.
> IMVHO, I want my
> rw-mutex to perform great in "sane" usage cases. However, I also want it to
> guarantee forward progress for both reads and writes if a programmer happens
> to use it to solve a high contention scenario with a high read-to-write
> ratio.
I don't think that's worth slowing the fast paths down. If you can do
it without slowing the fast paths, then go for it. Otherwise, I think
it's a questionable idea. Reader/writer mutexes are appropriate in
cases where reads significantly outnumber writes, overall write demand
is low, and blocking of any thread is the exception rather than the
rule.
DS
== 6 of 9 ==
Date: Sun, Feb 7 2010 7:37 am
From: "Chris M. Thomasson"
"David Schwartz" <davids@webmaster.com> wrote in message
news:2c95e72e-2b4c-49db-a623-3111a1bb8be3@o16g2000prh.googlegroups.com...
On Feb 7, 6:35 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > Would you say that the load generated by the contrived case is way too
> > much pressure for a general purpose read/write mutex to handle?
> No, I'd say it's fine. And I'd also say, in that context, that the
> writers making forward progress means the process is making forward
> progress.
IMHO, I would have to say that the writer threads within the process are all
making forward progress. Therefore, I say that a "portion" of the process is
making forward progress.
> > IMVHO, I want my rw-mutex to perform great in "sane" usage cases.
> > However, I also want it to guarantee forward progress for both reads and
> > writes if a programmer happens to use it to solve a high contention
> > scenario with a high read-to-write ratio.
> I don't think that's worth slowing the fast paths down. If you can do
> it without slowing the fast paths, then go for it.
You should check out the implementation I did:
http://software.intel.com/en-us/forums/showthread.php?t=65822
(read all...)
Here is what the read lock/unlock functions look like:
____________________________________________________________
void rdlock() throw() {
    assert(m_count <= LONG_MAX);
    // a writer subtracts LONG_MAX from m_count, so a negative result
    // means a writer holds (or is acquiring) the lock
    LONG count = InterlockedDecrement(&m_count);
    if (count < 0) {
        // block on the reader wait set until the writer releases us
        if (WaitForSingleObject(m_rdwset, INFINITE) != WAIT_OBJECT_0) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
}

void rdunlock() throw() {
    assert(m_count < LONG_MAX);
    LONG count = InterlockedIncrement(&m_count);
    if (count < 1) {
        // a writer is waiting; if this was the last reader it is
        // waiting for, wake it up
        if (! InterlockedDecrement(&m_rdwake)) {
            if (! SetEvent(m_wrwset)) {
                RWMUTEX_WIN_UNEXPECTED();
            }
        }
    }
}
____________________________________________________________
And here is the writer lock/unlock:
____________________________________________________________
void wrunlock() throw() {
    assert(m_count < 1);
    // restore m_count; the (negative) previous value says how many
    // readers arrived while we held the lock and are blocked
    LONG count = InterlockedExchangeAdd(&m_count, LONG_MAX);
    if (count < 0) {
        // release every blocked reader in one shot
        if (! ReleaseSemaphore(m_rdwset, -count, 0)) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
    // hand the writer mutex to the next queued writer, if any
    if (InterlockedIncrement(&m_mtx) < 1) {
        if (! ReleaseSemaphore(m_mtxwset, 1, NULL)) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
}

void wrtord() throw() {
    assert(m_count < 1);
    // downgrade writer -> reader: add back LONG_MAX - 1 so the caller
    // keeps one read reference, then release blocked readers and the
    // writer mutex exactly as wrunlock() does
    LONG count = InterlockedExchangeAdd(&m_count, LONG_MAX - 1);
    if (count < 0) {
        if (! ReleaseSemaphore(m_rdwset, -count, 0)) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
    if (InterlockedIncrement(&m_mtx) < 1) {
        if (! ReleaseSemaphore(m_mtxwset, 1, NULL)) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
}
____________________________________________________________
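For context, the members these operations touch can be inferred from the code
itself. Here is a plausible declaration sketch (an assumption on my part; the
actual class name, declarations and initial values are in the Intel thread
linked above, so treat this only as a reading aid):
____________________________________________________________
#include <windows.h>
#include <climits>

class rwmutex_win {
    LONG m_count;      // LONG_MAX minus the number of active readers;
                       // a writer subtracts LONG_MAX, driving it negative
    LONG m_rdwake;     // readers the pending writer must still wait for
    LONG m_mtx;        // writer-vs-writer mutual exclusion, starts at 1
    HANDLE m_rdwset;   // semaphore readers block on while a writer is in
    HANDLE m_wrwset;   // auto-reset event the last leaving reader signals
    HANDLE m_mtxwset;  // semaphore queued writers block on

public:
    rwmutex_win() throw()
        : m_count(LONG_MAX), m_rdwake(0), m_mtx(1),
          m_rdwset(CreateSemaphore(NULL, 0, LONG_MAX, NULL)),
          m_wrwset(CreateEvent(NULL, FALSE, FALSE, NULL)),
          m_mtxwset(CreateSemaphore(NULL, 0, LONG_MAX, NULL)) {}

    ~rwmutex_win() throw() {
        CloseHandle(m_mtxwset);
        CloseHandle(m_wrwset);
        CloseHandle(m_rdwset);
    }

    // rdlock(), rdunlock(), wrunlock() and wrtord() as shown above,
    // plus the wrlock() posted later in this thread.
};
____________________________________________________________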
> Otherwise, I think it's a questionable idea.
I can enforce strict fairness without slowing any fast path. I also avoid
using any loops in the algorithm. IMVHO, that's pretty nice.
> Reader/writer mutexes are appropriate in
> cases where reads significantly outnumber writes, overall write demand
> is low, and blocking of any thread is the exception rather than the
> rule.
== 7 of 9 ==
Date: Sun, Feb 7 2010 8:41 am
From: David Schwartz
On Feb 7, 7:37 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> I can enforce strict fairness without slowing any fast path. I also avoid
> using any loops in the algorithm. IMVHO, that's pretty nice.
You have no loops because you've pushed them down a layer. For
example, your code calls WaitForSingleObject or sem_wait which contain
loops.
Your slow paths are slower than mine, but it doesn't look like you
particularly tried to optimize them. (Nor did I optimize mine.)
It's interesting how different our approaches are. I like the fact
that you managed to get some reader/writer fairness without slowing
the fast paths.
DS
== 8 of 9 ==
Date: Sun, Feb 7 2010 9:55 am
From: "Chris M. Thomasson"
"Chris M. Thomasson" <no@spam.invalid> wrote in message
news:G6Bbn.19306$3n2.12934@newsfe01.iad...
> "David Schwartz" <davids@webmaster.com> wrote in message
> news:2c95e72e-2b4c-49db-a623-3111a1bb8be3@o16g2000prh.googlegroups.com...
[...]
>> I don't think that's worth slowing the fast paths down. If you can do
>> it without slowing the fast paths, then go for it.
>
> You should check out the implementation I did:
>
>
> http://software.intel.com/en-us/forums/showthread.php?t=65822
> (read all...)
[...]
>
> And here is the writer lock/unlock:
[...]
// OOPS! That was the writer unlock and writer-to-reader downgrade
procedures! Here is the missing writer lock procedure:
____________________________________________________________
void wrlock() throw() {
    // serialize writers against each other first
    if (InterlockedDecrement(&m_mtx) < 0) {
        if (WaitForSingleObject(m_mtxwset, INFINITE) != WAIT_OBJECT_0) {
            RWMUTEX_WIN_UNEXPECTED();
        }
    }
    assert(m_count > -1);
    // drive m_count negative so arriving readers block; the previous
    // value tells us how many readers are still inside
    LONG count = InterlockedExchangeAdd(&m_count, -LONG_MAX);
    if (count < LONG_MAX) {
        // LONG_MAX - count readers remain; wait until the last of them
        // leaves (rdunlock() sets m_wrwset when m_rdwake drains to zero)
        if (InterlockedExchangeAdd(&m_rdwake,
                LONG_MAX - count) + LONG_MAX - count) {
            if (WaitForSingleObject(m_wrwset, INFINITE) != WAIT_OBJECT_0) {
                RWMUTEX_WIN_UNEXPECTED();
            }
        }
    }
}
____________________________________________________________
> ____________________________________________________________
> void wrunlock() throw() {
>     assert(m_count < 1);
>     LONG count = InterlockedExchangeAdd(&m_count, LONG_MAX);
>     if (count < 0) {
>         if (! ReleaseSemaphore(m_rdwset, -count, 0)) {
>             RWMUTEX_WIN_UNEXPECTED();
>         }
>     }
>     if (InterlockedIncrement(&m_mtx) < 1) {
>         if (! ReleaseSemaphore(m_mtxwset, 1, NULL)) {
>             RWMUTEX_WIN_UNEXPECTED();
>         }
>     }
> }
[...]
> ____________________________________________________________
[...]
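To make the counter bookkeeping concrete, here is a tiny single-threaded
trace of wrlock() meeting three active readers (an illustration only, not
part of the rw-mutex code; plain longs stand in for the interlocked
variables):
____________________________________________________________
#include <cassert>
#include <climits>

int main() {
    long m_count = LONG_MAX - 3L;  // 3 readers currently hold read access
    long m_rdwake = 0;

    // wrlock(): drive m_count negative and count the readers to wait for
    long count = m_count;          // what InterlockedExchangeAdd returns
    m_count -= LONG_MAX;           // m_count is now -3
    long readers_in = LONG_MAX - count;
    assert(readers_in == 3);
    m_rdwake += readers_in;        // nonzero, so the writer would block

    // each rdunlock() sees the non-positive count and decrements m_rdwake;
    // the reader that takes it to zero would SetEvent(m_wrwset)
    int wakeups = 0;
    for (int i = 0; i < 3; ++i) {
        long c = ++m_count;
        if (c < 1 && --m_rdwake == 0)
            ++wakeups;             // the writer is released exactly once
    }
    assert(m_rdwake == 0 && wakeups == 1);
    return 0;
}
____________________________________________________________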
== 9 of 9 ==
Date: Sun, Feb 7 2010 10:26 am
From: "Chris M. Thomasson"
"David Schwartz" <davids@webmaster.com> wrote in message
news:b3f8bb29-4889-45ab-986d-6cdd939d2f90@b9g2000pri.googlegroups.com...
On Feb 7, 7:37 am, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > I can enforce strict fairness without slowing any fast path. I also
> > avoid using any loops in the algorithm. IMVHO, that's pretty nice.
> You have no loops because you've pushed them down a layer. For
> example, your code calls WaitForSingleObject or sem_wait which contain
> loops.
Fair enough.
> Your slow paths are slower than mine, but it doesn't look like you
> particularly tried to optimize them. (Nor did I optimize mine.)
I just defer all of the actual blocking to native semaphores and rely on the
fact that the synchronization algorithm itself provides certain "guarantees"
when a thread returns from a semaphore wait. A thread that returns from a
native wait does not need to re-read any state because it already "knows"
its state.
Also, keep in mind that each wait operation will only be called a single
time per slow path because there is no looping. So that "can" be more
"efficient", because if you have to loop around you may end up calling a
native wait operation more than once.
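For comparison, a generic condition-variable slow path looks something like
the sketch below (an illustration only, not from either of our
implementations); the predicate re-check loop is what can send a thread back
into a native wait more than once:
____________________________________________________________
#include <mutex>
#include <condition_variable>

// Generic illustration: a condvar slow path re-checks its predicate in a
// loop, so a woken thread may go back to sleep if it lost a race.
template <class Pred>
void locked_wait(std::unique_lock<std::mutex>& lk,
                 std::condition_variable& cv, Pred ready) {
    while (!ready())   // may block and wake several times
        cv.wait(lk);
}
// In the semaphore-based slow paths above, the releasing thread only posts
// the semaphore/event once the waiter's state is fully decided, so the wait
// returns exactly once and no re-check loop is needed afterwards.
____________________________________________________________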
> It's interesting how different our approaches are.
Indeed! :^)
> I like the fact
> that you managed to get some reader/writer fairness without slowing
> the fast paths.
Yeah. The fairness is just sort of an inherent aspect of the algorithm.
==============================================================================