http://groups.google.com/group/comp.programming.threads?hl=en
comp.programming.threads@googlegroups.com
Today's topics:
* Would this work on even one platform? - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/de8a88ed7fa913d4?hl=en
* __asm__ cmpxchg8b/cmpxchg16b - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/791f2415da140c34?hl=en
==============================================================================
TOPIC: Would this work on even one platform?
http://groups.google.com/group/comp.programming.threads/t/de8a88ed7fa913d4?hl=en
==============================================================================
== 1 of 1 ==
Date: Wed, Nov 25 2009 11:25 am
From: "Chris M. Thomasson"
"Dmitriy Vyukov" <dvyukov@gmail.com> wrote in message
news:8cf1c9ff-fa83-4de1-8095-e27c64434e07@p19g2000vbq.googlegroups.com...
On Nov 22, 10:31 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > I modeled this scenario in Relacy:
> >
> > http://relacy.pastebin.com/f4b36dd98
> >
> > It cannot find a way to break it either.
> >
> > ;^)
> Note that one still has to use std::atomic for g_flag, even if the actual
> "synchronization" is provided by mutexes. And if one has std::atomic,
> then there is little sense in using mutexes for synchronization in
> this example.
> However, I think there is still benefit in modeling in Relacy something
> that is not exactly what you have in production. Because if Relacy finds
> some issue in a test, and you then manually verify that the same issue
> applies to the original production code, well, it's a plus. Some
> issues may be masked, though.
Agreed.
> > I just mixed up your variant with the following variant
> > (that I tried to use by myself):
[...]
>
> Yes. I was wondering how to model this in Relacy:
>
> http://groups.google.com/group/comp.programming.threads/msg/3f8ef89bf...
>
> Humm... Would Relacy treat the per-thread mutex actions like:
> _______________________________________________________________
[...]
> _______________________________________________________________
>
> as being equivalent to:
> _______________________________________________________________
[...]
> _______________________________________________________________
>
> ?
>
> I don't think it will. Even though it would in practice, asymmetric
> synchronization schemes aside for a moment of course...
> Relacy will treat them differently.
> Mutex lock/unlock is basically the same as acquire/release operation
> on a variable.
I think using mutexes this way would be undefined behavior with respect to C++0x.
> Stand-alone fences are more powerful in some sense,
> because they do not bind to a particular variable. And at the same
> time stand-alone fences (except seq_cst) are weaker, because they do
> not establish total order over operations (as store/RMW on a
> particular atomic).
I personally like stand-alone fences because IMHO they are more flexible. I
can put them exactly where I need them; e.g.:
________________________________________________________________
#include <atomic>
#include <cassert>

#define N 10
static int g_values[N] = { 0 };
static std::atomic<bool> g_flags[N] = {}; // all initially false
void thread_1()
{
    for (size_t i = 0; i < N; ++i)
    {
        g_values[i] = 666;
    }
    std::atomic_thread_fence(std::memory_order_release);
    for (size_t i = 0; i < N; ++i)
    {
        g_flags[i].store(true, std::memory_order_relaxed);
    }
}

void other_threads()
{
retry:
    for (size_t i = 0; i < N; ++i)
    {
        if (! g_flags[i].load(std::memory_order_relaxed))
        {
            spin(); // platform-specific backoff
            goto retry;
        }
    }
    std::atomic_thread_fence(std::memory_order_acquire);
    for (size_t i = 0; i < N; ++i)
    {
        assert(g_values[i] == 666);
    }
}
________________________________________________________________
As far as a total order over operations goes, I am not sure I understand what you
mean. I expect the store and the load in the following to be free to reorder:
________________________________________________________________
std::atomic<int> g_value1;
std::atomic<int> g_value2;

void foo()
{
    g_value1.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_release);
    g_value2.load(std::memory_order_relaxed);
}
________________________________________________________________
but not this:
________________________________________________________________
std::atomic<int> g_value1;
std::atomic<int> g_value2;

void foo()
{
    g_value1.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    g_value2.load(std::memory_order_relaxed);
}
________________________________________________________________
==============================================================================
TOPIC: __asm__ cmpxchg8b/cmpxchg16b
http://groups.google.com/group/comp.programming.threads/t/791f2415da140c34?hl=en
==============================================================================
== 1 of 1 ==
Date: Fri, Nov 27 2009 11:45 pm
From: "C Warwick"
"Chris M. Thomasson" <no@spam.invalid> wrote in message news:h7pkr2$1ed3
> void
> mpmcq_push(
> struct mpmcq* const self,
> struct node* node
> ) {
> struct node* prev;
> node->m_next = NULL;
> prev = ATOMIC_SWAP_RELAXED(&self->head, node); // #1
> ATOMIC_STORE_RELEASE(&prev->m_next, node); // #2
> }
Assuming I understand the ATOMIC____ functions right, wouldn't that basically
detach the two ends of the queue between lines #1 and #2? After #1 the head
points at the new node, but the previous last node doesn't link up to it yet.
So if a thread stalled between those two lines, consumers wouldn't be able to
get past that break; new items could still be added, but they'd be inaccessible
to consumers until the stalled thread completed the link?
==============================================================================