Thursday, January 1, 2009

comp.programming.threads - 12 new messages in 7 topics - digest

comp.programming.threads
http://groups.google.com/group/comp.programming.threads?hl=en

comp.programming.threads@googlegroups.com

Today's topics:

* Microsoft video: Software Transactional Memory: The Current State of the Art
- 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/b0e584b7be010f44?hl=en
* A scoped lock/unlock implementation in C++. - 6 messages, 4 authors
http://groups.google.com/group/comp.programming.threads/t/8262569acfbd7d01?hl=en
* FIFO threads - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/c8b17e5a73f0babf?hl=en
* Why are Boost thread mutexes so slow compared to Pthreads? - 1 message, 1
author
http://groups.google.com/group/comp.programming.threads/t/9c9fd9b9ccafc16a?hl=en
* discount Air max 95 shoes www.king-trade.cn - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/7228642a20e7639e?hl=en
* Discount Coach Sandals, Dior Sandals, Prada Sandals, Chanel Sandals, - 1
message, 1 author
http://groups.google.com/group/comp.programming.threads/t/574afe12a2d90050?hl=en
* Call for Papers: The 2009 International Conference on Computer Graphics and
Virtual Reality (CGVR'09), USA, July 13-16, 2009 - 1 message, 1 author
http://groups.google.com/group/comp.programming.threads/t/4515a1b06bd72998?hl=en

==============================================================================
TOPIC: Microsoft video: Software Transactional Memory: The Current State of
the Art
http://groups.google.com/group/comp.programming.threads/t/b0e584b7be010f44?hl=en
==============================================================================

== 1 of 1 ==
Date: Tues, Dec 30 2008 10:11 pm
From: kimo

http://tinyurl.com/8y2zr8

Software Transactional Memory: The Current State of the Art
Posted By: Charles | Dec 29th @ 12:19 PM

A few years ago I got the chance to learn about Software Transactional
Memory for the first time while visiting MSR Cambridge. The great
Simon Peyton-Jones and Tim Harris explained to me the thinking behind
STM and how it might evolve. It was a tremendously interesting
conversation. If you haven't watched that interview, I highly
recommend it as a precursor to this one. Today, STM is no longer only
a research project. The Parallel Computing Platform team is incubating
and extending the technology, finding that it may in fact work in the
real world...

Of course, there is no silver bullet to solving the Concurrency
Problem, but STM may be an important part of a larger solution (you've
learned a great deal about what Microsoft is up to in the concurrency
and parallelism space here on Channel 9 and it should be somewhat
clear by now that many of the technologies we've presented to you may
end up as pieces of a broader solution...)

Here, STM Program Manager Dana Groff and STM Principal Developer Lead
Yossi Levanoni discuss the current state of STM and outline the work
their team is doing to craft this incubation/research technology into
a practical real-world solution (STM is not available yet for
experimentation. It's in incubation. It's not known if or when STM
will become a viable product.). So, how has STM evolved over the past
two years, anyway? Tune in.

==============================================================================
TOPIC: A scoped lock/unlock implementation in C++.
http://groups.google.com/group/comp.programming.threads/t/8262569acfbd7d01?hl=en
==============================================================================

== 1 of 6 ==
Date: Wed, Dec 31 2008 2:13 am
From: JC


A recent thread about scope-based locking and scope-based unlocking
got me thinking about an implementation that could get around some of
the problems discussed in that thread.

Define a "scoped lock" as an object that acquires a lock on a mutex at
initialization and releases the lock when it goes out of scope, and a
"scoped unlock" as an object that releases a lock on a mutex at
initialization and acquires the lock when it goes out of scope. Note
that these definitions do not require the lock to be held the entire
time a "scoped lock" is in scope, nor do they require that no lock is
held the entire time a "scoped unlock" is in scope (this was a point
of debate in the aforementioned thread).

The specific problem this addresses is nested scoped locks and
unlocks. For example, in the following situation:

{
    scoped_lock locker(g_mutex);
    {
        scoped_lock locker(g_mutex);
        {
            scoped_unlock unlocker(g_mutex);
            // <-- if g_mutex supports recursive locks, then
            // it actually remains locked here.
        }
    }
}

The implementation below guarantees that "unlocker" will release the
lock on the mutex.
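One way to provide that guarantee even for recursive mutexes is to track the acquisition depth and have the unlocker release every level. A minimal sketch of that idea (hypothetical names; the posted implementation tracks nesting with "regions" instead):

```cpp
#include <mutex>

// Sketch only: a mutex wrapper that counts recursive acquisitions so a
// scoped unlock can fully release the mutex and later restore the depth.
class counting_mutex {
    std::recursive_mutex m_;
    int depth_ = 0;   // acquisition depth of the owning thread
public:
    void lock()   { m_.lock(); ++depth_; }
    void unlock() { --depth_; m_.unlock(); }
    // Release every acquisition currently held by this thread and report
    // how many there were, so they can be restored later.
    int release_all() {
        int n = depth_;
        for (int i = 0; i < n; ++i) unlock();
        return n;
    }
    void reacquire(int n) { for (int i = 0; i < n; ++i) lock(); }
};

// Fully releases at construction, restores the saved depth at scope exit.
class scoped_unlock {
    counting_mutex &m_;
    int saved_;
public:
    explicit scoped_unlock(counting_mutex &m)
        : m_(m), saved_(m.release_all()) {}
    ~scoped_unlock() { m_.reacquire(saved_); }
};
```

With this, the inner unlocker in the nested example really does release the mutex, regardless of how many enclosing scoped locks exist.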

The implementation also provides debugging asserts to flag various
mistakes. Comments are in the code below.

One issue that was brought up is the locking/unlocking of the mutex
during stack unwinding after an exception is thrown. This
implementation does not "solve" that problem. That problem is, in
fact, unavoidable, and present in any scope-based locking scheme. The
reason it is unavoidable is because inner scopes do not have knowledge
of how outer scopes will handle exceptions, and outer scopes must be
able to make correct assumptions about the state of the mutex when
handling exceptions. Therefore, "optimizations" such as skipping a
transient lock during stack unwinding are simply not possible without
breaking assumptions about the mutex's state along the way.

Here is the implementation I came up with. It defines the concept of
locked and unlocked "regions". It uses an std::vector<int> to track
the nesting level of regions. It also provides "unscoped" locking and
unlocking for completeness. In the code below, a simple benchmark of
these two loops:

static const int ITERS = 100000;
mutex m;
mutex_locker ml(m);
int n;

for (n = 0; n < ITERS; ++n) {
    m.lock();
    m.unlock();
}

for (n = 0; n < ITERS; ++n) {
    ml.enter_locked();
    ml.leave_locked();
}

yielded the following times when compiled with MSVC 2008 and run on a
2.16 GHz Intel Core Duo T2600 running Windows XP SP3:

Release Mode:
mutex: 3.7393 ms / 100000 iters.
mutex_locker: 5.6954 ms / 100000 iters.

Debug Mode:
mutex: 15.7196 ms / 100000 iters.
mutex_locker: 1000.9 ms / 100000 iters.

The performance of the mutex_locker in release mode is worse than the
mutex, but may be acceptable for many applications. The performance of
the mutex_locker in debug mode is significantly worse, due mainly to
debug assertions in the code. Improvements to release mode performance
are welcome. Note that the use of an std::list rather than an
std::vector increased the release mode mutex_locker time to approx.
100 ms per 100000 iterations.

Here is the code. However, as I suspect Google Groups will break the
lines in awkward places, I have also placed the code here:

http://pastebin.com/f6cc3ac1a

6 classes are provided:

mutex - A basic mutex.
mutex_locker - Provides lock/unlock "region" functionality.
scoped_lock, scoped_unlock - Scope-based region management.
unscoped_lock, unscoped_unlock - Provided for completeness.

Details are in source comments. Suggestions/criticism welcome. This
was mostly done as an exercise. Note that while I put pthreads support
in there, I did not actually try to compile it on a POSIX system --
apologies for any mistakes.


Jason


#define USE_WIN_ASSERT 1
#define USE_STD_ASSERT 2
#define USE_WIN_THREADS 1
#define USE_POSIX_THREADS 2

#if defined(_WIN32) && !defined(__CYGWIN__)
# define WIN32_LEAN_AND_MEAN
# include <windows.h>
# define USE_ASSERT USE_WIN_ASSERT
# define USE_THREADS USE_WIN_THREADS
#else
// Assume pthreads is present if we're not on Windows. Of course, this will not
// always be correct. Adjust as needed.
# define USE_ASSERT USE_STD_ASSERT
# define USE_THREADS USE_POSIX_THREADS
