Tuesday, February 5, 2019

Digest for comp.lang.c++@googlegroups.com - 18 updates in 4 topics

mvorbrodt@gmail.com: Feb 05 11:24AM -0800

My blog entry about a fast C++ semaphore implementation: http://blog.vorbrodt.me/?p=495
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 05 03:20PM -0800

> my blog entry about fast c++ semaphore implementation: http://blog.vorbrodt.me/?p=495
 
Thank you for your interest. The "fast_semaphore" algorithm has many
advantages over a traditional "slow_semaphore". Fwiw, Joe Seigh
developed this back on comp.programming.threads within one of my
discussions with him, many years ago. IIRC, around 2007. Damn, I am
getting older. The ability to skip a lot of calls into the
"slow_semaphore" is very beneficial.
 
;^)
mvorbrodt@gmail.com: Feb 05 11:57AM -0800

I would like to implement an event class in C++: an object that can be signaled to wake up one or more other threads, and which upon being signaled can either stay signaled or reset itself to non-signaled once it wakes up one thread. Below is my attempt at implementing such a class using a condition variable. Is this a viable approach?
 
#pragma once

#include <mutex>
#include <condition_variable>

class event
{
public:
    void signal() noexcept
    {
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_signaled = true;
        }
        m_cv.notify_all();
    }

    void wait(bool reset = false) noexcept
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [&]{ return m_signaled; });
        if (reset) m_signaled = false;
    }

    void reset() noexcept
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_signaled = false;
    }

private:
    bool m_signaled = false;
    std::mutex m_mutex;
    std::condition_variable m_cv;
};
"Öö Tiib" <ootiib@hot.ee>: Feb 05 05:01AM -0800

On Sunday, 3 February 2019 15:50:38 UTC+2, Chris Ahlstrom wrote:
 
> That's about as much as I care about C++ on Windows. Not big fan of Microsoft
> "technology". Tends to be a will-o-the-wisp. At least Qt 5 seems to be
> <cough> stable.
 
The biggest bonus of Qt5 (as compared to Qt4) is that
it runs on Android and iOS. Getting other platform-
agnostic stuff (like, say, SDL2) running on phones
and tablets is far more painful. That is important
since desktops (and it doesn't matter if Windows,
Linux or OS X) are for armchair mushrooms, alive
people use handheld devices more than desktops.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 05 02:56PM +0100

On 05.02.2019 14:01, Öö Tiib wrote:
> since desktops (and it doesn't matter if Windows,
> Linux or OS X) are for armchair mushrooms, alive
> people use handheld devices more than desktops.
 
Isn't it impractical to develop on a phone?
 
I don't like typing on my phone, an Android thing.
 
Not to mention text selection and text operations in general.
 
 
Cheers!,
 
- Alf
Juha Nieminen <nospam@thanks.invalid>: Feb 05 03:03PM

> Isn't it impractical to develop on a phone?
 
> I don't like typing on my phone, an Android thing.
 
Phone applications aren't developed on the phone itself.
 
--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 05 04:35PM +0100

On 05.02.2019 16:03, Juha Nieminen wrote:
>> Isn't it impractical to develop on a phone?
 
>> I don't like typing on my phone, an Android thing.
 
> Phone applications aren't developed on the phone itself.
 
But then, according to the text I responded to, the developers are
"armchair mushrooms", and that can't be right either?
 
 
Cheers!
 
Alf (probably an armchair mushroom (at least he uses a chair, and every
time he sticks his head up it's cut off, just like a mushroom)).
jameskuyper@alumni.caltech.edu: Feb 05 07:54AM -0800

On Tuesday, February 5, 2019 at 8:01:22 AM UTC-5, Öö Tiib wrote:
...
> since desktops (and it doesn't matter if Windows,
> Linux or OS X) are for armchair mushrooms, alive
> people use handheld devices more than desktops.
 
My desktop computer is far easier to use than any of my smaller devices.
My external mouse and keyboard are easier to use than my laptop
machine's built-in devices, and far easier to use than those of my cell
phone. I don't have a tablet yet, but if and when I get one, I expect it
to present the same kinds of problems with input that my cell phone
does. My desktop also has an external display that is much larger than
the displays built into any of my other devices. Why would I have to be
an "armchair mushroom" to appreciate these advantages?
 
Note: you might suggest that I should connect my external mouse,
monitor, and keyboard to my laptop, and of course that does resolve the
issues I've mentioned above. However, my desktop machine is much smaller
than my laptop, has twice as many HDMI ports, twice as many Ethernet
ports, four times as many USB ports, and an MMC port. It does not,
unfortunately, have a battery. In any situation that doesn't require a
battery, and which does allow me to use an external mouse, monitor,
and keyboard, why should I connect them to my laptop instead of my
desktop machine?
"Öö Tiib" <ootiib@hot.ee>: Feb 05 08:50AM -0800

On Tuesday, 5 February 2019 17:36:09 UTC+2, Alf P. Steinbach wrote:
 
> > Phone applications aren't developed on the phone itself.
 
> But, then according the text I responded to, the developers are
> "armchair mushrooms", and that can't be right either?
 
Part of my point was that we are. And we are a tricky market; it is
awfully hard to sell anything to us. Lots of us use Vim or Emacs
despite there being plenty of free, more sophisticated development
tools. OTOH, when I try to develop without typing (for example
using voice and/or gestures), it looks insane from the side, and
such alternative interfaces are slow and inconvenient.
Manfred <noname@add.invalid>: Feb 05 06:05PM +0100

On 2/5/2019 2:01 PM, Öö Tiib wrote:
> since desktops (and it doesn't matter if Windows,
> Linux or OS X) are for armchair mushrooms, alive
> people use handheld devices more than desktops.
 
Not me. When I walk down the street and see all those droids walking
with their eyeballs fixed on their displays, "The Walking Dead" comes
immediately to mind.
Just another reply from an "armchair mushroom".
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 04 03:40PM -0800

On 2/4/2019 2:14 PM, Chris M. Thomasson wrote:
> is 100% perfectly standard compliant:
 
> https://groups.google.com/d/topic/comp.lang.c++/7u_rLgQe86k/discussion
 
> ;^)
 
I see where you are catching exceptions now, was assuming POD. Will try
to get this running in Relacy Race Detector. If there is a subtle
problem, it will find it. Also, emulation of exceptions can be
accomplished with a prng.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 04 04:02PM -0800

> My multi-threaded blocking queue implementation: https://blog.vorbrodt.me/?p=409
 
I noticed something strange in your semaphore::wait function. Your code is:
 
void wait()
{
    std::unique_lock<std::mutex> lock(m_mutex);
    m_cv.wait(lock, [&]{ return m_count > 0; });
    --m_count;
}
 
I had to alter the predicate into the loop form below because your
original deadlocks as-is. I decided to focus attention on your semaphore
in Relacy. This version works; notice the difference w.r.t. the predicate
in the wait function:
_______________________________
class semaphore
{
public:
    semaphore(unsigned int count) : m_count(count) {}
    //semaphore(const semaphore&&) = delete;
    //semaphore(semaphore&&) = delete;
    //semaphore& operator = (const semaphore&) = delete;
    //semaphore& operator = (semaphore&&) = delete;
    //~semaphore() = default;

    void post()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        m_mutex.lock($);
        ++VAR(m_count);
        m_cv.notify_one($);
        m_mutex.unlock($);
    }

    void wait()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        //m_cv.wait(lock, [&] { return m_count > 0; });

        m_mutex.lock($);

        while (VAR(m_count) == 0)
        {
            m_cv.wait(m_mutex, $);
        }

        --VAR(m_count);

        m_mutex.unlock($);
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    //unsigned int m_count;
    VAR_T(unsigned int) m_count;
};
_______________________________
 
 
 
Fwiw, here is my entire Relacy unit test:
_______________________________
// Queue Test...
//_______________________________________________


//#define RL_DEBUGBREAK_ON_ASSERT
//#define RL_MSVC_OUTPUT
//#define RL_FORCE_SEQ_CST
//#define RL_GC


#include <relacy/relacy_std.hpp>
#include <iostream>


// Simple macro based redirection of the verbose std membars.
#define CT_MB_ACQ std::memory_order_acquire
#define CT_MB_REL std::memory_order_release
#define CT_MB_RLX std::memory_order_relaxed
#define CT_MB_ACQ_REL std::memory_order_acq_rel
#define CT_MB_SEQ_CST std::memory_order_seq_cst
#define CT_MB(mp_0) std::atomic_thread_fence(mp_0)


// Some global vars directing the show...
// PRODUCERS must equal CONSUMERS for this test
#define PRODUCERS 3
#define CONSUMERS 3
#define THREADS (PRODUCERS + CONSUMERS)
#define ITERS 5


class semaphore
{
public:
    semaphore(unsigned int count) : m_count(count) {}
    //semaphore(const semaphore&&) = delete;
    //semaphore(semaphore&&) = delete;
    //semaphore& operator = (const semaphore&) = delete;
    //semaphore& operator = (semaphore&&) = delete;
    //~semaphore() = default;

    void post()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        m_mutex.lock($);
        ++VAR(m_count);
        m_cv.notify_one($);
        m_mutex.unlock($);
    }

    void wait()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        //m_cv.wait(lock, [&] { return m_count > 0; });

        m_mutex.lock($);

        while (VAR(m_count) == 0)
        {
            m_cv.wait(m_mutex, $);
        }

        --VAR(m_count);

        m_mutex.unlock($);
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    //unsigned int m_count;
    VAR_T(unsigned int) m_count;
};


// Relacy Stack Test...
struct ct_qtest_test
    : rl::test_suite<ct_qtest_test, THREADS>
{
    semaphore g_sem;
    VAR_T(unsigned int) g_shared;

    ct_qtest_test() : g_sem(1) {}

    void before()
    {
        VAR(g_shared) = 0;
    }

    void after()
    {
        RL_ASSERT(VAR(g_shared) == 0);
    }

    void consumer(unsigned int tidx)
    {
        g_sem.wait();
        --VAR(g_shared);
        g_sem.post();
    }

    void producer(unsigned int tidx)
    {
        g_sem.wait();
        ++VAR(g_shared);
        g_sem.post();
    }

    void thread(unsigned int tidx)
    {
        if (tidx < PRODUCERS)
        {
            producer(tidx);
        }
        else
        {
            consumer(tidx);
        }
    }
};


// Test away... Or fly? Humm...
int main()
{
    {
        rl::test_params p;

        p.iteration_count = 10000000;
        //p.execution_depth_limit = 33333;
        //p.search_type = rl::sched_bound;
        //p.search_type = rl::fair_full_search_scheduler_type;
        //p.search_type = rl::fair_context_bound_scheduler_type;

        rl::simulate<ct_qtest_test>(p);
    }

    return 0;
}
_______________________________
 
 
This works without any problems.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 04 04:25PM -0800

On 2/4/2019 4:02 PM, Chris M. Thomasson wrote:
>         m_cv.wait(lock, [&]{ return m_count > 0; });
>         --m_count;
>     }
[...]
 
Funny, the two conditions are simply inverses of one another, and work
basically the same. Relacy does not handle condvar waits with lambdas.
Sorry about that.
 
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Feb 04 05:11PM -0800

> My multi-threaded blocking queue implementation: https://blog.vorbrodt.me/?p=409
 
I finally implemented the whole thing in Relacy. It works fine. Fwiw,
here is my test unit:
__________________________
// Queue Test...
//_______________________________________________


//#define RL_DEBUGBREAK_ON_ASSERT
//#define RL_MSVC_OUTPUT
//#define RL_FORCE_SEQ_CST
//#define RL_GC


#include <relacy/relacy_std.hpp>
#include <iostream>


// Simple macro based redirection of the verbose std membars.
#define CT_MB_ACQ std::memory_order_acquire
#define CT_MB_REL std::memory_order_release
#define CT_MB_RLX std::memory_order_relaxed
#define CT_MB_ACQ_REL std::memory_order_acq_rel
#define CT_MB_SEQ_CST std::memory_order_seq_cst
#define CT_MB(mp_0) std::atomic_thread_fence(mp_0)


// Some global vars directing the show...
// PRODUCERS must equal CONSUMERS for this test
#define PRODUCERS 3
#define CONSUMERS 3
#define BUFFER 2
#define THREADS (PRODUCERS + CONSUMERS)
#define ITERS 7


class semaphore
{
public:
    semaphore(unsigned int count) : m_count(count) {}
    //semaphore(const semaphore&&) = delete;
    //semaphore(semaphore&&) = delete;
    //semaphore& operator = (const semaphore&) = delete;
    //semaphore& operator = (semaphore&&) = delete;
    //~semaphore() = default;

    void post()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        m_mutex.lock($);
        ++VAR(m_count);
        m_cv.notify_one($);
        m_mutex.unlock($);
    }

    void wait()
    {
        //std::unique_lock<std::mutex> lock(m_mutex);
        //m_cv.wait(lock, [&] { return m_count > 0; });

        m_mutex.lock($);

        while (VAR(m_count) == 0)
        {
            m_cv.wait(m_mutex, $);
        }

        --VAR(m_count);

        m_mutex.unlock($);
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    //unsigned int m_count;
    VAR_T(unsigned int) m_count;
};


template<typename T, std::size_t T_size>
class blocking_queue
{
public:
    blocking_queue()
        : m_pushIndex(0), m_popIndex(0), m_count(0),
          m_openSlots(T_size), m_fullSlots(0) {}

    //blocking_queue(const blocking_queue&) = delete;
    //blocking_queue(blocking_queue&&) = delete;
    //blocking_queue& operator = (const blocking_queue&) = delete;
    //blocking_queue& operator = (blocking_queue&&) = delete;

    /*
    ~blocking_queue()
    {
        while (m_count--)
        {
            m_data[m_popIndex].~T();
            m_popIndex = ++m_popIndex % m_size;
        }
        operator delete(m_data);
    }
    */

    void push(const T& item)
    {
        m_openSlots.wait();
        {
            //std::lock_guard<std::mutex> lock(m_cs);
            //new (m_data + m_pushIndex) T(item);
            m_cs.lock($);
            VAR(m_data[VAR(m_pushIndex)]) = item;
            VAR(m_pushIndex) = ++VAR(m_pushIndex) % T_size;
            ++VAR(m_count);
            m_cs.unlock($);
        }
        m_fullSlots.post();
    }

    void pop(T& item)
    {
        m_fullSlots.wait();
        {
            //std::lock_guard<std::mutex> lock(m_cs);
            m_cs.lock($);
            item = VAR(m_data[VAR(m_popIndex)]);
            VAR(m_popIndex) = ++VAR(m_popIndex) % T_size;
            --VAR(m_count);
            m_cs.unlock($);
        }
        m_openSlots.post();
    }

    /*
    bool empty()
    {
        std::lock_guard<std::mutex> lock(m_cs);
        return m_count == 0;
    }
    */

private:
    //unsigned int m_size;
    VAR_T(unsigned int) m_pushIndex;
    VAR_T(unsigned int) m_popIndex;
    VAR_T(unsigned int) m_count;
    VAR_T(T) m_data[T_size];

    semaphore m_openSlots;
    semaphore m_fullSlots;
    std::mutex m_cs;
};


// Relacy Stack Test...
struct ct_qtest_test
    : rl::test_suite<ct_qtest_test, THREADS>
{
    blocking_queue<unsigned int, BUFFER> g_queue;

    void before()
    {
    }

    void after()
    {
    }

    void consumer(unsigned int tidx)
    {
        unsigned int data = 0;

        for (unsigned int i = 0; i < ITERS; ++i)
        {
            g_queue.pop(data);
            RL_ASSERT(data != tidx);
        }
    }

    void producer(unsigned int tidx)
    {
        for (unsigned int i = 0; i < ITERS; ++i)
        {
            g_queue.push(tidx);
        }
    }

    void thread(unsigned int tidx)
    {
        if (tidx < PRODUCERS)
        {
            producer(tidx);
        }
        else
        {
            consumer(tidx);
        }
    }
};


// Test away... Or fly? Humm...
int main()
{
    {
        rl::test_params p;

        p.iteration_count = 10000000;
        //p.execution_depth_limit = 33333;
        //p.search_type = rl::sched_bound;
        //p.search_type = rl::fair_full_search_scheduler_type;
        //p.search_type = rl::fair_context_bound_scheduler_type;

        rl::simulate<ct_qtest_test>(p);
    }

    return 0;
}
__________________________
 
 
A-okay. :^)
Bonita Montero <Bonita.Montero@gmail.com>: Feb 05 08:41AM +0100

>> than a simple link-node in the queue.
 
> That means that the producer needs to use a lot more memory to make
> these data structures - ...
 
No, these data structures are in memory anyway.
Just the link node in the queue comes afterwards.
Juha Nieminen <nospam@thanks.invalid>: Feb 05 09:35AM

> fine using your current code. However, the data returned is dubious
> because the mutex m_cs was not locked where you mutate m_count. If empty
> is called a lot, then this type of racy check might be perfectly fine.
 
Making the variable atomic only makes sure that reading and writing the
variable itself concurrently won't result in garbage values. However, it
doesn't protect from other types of race conditions that might happen.
For example, a routine might read the variable and see that the
container is not empty, when in reality it is; it's just that the
routine that emptied the container was just about to zero the size
variable when that other thread read it.
 
If the code needs to make sure it doesn't get incorrect information
from that function, it needs to use the same mutex, to make sure
it's not reading the variable while another thread is modifying
the data structure.
 
All this can add quite a lot of overhead, but that's the eternal dilemma
with multithreading. (A lot of research has been put into developing
lock-free data containers, but that's a very hard problem.)
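[A sketch of the point above: even a mutex-protected empty() can be stale by the time the caller acts on it. Folding the check and the mutation into one critical section (a try_pop, a hypothetical name here) removes the check-then-act race entirely.]

```cpp
#include <mutex>
#include <optional>
#include <queue>

// Illustrative queue: empty() is individually synchronized but can be
// stale by the time the caller reacts; try_pop checks and pops under
// the same lock acquisition, so no other thread can interleave.
template <typename T>
class locked_queue
{
public:
    void push(const T& item)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(item);
    }

    // Racy pattern: "if (!q.empty()) q.pop(...)" can interleave with
    // another consumer between the two calls.
    bool empty() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_queue.empty();
    }

    // Safe pattern: check-and-act under one lock.
    std::optional<T> try_pop()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty())
            return std::nullopt;
        T item = m_queue.front();
        m_queue.pop();
        return item;
    }

private:
    mutable std::mutex m_mutex;
    std::queue<T> m_queue;
};
```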
 
David Brown <david.brown@hesbynett.no>: Feb 05 11:15AM +0100

On 05/02/2019 08:41, Bonita Montero wrote:
>> these data structures - ...
 
> No, these data-structures are in memory anyway.
> Just the link-node in the queue comes afterwards.
 
They will not be in memory if the producer has stopped producing!
Bonita Montero <Bonita.Montero@gmail.com>: Feb 05 11:46AM +0100

>> No, these data-structures are in memory anyway.
>> Just the link-node in the queue comes afterwards.
 
> They will not be in memory if the producer has stopped producing!
 
It's plainly idiotic not to hand off items to other threads because
of memory issues; if there isn't a memory collapse that stops further
processing, it always makes sense to hand off the items to the consumer.
And if there is a memory collapse, this will just stop the processing
of an item. No one develops the producer in a way that it will build
different data structures for processing in the producer if a queue
is full. And this short-circuited processing would also be stupid
because it would stop the producer from processing input from other
sources for a while.