- Fwiw, a thread-safe c++ deque example... - 12 Updates
- ,, - 2 Updates
- The Lord calls out to you for repentance, salvation - 2 Updates
- One way to get strong exception guarantee for a multithreaded pop: a Dequeued_ class - 3 Updates
- Still no inheritance for enums? - 3 Updates
- lock-free single-producer/multi-consumer unbounded queue... - 3 Updates
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 12:43AM +0100 On 20-Mar-17 2:13 PM, Chris Vine wrote: > to splice, which only swaps some pointers). I say "generic" because if > you know T will be a fundamental type or an aggregate of fundamental > types, a deque would be better. That's an interesting advantage of `std::list`. I was sure that no practically relevant advantage was left after C++11. > copy assignment operator) might throw. So it is a matter of design > whether you want to de-queue using splicing also, or offer that as an > option. Worth noting that std::vector, instead of prohibiting a non-copyable item type with possibly throwing move operation, allows it and then doesn't offer any guarantee. Let's define the strong exception guarantee for a pop-like operation, whether it returns the item via function result or as out-argument, as * either client code has a logical copy of the former front item, and the front of the queue has been popped, or * an exception was thrown and the queue is unchanged. The C++03 standard library sought to /avoid/ providing this guarantee by separating the value copying, `front()`, from the actual popping, `pop_front()`. With no combined operation the queue itself was exception safe, providing the strong guarantee for each of its operations. And by just being somewhat awkward, with a separate `pop_front()` call after acquiring the value, the client code was safe. So the question is, how can that after-copy call be automated? A natural way would be via a destructor. But doing that directly would rely on detecting whether an exception was thrown by the (logical) copying. 
Given that a combined pop operation can be called during a stack unwinding, the only way I can think of to detect a copy exception is to return not the value/object itself, but a double indirection, which really complicates things relative to Chris M.thomasson's example: <code> #include <cstdio> #include <deque> #include <condition_variable> #include <mutex> #include <memory> #include <thread> #include <utility> // std::forward, std::move #include <algorithm> #include <cassert> template< class Item, class Queue > class Queue_popper_; template< class Item, class Queue_popper > class Queue_front_item_ { template< class I, class Q > friend class Queue_popper_; private: Queue_popper* p_popper_; Queue_front_item_( Queue_front_item_ const& ) = delete; public: Item item; ~Queue_front_item_() { p_popper_->on_object_destroyed(); } Queue_front_item_( Queue_front_item_&& other ) : p_popper_( other.p_popper_ ) , item{ std::move( other.item ) } { p_popper_->on_object_created(); } private: template< class... Args > Queue_front_item_( Queue_popper& popper, Args&&... args ) : p_popper_{ &popper } , item{ std::forward<Args>( args )... 
} { p_popper_->on_object_created(); } }; template< class Item, class Queue > class Queue_popper_ { private: using Lock = std::unique_lock<std::mutex>; Queue* p_queue_ = nullptr; int n_item_calls_ = 0; int n_created_objects_ = 0; Lock lock_; Queue_popper_( Queue_popper_ const& ) = delete; public: void on_object_created() { ++n_created_objects_; } void on_object_destroyed() { --n_created_objects_; } auto item() -> Queue_front_item_<Item, Queue_popper_> { assert( n_item_calls_ == 0 ); ++n_item_calls_; if( noexcept( Item{ std::move( p_queue_->front() ) } ) ) { return { *this, std::move( p_queue_->front() ) }; } else { return { *this, p_queue_->front() }; } } ~Queue_popper_() { assert( 0 <= n_created_objects_ and n_created_objects_ <= 1 ); if( n_item_calls_ == 0 or n_created_objects_ == 1 ) { p_queue_->pop_front(); } } Queue_popper_( Queue& q, Lock&& lock ) : p_queue_{ &q } , lock_( move( lock ) ) {} Queue_popper_( Queue_popper_&& other ) noexcept : p_queue_{ exchange( nullptr, other.p_queue_ ) } , n_item_calls_{ exchange( 0, other.n_item_calls_ ) } , n_created_objects_{ exchange( 0, other.n_created_objects_ ) } , lock_{ move( other.lock_ ) } {} }; template<typename T> struct queue { typedef std::deque<T> raw_queue_t; raw_queue_t m_queue; std::condition_variable m_cond; std::mutex m_mutex; void push(T const& obj) { { std::unique_lock<std::mutex> lock(m_mutex); m_queue.push_back(obj); } m_cond.notify_one(); } auto pop() -> Queue_popper_<T, std::deque<T>> { std::unique_lock<std::mutex> lock(m_mutex); while (! 
m_queue.size()) m_cond.wait(lock); return { m_queue, move( lock ) }; } }; typedef queue<unsigned int> string_queue_t; #define CONSUMERS 5 #define N 1000 void producer(std::mutex& std_out_mtx, string_queue_t& queue) { { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("producer::queue::(%p) - enter\n", (void*)&queue); } for (unsigned int i = 0; i < N; ++i) { queue.push(i + 1); std::this_thread::yield(); // just for some spice } for (unsigned int i = 0; i < CONSUMERS; ++i) { queue.push(0); std::this_thread::yield(); // just for some spice } { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("producer::queue::(%p) - exit\n", (void*)&queue); } } void consumer(unsigned int id, std::mutex& std_out_mtx, string_queue_t& queue) { { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer(%u)::queue::(%p) - enter\n", id, (void*)&queue); } unsigned int prev = 0; for (;;) { auto const msg = queue.pop().item(); { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer(%u)::msg::(%u)\n", id, msg.item ); } if (msg.item == 0) break; assert(msg.item > prev); // validate fifo nature prev = msg.item; } { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer::queue::(%p) - exit\n", (void*)&queue); } } int main() { string_queue_t queue; std::mutex std_out_mutex; std::thread consumers[CONSUMERS]; for (unsigned int i = 0; i < CONSUMERS; ++i) { consumers[i] = std::thread( consumer, i, std::ref(std_out_mutex), std::ref(queue) ); } std::thread producer_thread( producer, std::ref(std_out_mutex), std::ref(queue) ); producer_thread.join(); for (unsigned int i = 0; i < CONSUMERS; ++i) { consumers[i].join(); } } </code> I'm not even entirely sure that that strong exception guarantee logic is correct, but I've verified that this modified-to-be-safe-for-general-item-type version produces exactly as many bytes of output as original. 
> earlier times strong exception safety would have been dealt with by > pop() taking a T& out parameter so the popped object is not copied or > moved twice. I think a more reasonable approach is to /require/ that a move operation on the item type will be no-throwing. Not that it necessarily actually moves, but just that it doesn't throw. > You might well want rate-limiting on a queue of this kind to prevent > producers overrunning consumers. That introduces all sorts of other > trade-offs. Cheers!, - Alf |
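Alf's suggestion of requiring a non-throwing move can be made mechanical. Here is a minimal sketch (class and member names are mine, not from the thread): a value-returning pop() that refuses to compile for item types whose move constructor may throw, which is exactly what makes returning by value strongly exception safe.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <type_traits>
#include <utility>

// Hypothetical sketch: a blocking queue whose value-returning pop() is
// only available for item types with a noexcept move constructor, so
// the "copy may throw after the pop" problem cannot arise.
template<typename T>
class nothrow_move_queue {
    static_assert(std::is_nothrow_move_constructible<T>::value,
                  "pop() by value requires a non-throwing move constructor");
    std::deque<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this]{ return !q_.empty(); });
        T v = std::move(q_.front());  // cannot throw, by the static_assert
        q_.pop_front();               // deque::pop_front does not throw
        return v;                     // moved on return, also non-throwing
    }
};
```

Fundamental types, std::string, std::unique_ptr and the like all qualify; a type with a potentially throwing move is rejected at compile time rather than silently losing the strong guarantee.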
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 12:51AM +0100 On 21-Mar-17 12:43 AM, Alf P. Steinbach wrote: > correct, but I've verified that this > modified-to-be-safe-for-general-item-type version produces exactly as > many bytes of output as original. Well, just looking at it now that I posted it, I see that this, ~Queue_front_item_() { p_popper_->on_object_destroyed(); } will in general be using a dangling pointer. So even more complication is needed: the popper must null that pointer, and the destructor above must check it. Apparently exception safety for the value-returning pop() is ugly! But maybe someone has a bright idea. Maybe. :) Cheers!, - Alf |
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 21 01:00AM On Tue, 21 Mar 2017 00:43:52 +0100 > That's an interesting advantage of `std::list`. > I was sure that no practically relevant advantage was left after > C++11. Yes, it is a kind of poor man's lock free. It isn't strictly lock free (it uses a mutex), but it meets the guarantee of always proceeding, because it only operates on pointer swaps which can't block except transiently by contention. > Worth noting that std::vector, instead of prohibiting a non-copyable > item type with possibly throwing move operation, allows it and then > doesn't offer any guarantee. If you splice on popping a list you will always get basic safety, but not necessarily strong safety - you can lose a queue item but the queue is left in a valid state. > `pop_front()` call after acquiring the value, the client code was > safe. > So the question is, how can that after-copy call be automated? On a list, I think the easiest thing is, as you suggest further below, to require a move not to throw, so side-stepping the problem. If that is not good enough, then I suspect it is back to a pop_front() and deallocating defunct queue items under the mutex (not quite as bad as allocating under the mutex, particularly if a move operation has been carried out so there is little left to actually deallocate). [snip] > operation on the item type will be no-throwing. > Not that it necessarily actually moves, but just that it doesn't > throw. I think that is a good approach. The problem though is that if RVO is not carried out you get a copy rather than a move generated by the return statement. I am quite disappointed to find that C++11 does not require named object return statements to be optimized away: I thought the proposal was that they would be. Maybe there were reasons for not proceeding with that. Having said that, I am not allergic to out parameters. Chris |
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 21 01:05AM On Tue, 21 Mar 2017 01:00:08 +0000 > I think that is a good approach. The problem though is that if RVO is > not carried out you get a copy rather than a move generated by the > return statement. I am quite disappointed to find that C++11 does not ^^^^^ C++17 |
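To make Chris Vine's splice-based dequeue concrete, here is a minimal sketch (class and function names are mine, not from the thread). Allocation happens outside the mutex in push(), and the pop side transfers the front node by splicing, so only pointers are touched while the lock is held:

```cpp
#include <list>
#include <mutex>
#include <utility>

// Sketch of a splice-based queue: nodes move between lists by pointer
// swaps, so nothing is constructed, copied, or deallocated under the lock.
template<typename T>
class splice_queue {
    std::list<T> items_;
    std::mutex m_;
public:
    void push(T v) {
        std::list<T> node;
        node.push_back(std::move(v));       // allocate outside the mutex
        std::lock_guard<std::mutex> lk(m_);
        items_.splice(items_.end(), node);  // O(1) pointer swap
    }
    // Returns a one-element list holding the former front,
    // or an empty list if the queue was empty.
    std::list<T> try_pop() {
        std::list<T> out;
        std::lock_guard<std::mutex> lk(m_);
        if (!items_.empty())
            out.splice(out.begin(), items_, items_.begin()); // O(1)
        return out;
    }
};
```

The caller owns the returned list, so the node's destruction (and any throwing operations on T) happen entirely outside the mutex; as Chris notes, this gives basic rather than strong safety if T's operations can throw.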
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 09:28PM -0700 On 3/20/2017 6:05 PM, Chris Vine wrote: >> return statement. I am quite disappointed to find that C++11 does not > ^^^^^ > C++17 Sorry if I sound daft here, but I cannot really see a problem wrt exceptions occurring in the following "out" version of the pop function of the queue: ____________________________ template<typename T> struct queue { typedef std::deque<T> raw_queue_t; raw_queue_t m_queue; std::condition_variable m_cond; std::mutex m_mutex; void push(T const& obj) { { std::unique_lock<std::mutex> lock(m_mutex); m_queue.push_back(obj); } m_cond.notify_one(); } void pop(T& out) { std::unique_lock<std::mutex> lock(m_mutex); while (!m_queue.size()) m_cond.wait(lock); out = m_queue.front(); m_queue.pop_front(); } }; ____________________________ Much better to me. The calling code can deal with that out parameter in whatever way it wants to. ;^) The way it was before, function returning, had this damn "ghost" user type T out there: ____________________________ T pop() { T front; // HELLO? ARE YOU THERE? { std::unique_lock<std::mutex> lock(m_mutex); while (! m_queue.size()) m_cond.wait(lock); front = m_queue.front(); m_queue.pop_front(); } return front; } ____________________________ I must have had PODs on the mind. Yikes! Anyway, I thank you all for the most valuable input. This has to be signal in the NG... :^) |
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 09:30PM -0700 On 3/20/2017 9:28 PM, Chris M. Thomasson wrote: > out = m_queue.front(); > m_queue.pop_front(); > } If an exception occurs in pop_front, the out param will still hold a reference to the front of the queue. Not sure if this is Kosher or not. I guess we can say, if this throws, the out param is undefined, even though it references the front of the queue. Make any sense? Thanks. |
| kushal bhattacharya <bhattacharya.kushal4@gmail.com>: Mar 20 09:53PM -0700 Hi, your mechanism is working fine, but in my case I am implementing it in a totally different scenario. |
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 09:56PM -0700 On 3/20/2017 9:53 PM, kushal bhattacharya wrote: > hi your mechanism is working fine but in my case i am implementing in a totally different scenario How? Are you iterating over the deque in some cases without removing anything from it? Like reader threads, reader access? Non-mutating access? We need more details... |
| kushal bhattacharya <bhattacharya.kushal4@gmail.com>: Mar 20 10:00PM -0700 Hi. First I want to tell you where I am implementing this: I am building an MQTT broker. The parsing thread runs according to the packets coming from the client, and accordingly it pushes information into the queue as a message object. When a message is available from the queue, it is pulled out by the pre-spawned transmission threads, so in those cases I am popping it off the deque, and also in the parser thread when all the acknowledgments are over. |
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 06:15AM +0100 On 21-Mar-17 5:30 AM, Chris M. Thomasson wrote: > reference to the front of the queue. Not sure if this is Kosher or not. > I guess we can say, if this throws, the out param is undefined, even > thought it references the front of the queue. The `=` here gives you an assignment, not a reference re-seating (i.e., if this throws then the out param does not reference the front of the queue – it is instead unchanged, if that assignment guarantees that). Since `m_queue.front()` produces a regular reference, the `=` here denotes a regular copy assignment, not a move assignment. And that's great for exception safety, because if T has a copy assignment that offers the success-or-unchanged-with-exception guarantee, which Dave Abrahams called "the strong exception guarantee", then all that happens if that assignment fails is that the assignment fails, and execution of pop() is terminated via the exception. And so this pop() offers the strong exception guarantee contingent on T having a copy assignment operator with that guarantee. One cost is the requirement that T must be copy assignable and – for practical use – default constructible. Another cost is that the T object in the client code can't be `const`. • • • One possible optimization is to use the `noexcept` /operator/ to check if a move assignment to `T` can throw. And if it cannot, then just use a move assignment, knowing that the possibly pilfered-for-resources front of queue object will be removed in the next statement. Like this, where the check with noexcept does not actually execute the expression: if( noexcept( out = std::move( m_queue.front() ) ) ) { out = std::move( m_queue.front() ); // Move, pilfering resources. } else { out = m_queue.front(); // Copy, possible exception. } > Make any sense? Thanks. Yes. But also see the solution I posted in "One way to get strong exception guarantee for a multithreaded pop: a Dequeued_ class". 
It requires only that T is copy constructible. The main other cost is an unconventional notation for popping. Namely, not writing T const item = q.pop(); // Sort of the ideal to strive for! but Dequeued_<T> const item{ q }; // Good enough, IMHO. :) I did not optimize that for noexcept, but it's not difficult to add. Cheers!, - Alf |
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 21 10:54AM On Mon, 20 Mar 2017 21:30:52 -0700 > not. I guess we can say, if this throws, the out param is undefined, > even thought it references the front of the queue. > Make any sense? Thanks. Yes, this is exactly what I was suggesting you should do. There are variations on this theme. I would move into 'out': out = std::move(m_queue.front()); and document that this pop is only strongly exception safe if either the queue item's move constructor doesn't throw or, if it does, the move is itself strongly exception safe (it either succeeds or leaves the object unchanged). Giving this method the same exception safety as the queue item's move assignment operator, and documenting that, seems perfectly reasonable to me. Alf has come up with a more sophisticated version which selects on whether the move assignment operator is noexcept or not - I didn't actually know you could do that. But that leaves out the case of a queue item with a move assignment operator which is strongly exception safe, and surely most are. With a list there is the option of a splice, as I have mentioned. This would only give strong exception safety if the queue item's move assignment operator does not throw, otherwise it only gives the basic guarantee. The basic guarantee gives you pretty much all you need for many purposes, and is all you are going to get out of a lock free queue. Chris |
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 21 10:55AM On Tue, 21 Mar 2017 10:54:08 +0000 Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote: [snip] > out = std::move(m_queue.front()); > and document that this pop is only strongly exception safe if either > the queue item's move constructor doesn't throw or, if it does, the ^^^^^^^^^^^^^^^^ move assignment operator |
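For completeness: the dispatch Alf wrote out with the noexcept operator is close to what C++11's std::move_if_noexcept already packages up, though note the standard helper tests the move *constructor* rather than the move assignment, so this is an approximation, not a drop-in equivalent. A sketch of an out-parameter pop body using it (the helper name pop_into is mine):

```cpp
#include <deque>
#include <string>
#include <utility>

// std::move_if_noexcept yields an rvalue reference when T's move
// constructor is noexcept (or T is move-only), and an lvalue reference
// otherwise, so the assignment below picks move or copy at compile time.
template<typename T>
void pop_into(std::deque<T>& q, T& out) {
    out = std::move_if_noexcept(q.front()); // move if safe, else copy
    q.pop_front();                          // never throws on deque
}
```

For std::string, whose move constructor is noexcept, this moves; for a type with a throwing move it falls back to the copy, preserving the strong guarantee contingent on T's copy assignment, exactly as discussed above.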
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 03:45AM +0100 Based on Chris M. Thomasson's earlier example of producer/consumer queue. The idea here is to replace the named `pop()` operation with a wrapper class for the result, here called `Dequeued_`. I think that's new because I haven't seen it anywhere. #include <cstdio> #include <deque> #include <condition_variable> #include <mutex> #include <memory> #include <thread> #include <utility> // std::move #include <algorithm> #include <cassert> namespace g{ std::mutex std_out_mutex; } // namespace g using Lock = std::unique_lock<std::mutex>; template< class... Args > void say( char const* format, Args... args ) { Lock const lock{ g::std_out_mutex }; printf( format, args... ); } template< class Item > class Dequeued_ { private: Item item_; template< class Queue > static auto get_lock( Queue& q ) -> Lock { Lock lock{ q.m_mutex }; while( q.m_queue.size() == 0 ) { q.m_cond.wait( lock ); } return lock; // It's moved. } template< class Raw_queue > Dequeued_( Lock, Raw_queue& raw_q ) : item_{ raw_q.front() } { raw_q.pop_front(); } public: auto item() const -> Item const& { return item_; } auto item() -> Item& { return item_; } template< class Queue > Dequeued_( Queue& q ) : Dequeued_{ get_lock( q ), q.m_queue } {} }; template< class Item > class Queue_ { template< class > friend class Dequeued_; private: std::deque<Item> m_queue; std::condition_variable m_cond; std::mutex m_mutex; public: void push( Item obj ) { { Lock const lock{ m_mutex }; m_queue.push_back( std::move( obj ) ); } m_cond.notify_one(); } auto pop() -> Item = delete; // Use Dequeued_ instead. 
}; using String_queue = Queue_<unsigned>; int const n_consumers = 5; int const N = 1000; void producer( String_queue& queue ) { say( "producer::queue::(%p) - enter\n", (void*)&queue ); for( unsigned i = 0; i < N; ++i ) { queue.push( i + 1 ); std::this_thread::yield(); // just for some spice } for( unsigned i = 0; i < n_consumers; ++i ) { queue.push( 0 ); std::this_thread::yield(); // just for some spice } say( "producer::queue::(%p) - exit\n", (void*)&queue ); } void consumer( unsigned const id, String_queue& queue ) { say( "consumer(%u)::queue::(%p) - enter\n", id, (void*)&queue ); unsigned prev = 0; for (;;) { Dequeued_<unsigned> const msg{ queue }; say( "consumer(%u)::msg::(%u)\n", id, msg.item() ); if( msg.item() == 0 ) break; assert( msg.item() > prev ); // validate fifo nature prev = msg.item(); } say( "consumer::queue::(%p) - exit\n", (void*)&queue ); } auto main() -> int { String_queue queue; std::thread consumers[n_consumers]; for( unsigned i = 0; i < n_consumers; ++i ) { consumers[i] = std::thread( consumer, i, std::ref( queue ) ); } std::thread producer_thread( producer, std::ref(queue) ); producer_thread.join(); for( auto& t : consumers ) { t.join(); } } Cheers!, - Alf |
| "Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 21 03:43AM -0700 Did you mean to post a topic with ",," for the subject? Thank you, Rick C. Hodgin |
| Juha Nieminen <nospam@thanks.invalid>: Mar 21 07:01AM >> > kushal bhattacharya, you misunderstand the purpose of my posts. >> Nobody cares about your posts, you lying hypocrite. > I've asked you before: How am I a hypocrite? And I have told you several times: Because even you yourself don't follow the scriptures you are preaching to others. |
| "Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Mar 21 03:41AM -0700 Juha Nieminen wrote: > And I have told you several times: Because even you yourself > don't follow the scriptures you are preaching to others. Such as ... ? Thank you, Rick C. Hodgin |
| kushal bhattacharya <bhattacharya.kushal4@gmail.com>: Mar 20 09:56PM -0700 On Tuesday, March 21, 2017 at 8:17:34 AM UTC+5:30, Alf P. Steinbach wrote: > } > Cheers!, > - Alf Hi, I have one question regarding the implementation. Are you taking the lock when you are inquiring about the size of the queue in the while loop? |
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 06:24AM +0100 On 21-Mar-17 5:56 AM, kushal bhattacharya wrote: > hi, > I have onw question regarding the implementation. > Are you putting the lock when you are enquiring for size of queue in the while loop ? Yes. Here's that code: Lock lock{ q.m_mutex }; while( q.m_queue.size() == 0 ) { q.m_cond.wait( lock ); } return lock; // It's moved. The declaration of lock acquires the mutex; it's now locked. Next, the while loop condition, checking the queue size, is evaluated, with the mutex acquired. Only one thread can have it at a time, and right now it's this thread. The `q.m_cond.wait( lock )` statement then releases the mutex, opening up for other threads to access the queue. In particular, to push items onto it. When that `wait` returns it acquires the mutex again. And so in the next iteration of the loop, if there is one, the mutex is again held by this thread when the queue size is checked, and so on, and on. Finally the `return` /moves/ the lock back up to the caller of this function. The `std::unique_lock` guarantees its move semantics, that the locking is actually moved. You can find documentation of these details at <url: http://cppreference.com>. Cheers!, - Alf |
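As a footnote to the explanation above: the wait loop can also be written with the predicate overload of condition_variable::wait, which performs exactly that while loop internally, re-checking the condition under the re-acquired mutex after every (possibly spurious) wakeup. A small self-contained sketch, with illustrative names:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

std::mutex m;
std::condition_variable cv;
std::deque<int> q;

int pop_one() {
    std::unique_lock<std::mutex> lock(m);
    // Equivalent to: while (q.empty()) cv.wait(lock);
    // The predicate is always evaluated with the mutex held.
    cv.wait(lock, []{ return !q.empty(); });
    int v = q.front();
    q.pop_front();
    return v;
}
```

The predicate form is harder to get wrong, since forgetting the loop (and thus mishandling spurious wakeups) becomes impossible.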
| "Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Mar 21 12:59AM +0100 On 20-Mar-17 9:13 PM, Christopher Pisz wrote: > Is there still no mechanism to handle enums in derived classes even > in C++11, 14, and 17? An enum is a value type, and that means that the IS-A relationship for an extension goes the opposite way of ordinary class inheritance. For example, if enums could be used interchangeably then you would have that enum{a, b, c} IS-A enum{a, b, c, d, e}, because any value of the former is a valid value of the latter. In much the same way rational numbers extend the set of values of integers, and you just wouldn't derive class Rational from class Integer -- an Integer always converts to an exactly corresponding Rational and in that sense Integer IS-A Rational, but it's not the case that every Rational IS-A Integer. To do this properly one needs a kind of /extension/ feature, not an inheritance feature. I believe this was first discussed by M.I.T. professor Barbara Liskov, in the same work where she formulated her famous substitution principle. It should not be difficult to see why the two things relate. ;-) > class Derived : public Base { enum Color {Yellow}; }; > where we want derived to have Colors: RED, BLUE, GREEN, and YELLOW > Is the way to do this still to use static const values instead? I'm not aware of any good solution, sorry. The programming languages that I'm familiar with do not support the notion. Cheers!, - Alf |
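Since the original question mentioned static const values: one conventional workaround (just a workaround, not real enum extension) is to keep the constants in structs, so a "derived" palette can continue the numbering of a "base" one. Note the relationship runs the way Alf describes: every Base_colors value is a valid Derived_colors value, not vice versa. The names here are hypothetical:

```cpp
// Integral constants in classes as a stand-in for enum "inheritance".
struct Base_colors {
    using Value = int;
    static constexpr Value red   = 0;
    static constexpr Value blue  = 1;
    static constexpr Value green = 2;
    static constexpr Value count = 3;   // one past the last value
};

struct Derived_colors : Base_colors {
    static constexpr Value yellow = Base_colors::count; // continue numbering
    static constexpr Value count  = yellow + 1;         // shadows the base count
};
```

What this cannot give you is type safety: everything is an int, so nothing prevents passing a Derived_colors value where only a Base_colors value makes sense, which is precisely the hole a true extension feature would close.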
| Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Mar 21 01:20AM On Mon, 20 Mar 2017 13:47:02 -0700 (PDT) > P.S do you see my email address every time I post to this newsgroup > via crappy Google Groups? I see it when I hit reply. The thought of > the incoming spam is scarey. Yes I do. But the google mail spam filters are pretty good. |
| legalize+jeeves@mail.xmission.com (Richard): Mar 21 04:55AM [Please do not mail me a copy of your followup] I just don't see the utility in adding virtual function call overhead to enums. As Alf pointed out, enums are basically just fancy named integral types. If you want polymorphism and exceptions during conversion, then you can write your own classes to do that and have them be constructed from fancy named integral values. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 05:01PM -0700 Here is an example of using a SPMC (single producer/multi consumer) queue. It blocks using spin wait. However, this can be avoided through the use of eventcounts, or even fast semaphores. The test will complete, give it a minute or two, it does 10000000 iterations with 7 consumer threads. The entire example code can be found here: http://pastebin.com/raw/6nG6t422 Can you run this shi%? Any thoughts, questions? ;^o ____________________________________ #include <cstdio> #include <deque> #include <condition_variable> #include <mutex> #include <memory> #include <thread> #include <atomic> #include <algorithm> #include <cassert> #define mb_relaxed std::memory_order_relaxed #define mb_consume std::memory_order_consume #define mb_acquire std::memory_order_acquire #define mb_release std::memory_order_release #define mb_acq_rel std::memory_order_acq_rel #define mb_seq_cst std::memory_order_seq_cst #define mb_fence(mb) std::atomic_thread_fence(mb) // Just a check... static std::atomic<unsigned long> g_alloc_count(0); // simple single producer/multi-consumer queue struct node { node* m_next; node() : m_next(nullptr) {} // tidy... 
}; struct spmc_queue { std::atomic<node*> m_head; spmc_queue() : m_head(nullptr) {} // push a single node void push(node* const n) { node* head = m_head.load(mb_relaxed); for (;;) { n->m_next = head; mb_fence(mb_release); if (m_head.compare_exchange_weak(head, n, mb_relaxed)) { break; } } } // try to flush all of our nodes node* flush(node* const nhead = nullptr) { node* n = m_head.exchange(nhead, mb_relaxed); if (n) { mb_fence(mb_acquire); } return n; } }; // Spin-Wait, Blocking Adapt Function node* spmc_queue_spin_lock_flush( spmc_queue& queue ) { node* n = nullptr; for (;;) { n = queue.flush(); if (n) break; std::this_thread::yield(); } return n; } #define CONSUMERS 7 #define N 10000000 struct user_data : public node { int m_foo; user_data(int foo) : m_foo(foo) {} }; void producer_thread( unsigned int id, std::mutex& std_out_mutex, spmc_queue& queue ){ { std::unique_lock<std::mutex> lock(std_out_mutex); std::printf("producer(%u)::queue(%p) - Entry\n", id, (void*)&queue); } for (unsigned int i = 0; i < N; ++i) { user_data* ud = new user_data(i + 1); // allocate memory g_alloc_count.fetch_add(1, mb_relaxed); queue.push(ud); if (! ((i + 1) % 1003)) { std::unique_lock<std::mutex> lock(std_out_mutex); std::printf("producer(%u)::queue(%p) - produced(%u)\n", id, (void*)&queue, i + 1); } } for (unsigned int i = 0; i < CONSUMERS; ++i) { user_data* ud = new user_data(0); // allocate memory g_alloc_count.fetch_add(1, mb_relaxed); queue.push(ud); } { std::unique_lock<std::mutex> lock(std_out_mutex); std::printf("producer(%u)::queue(%p) - Exit\n", id, (void*)&queue); } } void consumer_thread( unsigned int id, std::mutex& std_out_mutex, spmc_queue& queue ){ { std::unique_lock<std::mutex> lock(std_out_mutex); std::printf("consumer(%u)::queue(%p) - Entry\n", id, (void*)&queue); } { for (unsigned long i = 0 ;; ++i) { // Wait for something... user_data* ud = (user_data*)spmc_queue_spin_lock_flush(queue); assert(ud); // make sure we have something! 
int counter = 0; while (ud) { node* ud_next = ud->m_next; unsigned int foo = ud->m_foo; delete ud; // reclaim memory g_alloc_count.fetch_sub(1, mb_relaxed); if (foo == 0) { // We have received a "stop" signal counter++; } if (!((i + 1) % 1003)) { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer(%u)::queue(%p) - consumed(foo:%u)\n", id, (void*)&queue, foo); } ud = (user_data*)ud_next; } std::this_thread::yield(); // just for spice... while (counter > 1) { // Replay all of the excess stop signals user_data* ud = new user_data(0); // allocate memory g_alloc_count.fetch_add(1, mb_relaxed); queue.push(ud); --counter; { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer(%u)::queue(%p) - replay(%u) *****************\n", id, (void*)&queue, counter); } } if (counter == 1) { // We are fin! break; } } } { std::unique_lock<std::mutex> lock(std_out_mtx); std::printf("consumer(%u)::queue(%p) - Exit\n", id, (void*)&queue); } } int main(void) { { spmc_queue queue; std::thread consumers[CONSUMERS]; std::mutex std_out_mutex; for (unsigned int i = 0; i < CONSUMERS; ++i) { consumers[i] = std::thread( consumer_thread, i + 1, std::ref(std_out_mutex), std::ref(queue) ); } std::thread producer( producer_thread, 0, std::ref(std_out_mutex), std::ref(queue) ); producer.join(); for (unsigned int i = 0; i < CONSUMERS; ++i) { consumers[i].join(); } } std::printf("g_alloc_count:(%lu)\n", g_alloc_count.load(mb_relaxed)); assert(g_alloc_count.load(mb_relaxed) == 0); std::printf("\nComplete, hit <ENTER> to exit...\n"); std::fflush(stdout); std::getchar(); return 0; } ____________________________________ There it is, the whole pile. ;^) It should compile! |
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 05:11PM -0700 On 3/20/2017 5:01 PM, Chris M. Thomasson wrote: > // try to flush all of our nodes > node* flush(node* const nhead = nullptr) > { I forgot to comment here that if nhead is not a nullptr, the calling code should ensure release membar semantics. Something like: if (nhead) mb_fence(mb_release); **** before the atomic exchange **** Not sure if this coded precaution is actually needed right here, but the damn comment sure as hell is! The test does not make use of the nhead != nullptr condition... > return n; > } > }; [...] |
| "Chris M. Thomasson" <invalid@invalid.invalid>: Mar 20 09:10PM -0700 On 3/20/2017 5:01 PM, Chris M. Thomasson wrote: > Here is an example of using a SPMC (single producer/multi consumer) > queue. It blocks using spin wait. However, this can be avoided through > the use of eventcounts, or even fast semaphores. Funny thing is that the code is totally safe for multiple producers. The actual test is single producer. lol. I mixed up some of my comments and names when I was getting the code ready. Sorry. Also, the nature of the sync is LIFO. This sync device returns all items at once in the flush function, and they are arranged as a stack, not a queue. One point: one can reverse the singly linked list returned from flush to get FIFO... Again, sorry for any confusion. Ahhh, I should have kept all of this secret and waited for somebody else to point it out! ;^o Fwiw, it's been a little while since I have coded these sync primitives as individual little test programs. I have an older private library of them, still using custom asm code, but the lib is not composed of little example programs. |
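The FIFO remark can be made concrete: flush() hands back the chain in LIFO order, and a single in-place reversal restores arrival order before the consumer walks it. A sketch with a simplified node carrying an int payload (the payload field and constructor are mine; the real code derives user_data from node):

```cpp
#include <cstddef>

// Simplified singly linked node, standing in for the spmc_queue node.
struct node {
    node* m_next;
    int m_value;  // demo payload; the real code keeps payload in user_data
    explicit node(int v) : m_next(nullptr), m_value(v) {}
};

// Reverse a LIFO chain returned by flush() into FIFO order, O(n), in place.
node* reverse_chain(node* head) {
    node* reversed = nullptr;
    while (head) {
        node* next = head->m_next;
        head->m_next = reversed;  // relink this node onto the new front
        reversed = head;
        head = next;
    }
    return reversed;
}
```

A consumer would call this once per flush, then process the reversed chain front to back; per-flush ordering becomes FIFO, though interleaving across separate flushes by different consumers is still unordered.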
| You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |