- performance of wait notify concept of condition variable - 17 Updates
- Creating mutex pointer in C++ - 1 Update
"Chris M. Thomasson" <invalid@invalid.invalid>: Apr 30 05:15PM -0700 On 4/30/2017 5:43 AM, Bonita Montero wrote: > On my 3,6GHz Ryzen 1800X the multithreaded code runs with > about 7% CPU-load. And the singlethreaded code is about 6%. > So there's not a big difference. Bugs aside for a moment, I am wondering why you are creating an event in the IOCP code? This is not required at all. |
"Chris M. Thomasson" <invalid@invalid.invalid>: Apr 30 06:01PM -0700 On 4/30/2017 6:41 AM, Bonita Montero wrote: >> that the mutex _only_ protects the condition variable is faulty. > I didn't assume this. I only said, that the mutex is part of a cv. > That doesn't exclude the rest. Have you ever pounded a producer/consumer mutex protected cv based queue with 500 active threads? |
"Chris M. Thomasson" <invalid@invalid.invalid>: Apr 30 06:07PM -0700 On 4/29/2017 11:18 PM, kushal bhattacharya wrote: >> more and more frequent due to more requests and/or new connections, more >> users. 500 threads can spike to 2000+ threads. Just brainstorming here. > I couldnt really get how can 500 threads suddenly spike to 2000 threads Well, in a thread per-connection model thinking in the general sense wrt designing a server, say we start with 50 users. Then all of a sudden there are 250 because your a plurality of your friends told others about the good thing. Then, you check the active server load and notice that there is now 500 active users! Next month, there may be 1500+ active users. Imho, this is a _very_ important aspect to think about when designing a server. |
"Chris M. Thomasson" <invalid@invalid.invalid>: Apr 30 06:25PM -0700 On 4/30/2017 5:15 PM, Chris M. Thomasson wrote: >> So there's not a big difference. > Bugs aside for a moment, I am wondering why you are creating an event in > the IOCP code? This is not required at all. You need to associate the file with the IOCP, then issue overlapped requests. You have a phantom overlapped io before the attachment to the iocp occurs, and to make things worse, you have a totally unneeded event object hooked up to it. |
Bonita Montero <Bonita.Montero@gmail.com>: May 01 07:56AM +0200 > Bugs aside for a moment, I am wondering why you are creating > an event in the IOCP code? This is not required at all. This happens only once, where I'm creating the file. An IOCP is not necessary at this place, and waiting for an event is sufficient because there is only one I/O event to wait for. |
Bonita Montero <Bonita.Montero@gmail.com>: May 01 08:03AM +0200 > You have a phantom overlapped I/O before the attachment to > the IOCP occurs, and to make things worse, you have a totally > unneeded event object hooked up to it. Should I use IOCPs for a _single_ I/O request to enlarge the file? |
scott@slp53.sl.home (Scott Lurndal): May 01 12:23PM >> Please learn to quote properly with attribution. Your assumption >> that the mutex _only_ protects the condition variable is faulty. >I didn't assume this. I only said, that the mutex is part of a cv. But it is not. It is a separate entity used to protect the predicate _for_ the condition variable. |
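[A minimal sketch of the point being made: the mutex guards the shared predicate that the condition variable waits on, and the two are separate objects; the ready, wait_for_ready and set_ready names are illustrative, not from the thread.]
_____________________________
#include <condition_variable>
#include <mutex>

std::mutex mtx;                 // protects the predicate below, not the cv itself
std::condition_variable cond;   // separate entity, used together with mtx
bool ready = false;             // the predicate guarded by mtx

void wait_for_ready() {
    std::unique_lock<std::mutex> lock(mtx);
    // wait() atomically releases mtx while blocked and re-checks the
    // predicate with mtx re-acquired
    cond.wait(lock, [] { return ready; });
}

void set_ready() {
    {
        std::lock_guard<std::mutex> lock(mtx);
        ready = true;
    }
    cond.notify_all();
}
_____________________________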
Bonita Montero <Bonita.Montero@gmail.com>: May 01 02:32PM +0200 > But it is not. It is a separate entity used to protect the > predicate _for_ the condition variable. Your pettifogging is silly. |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 07:49AM -0700 On 4/30/2017 10:56 PM, Bonita Montero wrote: > This is only once where I'm creating the file. An IOCP is not > necessary at this place and waiting for an event ist sufficient > because there is only one I/O-event to wait for. Afaict, you are not waiting on that event to complete before you bind the file the IOCP. So, you do not know if the write was even finished before you start issuing reads. It would better to bind with IOCP; issue the write; wait for a completion, then issue the reads. |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 07:55AM -0700 On 4/30/2017 11:03 PM, Bonita Montero wrote: >> the iocp occurs, and to make things worse, you have a totally >> unneeded event object hooked up to it. > Should I use IOCPs for a _single_ I/O-request to enlargen the file? Yes. Just bind to IOCP; write; wait on GQCS, then issue the reads. By the way, you don't even wait on that unnecessary event for the first write to complete before you bind it. You also fail to do this in the threaded version. Bad mojo. It basically creates a race condition. Also, can you please add proper attribution to your posts? |
Bonita Montero <Bonita.Montero@gmail.com>: May 01 05:04PM +0200 > Yes. Just bind to IOCP; write; wait on GQCS, then issue the reads. It's useless to use an IOCP when enlarging the file. Waiting for an event is sufficient and easier when there is only one I/O request. > By the way, you don't even wait on that unnecessary event for > the first write to complete before you bind it. Ok, that's a mistake. |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 08:13AM -0700 On 5/1/2017 8:04 AM, Bonita Montero wrote: >> Yes. Just bind to IOCP; write; wait on GQCS, then issue the reads. > It's useless to use an IOCP when enlargen the file. Waiting for an > event is sufficient an easier when there is only one I/O-request. Imho, its more complicated to use an event. You also forget to destroy that event. Your way: 1: create event 2: issue write 3: wait on event 4: destroy event 5: bind iocp 6: issue reads the other way: 1: bind iocp 2: issue write 3: wait on gqcs 4: issue reads >> By the way, you don't even wait on that unnecessary event for >> the first write to complete before you bind it. > Ok, that's a mistake. Yes. It creates a race-condition. |
Bonita Montero <Bonita.Montero@gmail.com>: May 01 05:29PM +0200 > Imho, it's more complicated to use an event. > You also forget to destroy that event. I didn't forget to destroy anything because this is only a simple test-app. > 2: issue write > 3: wait on gqcs > 4: issue reads 5: get gqcs for reads That's a matter of taste. > Yes. It creates a race condition. Obviously the reads don't fail, so it seems that there is an implementation-defined behaviour. The write becomes visible to any following reads without being physically complete on disk. |
Ian Collins <ian-news@hotmail.com>: May 02 07:39AM +1200 On 05/ 2/17 12:32 AM, Bonita Montero wrote: >> But it is not. It is a separate entity used to protect the >> predicate _for_ the condition variable. > Your pettifogging is silly. Your lack of attributions is rude. -- Ian |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 02:12PM -0700 On 5/1/2017 5:23 AM, Scott Lurndal wrote: >> I didn't assume this. I only said, that the mutex is part of a cv. > But it is not. It is a separate entity used to protect the > predictate _for_ the condition variable. Exactly correct. :^) |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 02:39PM -0700 On 5/1/2017 12:39 PM, Ian Collins wrote: >>> predictate _for_ the condition variable. >> Your pettifogging is silly. > Your lack of attributions is rude. It destroys the integrity of the chain. |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 03:37PM -0700 On 5/1/2017 8:29 AM, Bonita Montero wrote: > Obvoiously the reads don't fail, so it seems that there is an > implementation-defined behaviour. The write becomes visible to > any following reads whithout being physically complete on disk. Its a race-condition. Don't worry too much about it and thank you for creating the code. Fwiw, I just had to correct a mistake I made wrt giving the dimensions of a bitmap. I said 1920x1080, actually it is 960x540. Damn it to heck! Here is my correction: https://groups.google.com/d/msg/sci.crypt/xytM7aFRfjQ/elUM9F94AAAJ Just try to give some proper attributions in future posts. Heck, even a CT would be nice if you quote me. :^) |
"Chris M. Thomasson" <invalid@invalid.invalid>: May 01 08:25AM -0700 On 4/29/2017 12:50 AM, kushal bhattacharya wrote: > best method to do so so that i could use it as a normal mutex to pass > it to like lock_guard or to condition_variable's wait.Is it really > possible ? Not exactly sure what you mean. However, here is a quick example: _____________________________ std::mutex mtx; std::mutex* ptr = &mtx; { std::unique_lock<std::mutex> lock(*ptr); } _____________________________ |