- I have benchmarked my new algorithm - 1 Update
- Read this, it is important - 1 Update
- To be more precise, read more... - 1 Update
- I correct, please read again... - 1 Update
- Look at this scalable Asymmetric rw_mutex from Dmitry Vyukov - 1 Update
- A memory barrier is needed - 1 Update
Ramine <ramine@1.1>: May 11 08:18AM -0700

Hello,

I have benchmarked my new algorithm, the scalable Asym_DRWLock (a scalable Asymmetric Distributed reader-writer mutex), against the scalable DRWLock (a scalable Distributed Reader-Writer Mutex), and the scalable Asym_DRWLock is 1.60 times faster than the scalable DRWLock.

I hope you will like my C++ synchronization objects library. You can download it from:

https://sites.google.com/site/aminer68/c-synchronization-objects-library

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: May 11 07:15AM -0700

Hello,

I hope you have read my previous post. Now look at the source code of my SeqlockX algorithm here:

https://sites.google.com/site/aminer68/scalable-seqlockx

On the reader side of my SeqlockX algorithm there is only a load, like this:

myid2:=FCount4^.fcount4;

This is why it works: on x86, later loads and stores are not reordered before this load, so my SeqlockX does not need any atomics or fences on the reader side.

I think that is not possible with the Asymmetric rw_mutex from Dmitry Vyukov, as I have explained before:

https://groups.google.com/forum/#!topic/lock-free/Hv3GUlccYTc

And I think it is not possible with my scalable Asymmetric Distributed reader-writer mutex either, because it needs an x86 fence on the reader side of the critical section.

So if you need a costless synchronization mechanism on the reader side that also eliminates livelock when there are many writers, use my SeqlockX implementation of my algorithm.

I hope you have understood what I mean.

You can download my updated C++ synchronization objects library from:

https://sites.google.com/site/aminer68/c-synchronization-objects-library

Thank you,
Amine Moulay Ramdane.
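The read path described in the post above is the essence of a seqlock: the reader performs only loads and retries if the sequence counter changed underneath it. Below is a minimal generic seqlock sketch in C++ illustrating that point; it is not the author's SeqlockX source, and the names (SeqlockSketch, seq_, data_) and the single-writer assumption are purely illustrative. On x86 the acquire load and the acquire fence compile to a plain MOV and a compiler-level barrier, so the read side indeed needs no atomic read-modify-write and no hardware fence.

#include <atomic>
#include <cstdint>

struct SeqlockSketch {
    std::atomic<uint64_t> seq_{0};         // even = stable, odd = writer active
    std::atomic<uint64_t> data_[4] = {};   // protected payload (relaxed atomics avoid a formal data race)

    // Reader: only loads, retried until a consistent snapshot is obtained.
    void read(uint64_t out[4]) {
        for (;;) {
            uint64_t s1 = seq_.load(std::memory_order_acquire);   // plain MOV on x86
            if (s1 & 1) continue;                                  // writer in progress, retry
            uint64_t tmp[4];
            for (int i = 0; i < 4; ++i)
                tmp[i] = data_[i].load(std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_acquire);   // no instruction emitted on x86
            if (seq_.load(std::memory_order_relaxed) == s1) {      // unchanged: snapshot is consistent
                for (int i = 0; i < 4; ++i) out[i] = tmp[i];
                return;
            }
        }
    }

    // Writer (a single writer is assumed here for brevity).
    void write(const uint64_t in[4]) {
        uint64_t s = seq_.load(std::memory_order_relaxed);
        seq_.store(s + 1, std::memory_order_relaxed);              // odd: tell readers to retry
        std::atomic_thread_fence(std::memory_order_release);
        for (int i = 0; i < 4; ++i)
            data_[i].store(in[i], std::memory_order_relaxed);
        seq_.store(s + 2, std::memory_order_release);              // even again: publish
    }
};

The usual trade-off is the one the post alludes to: readers never block or slow down writers, but a plain seqlock reader can be forced to retry indefinitely under a heavy stream of writers, which is the livelock the post says SeqlockX addresses.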
Ramine <ramine@1.1>: May 11 09:30AM -0700

Hello,

Look at this scalable Asymmetric rw_mutex from Dmitry Vyukov:

https://groups.google.com/forum/#!topic/lock-free/Hv3GUlccYTc

If you have noticed, he is using the compiler intrinsic "_ReadWriteBarrier()", but this does not emit a fence on x86, it is just a compiler barrier. I am wondering how he can do it this way, because the loads of the reader critical section can be reordered on x86 with the following store:

reader_inside[current_thread_index] = true;

So I think it is a bug; can you please shed some light on this?

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: May 11 09:21AM -0700

Hello,

Look at this scalable Asymmetric rw_mutex from Dmitry Vyukov:

https://groups.google.com/forum/#!topic/lock-free/Hv3GUlccYTc

If you have noticed, he is using the compiler intrinsic "_ReadWriteBarrier()", but this does not emit a fence on x86, it is just a compiler barrier. I am wondering how he can do it this way, because the loads of the reader critical section are reordered with the following store:

reader_inside[current_thread_index] = true;

So I think it is a bug; can you please shed some light on this?

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: May 11 09:19AM -0700

Hello,

Look at this scalable Asymmetric rw_mutex from Dmitry Vyukov:

https://groups.google.com/forum/#!topic/lock-free/Hv3GUlccYTc

If you have noticed, he is using the compiler intrinsic "_ReadWriteBarrier()", but this does not emit a fence on x86, it is just a compiler barrier. I am wondering how he can do it this way, because the loads of the reader critical section are reordered with the following store:

reader_inside[current_thread_index] = true;

So I think it is a bug; can you please shed some light on this?

Thank you,
Amine Moulay Ramdane.
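The concern in the three posts above is the StoreLoad reordering that x86 allows: a later load may complete before an earlier store to a different location becomes globally visible, and a compiler-only barrier such as MSVC's _ReadWriteBarrier() constrains code generation but emits no fence instruction, so by itself it cannot prevent that hardware reordering. Here is a minimal C++ sketch of a reader-side entry of this general shape with the conservative fix (a real fence) marked; the names reader_inside, writer_pending and kMaxThreads are illustrative assumptions and this is not the code from the linked thread.

#include <atomic>

constexpr int kMaxThreads = 64;

// Static storage, so both start out false.
std::atomic<bool> reader_inside[kMaxThreads];
std::atomic<bool> writer_pending{false};

void reader_lock(int current_thread_index) {
    for (;;) {
        // 1. Announce this reader.
        reader_inside[current_thread_index].store(true, std::memory_order_relaxed);

        // 2. StoreLoad barrier.  Without a real fence here, x86 may perform the
        //    load of writer_pending below before the store above is globally
        //    visible, so a reader and a writer can each conclude the other is
        //    absent.  A compiler-only barrier orders the generated code but
        //    emits nothing; a sequentially consistent fence compiles to MFENCE
        //    (or a locked instruction) and closes the window.
        std::atomic_thread_fence(std::memory_order_seq_cst);

        // 3. Enter only if no writer is pending.
        if (!writer_pending.load(std::memory_order_relaxed))
            return;

        // A writer is pending: retract the announcement and wait it out.
        reader_inside[current_thread_index].store(false, std::memory_order_relaxed);
        while (writer_pending.load(std::memory_order_relaxed)) { /* spin */ }
    }
}
// (Writer side, omitted: set writer_pending, issue the same fence, then wait
//  until every reader_inside[i] is false.)

Asymmetric variants of this pattern keep only the compiler barrier on the reader side and instead have the slow writer path force a barrier on every core, for example with FlushProcessWriteBuffers on Windows or the membarrier system call on Linux; presumably that is how the code in the linked thread gets away without a reader-side fence.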
Ramine <ramine@1.1>: May 11 08:29AM -0700

Hello,

I have just looked at the algorithm of my scalable Asymmetric Distributed Reader-Writer mutex, and I think it needs a memory barrier on the reader side, because on x86 the loads inside the reader section can be reordered ahead of the following store:

myid1:=0;
FCount1^[myid1].fcount1:=FCount1^[myid1].fcount1+1;

So now I think you can be more confident.

You can download my new and updated C++ synchronization objects library from:

https://sites.google.com/site/aminer68/c-synchronization-objects-library

and you can download my updated scalable DRWLock from:

https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex

Thank you,
Amine Moulay Ramdane.
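To make the fix concrete, here is a minimal C++ sketch of the reader-entry path of a distributed reader-writer lock with the memory barrier placed where the post says it is needed, right after the per-reader counter increment. It is an illustration under assumptions, not the author's DRWLock or Asym_DRWLock sources: ReaderSlot, slots, writer_active and kNumSlots are invented names, and each slot is assumed to be written only by its owning reader thread.

#include <atomic>
#include <cstdint>

constexpr int kNumSlots = 16;

struct alignas(64) ReaderSlot {            // one cache line per reader slot
    std::atomic<uint64_t> count{0};
};

ReaderSlot slots[kNumSlots];
std::atomic<bool> writer_active{false};

void reader_enter(int myid) {
    for (;;) {
        // Only thread `myid` writes its own slot, so a plain load-then-store
        // increment suffices (no atomic read-modify-write); this mirrors
        // FCount1^[myid1].fcount1 := FCount1^[myid1].fcount1 + 1.
        uint64_t c = slots[myid].count.load(std::memory_order_relaxed);
        slots[myid].count.store(c + 1, std::memory_order_relaxed);

        // The barrier the post says is needed: without it, x86 may hoist the
        // load of writer_active (and the loads inside the reader's critical
        // section) above the store just made, so a writer scanning the slots
        // could miss this reader while this reader misses the writer.
        std::atomic_thread_fence(std::memory_order_seq_cst);

        if (!writer_active.load(std::memory_order_relaxed))
            return;                        // no writer: the reader is in

        // A writer is active: undo the increment and wait for it to finish.
        slots[myid].count.store(c, std::memory_order_relaxed);
        while (writer_active.load(std::memory_order_relaxed)) { /* spin */ }
    }
}

void reader_exit(int myid) {
    uint64_t c = slots[myid].count.load(std::memory_order_relaxed);
    slots[myid].count.store(c - 1, std::memory_order_release);   // keeps the critical section before the exit
}

The "distributed" part is that each reader touches only its own cache-line-padded slot, so readers do not contend with one another; the writer pays for that by scanning every slot after raising writer_active.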