- cmsg cancel <ngnos0$g12$2@dont-email.me> - 8 Updates
- About my new scalable synchronization algorithm - 1 Update
- A new scalable synchronization algorithm is coming... - 2 Updates
- Read this about Dmitry Vyukov lockfree concurrent Skiplist - 1 Update
- We have to be smart... - 2 Updates
- My C++ synchronization objects library was updated.. - 1 Update
- My C++ synchronization objects library was extended.. - 1 Update
bleachbot <bleachbot@httrack.com>: May 08 06:21PM +0200
bleachbot <bleachbot@httrack.com>: May 08 09:18PM +0200
bleachbot <bleachbot@httrack.com>: May 08 10:43PM +0200
bleachbot <bleachbot@httrack.com>: May 08 10:46PM +0200
bleachbot <bleachbot@httrack.com>: May 09 01:14AM +0200
bleachbot <bleachbot@httrack.com>: May 09 03:15AM +0200
bleachbot <bleachbot@httrack.com>: May 09 03:38AM +0200
bleachbot <bleachbot@httrack.com>: May 09 03:56AM +0200
Ramine <ramine@1.1>: May 08 09:57PM -0700

Hello...

About my new scalable synchronization algorithm that I told you about just before... I will implement it with the FreePascal and Delphi compilers, and use this implementation as a dynamic link library on the C++ side, as I have done with my C++ synchronization objects library. You have to understand me, Sir and Madam: since C++ uses a weak memory model, even on x86, this is error prone. The FreePascal and Delphi compilers don't reorder loads and stores on x86, which makes reasoning about sequential consistency easier. This is why I have done it this way in C++: a dynamic link library compiled with the FreePascal compiler contains the implementation of my algorithms. This is much safer than the C++ way.

Thank you,
Amine Moulay Ramdane.
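To make the memory-model point concrete, here is a minimal message-passing sketch (my own illustration, not the author's library code): in standard C++, plain shared variables give the compiler, and on architectures weaker than x86 the CPU too, license to reorder the two stores, while `std::atomic` with the default `memory_order_seq_cst` forbids that and yields sequential consistency, which is the property the post says the Pascal compilers preserve for ordinary loads and stores.

```cpp
#include <atomic>
#include <thread>

// Message passing between two threads. With non-atomic variables this
// would be a data race, and the publication of 'ready' could be seen
// before the write to 'payload'. With seq_cst atomics (the default),
// the stores cannot be reordered and the reader must observe 42.
int run_message_passing() {
    std::atomic<int> payload{0};
    std::atomic<bool> ready{false};
    int result = 0;

    std::thread producer([&] {
        payload.store(42);  // seq_cst: cannot be reordered after...
        ready.store(true);  // ...this publication of the flag
    });
    std::thread consumer([&] {
        while (!ready.load()) { }  // spin until the flag is published
        result = payload.load();   // guaranteed to observe 42
    });
    producer.join();
    consumer.join();
    return result;
}
```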
Ramine <ramine@1.1>: May 08 09:19PM -0700

Hello,

A new scalable synchronization algorithm is coming...

I was thinking more and more, and I have come to the conclusion that Seqlock and my SeqlockX are not suitable for realtime critical systems, because they are lock-free and thus prone to starvation. So right now I am finishing a new algorithm and implementing it in Object Pascal and C++. This new algorithm is a scalable reader-writer mutex that is costless on the reader side, as RCU is, and on the writer side it uses a distributed technique, like the distributed reader-writer mutex. This algorithm is FIFO fair on both the writer side and the reader side, and it is of course starvation-free, so it is suitable for realtime critical systems. The simplicity of use of this algorithm's implementation makes it more suitable than the much harder approach of RCU.

So stay tuned; my new algorithm and its implementation in C++ and Object Pascal are coming soon...

Thank you,
Amine Moulay Ramdane.
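For readers unfamiliar with the "distributed" idea mentioned above, here is a simplified sketch of a distributed reader-writer lock; this is my own illustration of the general technique (per-slot reader counters so that readers mostly touch their own cache line), not the author's algorithm, which additionally promises FIFO fairness and a costless reader fast path that this sketch does not provide.

```cpp
#include <atomic>
#include <array>
#include <functional>
#include <thread>

// Distributed reader-writer lock sketch: readers increment a counter in
// one of several cache-line-sized slots (chosen by thread id), so reader
// traffic is spread out instead of contending on one shared counter.
// A writer takes a single flag, then waits for every slot to drain.
constexpr int kSlots = 8;

struct alignas(64) Slot { std::atomic<int> readers{0}; };

class DistributedRWLock {
    std::array<Slot, kSlots> slots_;
    std::atomic<bool> writer_{false};

    static int my_slot() {
        // Hash the thread id onto a slot so readers rarely share a line.
        return static_cast<int>(
            std::hash<std::thread::id>{}(std::this_thread::get_id()) % kSlots);
    }

public:
    void lock_shared() {
        int s = my_slot();
        for (;;) {
            slots_[s].readers.fetch_add(1);
            if (!writer_.load()) return;      // no writer: reader is in
            slots_[s].readers.fetch_sub(1);   // back off while a writer runs
            while (writer_.load()) std::this_thread::yield();
        }
    }
    void unlock_shared() { slots_[my_slot()].readers.fetch_sub(1); }

    void lock() {
        bool expected = false;                // one writer at a time
        while (!writer_.compare_exchange_weak(expected, true)) {
            expected = false;
            std::this_thread::yield();
        }
        for (auto& s : slots_)                // wait for all readers to drain
            while (s.readers.load() != 0) std::this_thread::yield();
    }
    void unlock() { writer_.store(false); }
};
```

Note the trade-off this sketch makes explicit: the reader fast path is one atomic increment on a mostly-private cache line, while the writer pays a scan over all slots; that matches the usual read-mostly workload these locks target.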
Ramine <ramine@1.1>: May 08 09:38PM -0700

Hello,

Look at this asymmetric rw_mutex with an atomic-free fast path for readers, by Dmitry Vyukov: https://groups.google.com/forum/#!topic/lock-free/Hv3GUlccYTc

My new algorithm and its implementation, which are coming soon, don't use epoch-detection logic; the algorithm uses a distributed technique, it will be suitable for realtime critical systems, and the simplicity of use of its implementation makes it more suitable than the much harder approach of RCU.

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: May 08 07:15PM -0700

Hello...

Read this about Dmitry Vyukov's lock-free concurrent skiplist: http://www.1024cores.net/home/parallel-computing/concurrent-skip-list

I think this lock-free algorithm is bad for realtime critical systems, because there is a loop around a CAS on the writer side, which makes the writer side not starvation-free. So it is not suitable for realtime critical systems. We can generalize this and say that lock-free algorithms are not suitable for realtime critical systems. This is why locks and FIFO fairness are useful in realtime critical systems.

Thank you,
Amine Moulay Ramdane.
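The "loop around a CAS" pattern the post objects to looks like this generic sketch (my own illustration, not code from the skiplist): a failed CAS means some other thread won that round, and nothing bounds how many rounds one particular thread can lose. The system as a whole makes progress (that is what lock-freedom guarantees), but an individual writer has no worst-case latency bound, which is the starvation argument against using such algorithms in hard-realtime code.

```cpp
#include <atomic>

// Canonical lock-free update: retry a CAS until it wins. Under
// contention the retry count for any one thread is unbounded, even
// though *some* thread always succeeds on each round.
long lockfree_increment(std::atomic<long>& c) {
    long old = c.load();
    // compare_exchange_weak refreshes 'old' with the current value on
    // failure, so the loop body is empty; we just try again.
    while (!c.compare_exchange_weak(old, old + 1)) { }
    return old + 1;
}
```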
Ramine <ramine@1.1>: May 08 04:44PM -0700

Hello Sir and Madam,

We have to be smart... My scalable DRWLock and my scalable DRWLockX from my C++ synchronization objects library are scalable, but they have a cost of 10 cycles on the reader side. If you are reading memory or reading from disk, this cost will be amortized a lot; but if your reader section is smaller than 10 cycles, please use my scalable SeqlockX, which is very efficient and costless on the reader side. My scalable SeqlockX is a variant of Seqlock that eliminates Seqlock's weakness: livelock of the readers when there are many writers.

So, as you have noticed, my C++ synchronization objects library is powerful and great. You can download it from: https://sites.google.com/site/aminer68/c-synchronization-objects-library

Thank you,
Amine Moulay Ramdane.
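For context on why a seqlock's reader side is nearly free, here is a minimal sketch of the classic technique (the general pattern, not the author's SeqlockX): the reader takes no lock and writes nothing shared, it only re-reads when the sequence number changed. It also exhibits the classic weakness the post says SeqlockX eliminates: under constant writing, readers can retry indefinitely.

```cpp
#include <atomic>
#include <cstdint>

// Classic seqlock over a two-word payload. The writer bumps the
// sequence number to odd before writing and back to even after; the
// reader retries whenever it sees an odd number or a number that
// changed across its reads. Payload words are atomic with relaxed
// ordering to keep the sketch free of undefined behavior in C++.
struct SeqLockedPair {
    std::atomic<uint64_t> seq{0};
    std::atomic<uint64_t> a{0}, b{0};

    void write(uint64_t x, uint64_t y) {  // single writer assumed
        seq.fetch_add(1);                 // odd: write in progress
        a.store(x, std::memory_order_relaxed);
        b.store(y, std::memory_order_relaxed);
        seq.fetch_add(1);                 // even again: write complete
    }

    void read(uint64_t& x, uint64_t& y) const {
        for (;;) {
            uint64_t s0 = seq.load();
            if (s0 & 1) continue;         // writer active, retry
            x = a.load(std::memory_order_relaxed);
            y = b.load(std::memory_order_relaxed);
            if (seq.load() == s0) return; // no writer interleaved: done
        }
    }
};
```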
Ramine <ramine@1.1>: May 08 04:47PM -0700

On 5/8/2016 4:44 PM, Ramine wrote:
> My scalable DRWLock and my scalable DRWLockX of my
> C++ synchronization objects library are scalable
> but they have a cost of 10 cycles on the reader side, so

I mean 10 CPU cycles.
Ramine <ramine@1.1>: May 08 03:21PM -0700

Hello,

My C++ synchronization objects library was updated..

I have just corrected a minor bug in my scalable DRWLock and scalable DRWLockX and updated them to version 1.48. I have tested my library thoroughly, and now I think it is more stable and fast.

You can download my C++ synchronization objects library from: https://sites.google.com/site/aminer68/c-synchronization-objects-library

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: May 08 12:24PM -0700

Hello,

My C++ synchronization objects library was extended..

I have just included a threadpool based on Pthreads in my C++ synchronization objects library..

You can download my C++ synchronization objects library from: https://sites.google.com/site/aminer68/c-synchronization-objects-library

Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it, send an email to comp.programming.threads+unsubscribe@googlegroups.com.