- More about the scalability... - 1 Update
- My new Scalable Distributed Reader-Writer Mutex was updated to version 1.34 - 1 Update
Ramine <ramine@1.1>: Oct 22 08:33PM -0700

Hello,

I was simulating the scalability of my new Scalable Distributed Reader-Writer Mutex version 1.34 on an x86 quad-core, and I noticed that it can scale up to 40X on 40 cores with a reader section of ~10 CPU cycles and with 0.1% to 0.2% writers. So on average it can scale up to 40X on 40 cores with 0.1% to 0.2% writers; if you want to scale much more than that, you have to use my scalable SeqlockX.

If you ask me why it cannot scale beyond 40X: the answer is on the writer side. A writer has to acquire the writer side of each RWLock of the Scalable Distributed Reader-Writer Mutex, transferring cache lines for every one of them, so the serial part in Amdahl's law grows as you add more and more threads on more and more cores. By Amdahl's law the speedup on N cores is bounded by 1/(s + (1-s)/N) for a serial fraction s, and here s itself grows with N; you cannot avoid this weakness, and it limits the scalability to an average of 40X.

You can download the new updated version 1.34 from:

https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex

Thank you for your time.

Amine Moulay Ramdane.
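[Editor's note: to make the design concrete, here is a minimal C++ sketch of the distributed reader-writer-lock pattern the post describes. It is an illustration under assumptions, not Amine's actual library (which is at the URL above); the class name DistributedRWMutex, the slot count, and the thread-id hashing scheme are hypothetical choices made for this sketch. Each reader locks only its own cache-line-padded rwlock; a writer must sweep all of them.]

#include <functional>
#include <shared_mutex>
#include <thread>
#include <vector>

class DistributedRWMutex {
    // One reader-writer lock per slot, padded to its own cache line so
    // that readers on different cores never share (and bounce) a line.
    struct alignas(64) Slot { std::shared_mutex m; };
    std::vector<Slot> slots_;
public:
    explicit DistributedRWMutex(
        unsigned n = std::thread::hardware_concurrency())
        : slots_(n ? n : 1) {}

    // Each reader hashes its thread id to one slot and locks only that
    // slot: the fast path that lets the read side scale.
    unsigned reader_slot() const {
        return std::hash<std::thread::id>{}(
                   std::this_thread::get_id()) % slots_.size();
    }
    void lock_shared(unsigned s)   { slots_[s].m.lock_shared(); }
    void unlock_shared(unsigned s) { slots_[s].m.unlock_shared(); }

    // A writer must acquire every slot (always in the same order, so
    // two writers cannot deadlock with each other), transferring at
    // least one cache line per slot. This O(slot count) sweep is the
    // serial part that grows with the core count, as the post explains.
    void lock()   { for (auto& s : slots_) s.m.lock(); }
    void unlock() {
        for (auto it = slots_.rbegin(); it != slots_.rend(); ++it)
            it->m.unlock();
    }
};

A reader would call s = rw.reader_slot(); rw.lock_shared(s); ... rw.unlock_shared(s), remembering s for the unlock. The read path touches only one private cache line, while the writer's sweep over all slots is exactly the growing Amdahl serial term that, per the post's measurements, caps the speedup around 40X.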
Ramine <ramine@1.1>: Oct 22 08:15PM -0700

Hello,

My new Scalable Distributed Reader-Writer Mutex was updated to version 1.34. You can download the new updated version 1.34 from:

https://sites.google.com/site/aminer68/scalable-distributed-reader-writer-mutex

Thank you,

Amine Moulay Ramdane.