Ramine <ramine@1.1>: Jan 25 07:38PM -0800

Hello,

I have just taken a look at the following scalable reader-writer lock algorithm:

http://concurrencyfreaks.blogspot.ca/search?updated-min=2015-01-01T00:00:00%2B01:00&updated-max=2016-01-01T00:00:00%2B01:00&max-results=7

It uses the same algorithm as the following one by Joe Duffy, an architect at Microsoft:

http://joeduffyblog.com/2009/02/20/a-more-scalable-readerwriter-lock-and-a-bit-less-harsh-consideration-of-the-idea/

I must say that both of them are inefficient, and I will explain why. We have to be careful here: I have just read the above webpages about those "more scalable" reader-writer locks by Joe Duffy and by Pedro Ramalhete, but those reader-writer locks are not really scalable, and I will think as an architect and explain why.

Look inside EnterWriteLock() of the reader-writer lock above. You will notice that it first executes:

  Interlocked.Exchange(ref m_writer, 1)

that is, it atomically sets m_writer to 1 so as to block readers from entering the read section. But right after that it does this:

  for (int i = 0; i < m_readers.Length; i++)
      while (m_readers[i].m_taken != 0)
          sw.SpinOnce();

So after setting m_writer to 1 to block the readers, the writer has to spin on every per-reader flag, transferring many cache lines between cores. This is really expensive, and it makes the serial part of Amdahl's law bigger and bigger as more and more cores are used, so it will not scale.

The Dmitry Vyukov distributed reader-writer mutex does not have this weakness. Look at the source code here:

http://www.1024cores.net/home/lock-free-algorithms/reader-writer-problem/distributed-reader-writer-mutex

On the distr_rw_mutex_wrlock() side he is doing this:

  for (i = 0; i != mtx->proc_count; i += 1)
      pthread_rwlock_wrlock(&mtx->cell[i].mtx);

Notice that, as the counter variable "i" goes from 0 to proc_count, readers are still allowed to enter and re-enter the read section through the cells that have not been locked yet, even under contention. So, in contrast with the reader-writer lock above, this part of the distributed lock does not count purely as a serial part of Amdahl's law: it also lets reader threads keep entering the read section, so it contains a parallel part, and that is what makes this distributed reader-writer lock scale effectively. See the sketch appended after this post.

It is even better with my Distributed sequential lock, because my Distributed sequential lock scales even better than the distributed reader-writer mutex of Dmitry Vyukov.

My explanation also applies to the algorithm of Pedro Ramalhete, because they are both the same algorithm.

You can download my Distributed sequential lock from:

https://sites.google.com/site/aminer68/scalable-distributed-sequential-lock

Hope you have understood my architect way of thinking.

Thank you for your time.

Amine Moulay Ramdane.
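
To make the point about the distributed design concrete, here is a minimal sketch in C with POSIX rwlocks, along the lines of the Vyukov distributed reader-writer mutex quoted above. The names (drw_lock, drw_cell, drw_rdlock, drw_wrlock, DRW_CELLS, drw_cell_of_self) and the way a reader picks its cell with a hash of pthread_self() are my own simplifications for illustration, not the original code; the original indexes one cell per processor (proc_count). A reader takes only its own cell's read lock, while a writer takes every cell's write lock in order, which is exactly the loop quoted in the post.

/* Minimal sketch of a distributed reader-writer lock in the spirit of
   Vyukov's distr_rw_mutex.  Illustrative only: the cell count and the
   per-thread hash used to pick a cell are my own simplifications. */

#include <pthread.h>
#include <stdio.h>

#define DRW_CELLS 8  /* ideally one cell per core */

typedef struct {
    /* each cell padded so cells land on separate cache lines,
       avoiding false sharing between readers on different cores */
    pthread_rwlock_t mtx;
    char pad[64 - sizeof(pthread_rwlock_t) % 64];
} drw_cell;

typedef struct {
    drw_cell cell[DRW_CELLS];
} drw_lock;

static void drw_init(drw_lock *l) {
    for (int i = 0; i < DRW_CELLS; i++)
        pthread_rwlock_init(&l->cell[i].mtx, NULL);
}

/* a reader touches only its own cell, so uncontended readers do not
   bounce a shared cache line between cores (cast of pthread_t works
   on common platforms; a sketch-level simplification) */
static int drw_cell_of_self(void) {
    return (int)((unsigned long)pthread_self() % DRW_CELLS);
}

static void drw_rdlock(drw_lock *l, int cell) {
    pthread_rwlock_rdlock(&l->cell[cell].mtx);
}

static void drw_rdunlock(drw_lock *l, int cell) {
    pthread_rwlock_unlock(&l->cell[cell].mtx);
}

/* a writer acquires every cell in order; readers whose cells have not
   been reached yet can still enter, which is the "parallel part" of
   Amdahl's law discussed in the post */
static void drw_wrlock(drw_lock *l) {
    for (int i = 0; i < DRW_CELLS; i++)
        pthread_rwlock_wrlock(&l->cell[i].mtx);
}

static void drw_wrunlock(drw_lock *l) {
    for (int i = 0; i < DRW_CELLS; i++)
        pthread_rwlock_unlock(&l->cell[i].mtx);
}

int main(void) {
    drw_lock l;
    drw_init(&l);

    int cell = drw_cell_of_self();
    drw_rdlock(&l, cell);
    printf("reader holds cell %d\n", cell);
    drw_rdunlock(&l, cell);

    drw_wrlock(&l);
    printf("writer holds all cells\n");
    drw_wrunlock(&l);
    return 0;
}

Note that all writers acquire the cells in the same fixed order, which avoids deadlock between concurrent writers; the cost is that the write path grows with the number of cells, which is the trade-off the post describes.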
Ramine <ramine@1.1>: Jan 25 04:35PM -0800

Hello,

What Hinton's Google Move Says About the Future of Machine Learning. Read this:

https://moalquraishi.wordpress.com/2013/03/17/what-hintons-google-move-says-about-the-future-of-machine-learning/

Amine Moulay Ramdane.
Ramine <ramine@1.1>: Jan 25 04:21PM -0800

Hello,

Is "Deep Learning" a Revolution in Artificial Intelligence? Read this:

http://www.newyorker.com/news/news-desk/is-deep-learning-a-revolution-in-artificial-intelligence

Thank you, Amine Moulay Ramdane.