Tuesday, December 9, 2014

Digest for comp.programming.threads@googlegroups.com - 8 updates in 5 topics

comp.programming.threads@googlegroups.com Google Groups
Unsure why you received this message? You previously subscribed to digests from this group, but we haven't been sending them for a while. We fixed that, but if you don't want to get these messages, send an email to comp.programming.threads+unsubscribe@googlegroups.com.
Ramine <ramine@1.1>: Dec 08 07:09PM -0800

Hello,
 
 
We have to be more precise in computer science...
 
To be more sure, I have just benchmarked a cache-line
transfer between cores, and you will not believe it:
it is very expensive on x86, taking around 800 CPU cycles.
So I think I reasoned correctly when I said that the following
reader-writer lock (by Joe Duffy, an architect at Microsoft) is garbage.
 
So please reread carefully my reasoning that follows,
because it is correct:
 
 
I must say that we have to be careful: I have just read
the following webpage about a more scalable reader/writer lock
by an architect at Microsoft called Joe Duffy. But you have to be
careful, because this reader/writer lock is not really scalable,
it is garbage, and I will think as an architect and explain to you why...
 
Here is the webpage, and my explanation follows...
 
 
http://joeduffyblog.com/2009/02/20/a-more-scalable-readerwriter-lock-and-a-bit-less-harsh-consideration-of-the-idea/
 
 
So look inside the EnterWriteLock() of the reader/writer lock above:
you will notice that it first executes Interlocked.Exchange(ref
m_writer, 1), which atomically sets m_writer to 1
so as to block readers from entering the read section.
But this is garbage, because look at what he does after that:
 
for (int i = 0; i < m_readers.Length; i++)
    while (m_readers[i].m_taken != 0) sw.SpinOnce();
 
 
So after setting m_writer to 1 to block the readers,
he transfers many cache lines between cores, and this is really
expensive: it makes the serial part of Amdahl's law bigger
and bigger as more and more cores are used. So this will not
scale; it is garbage.
 
Dmitry Vyukov's distributed reader-writer mutex doesn't have this
weakness; look at the source code here:
 
http://www.1024cores.net/home/lock-free-algorithms/reader-writer-problem/distributed-reader-writer-mutex
 
 
Because he does this on the distr_rw_mutex_wrlock() side:
 
for (i = 0; i != mtx->proc_count; i += 1)
    pthread_rwlock_wrlock(&mtx->cell[i].mtx);
 
 
So we have to be smart here and notice that as the "i" counter
variable goes from 0 to proc_count, the reader side is still
allowed to enter the read section again and again under contention.
So, in contrast with the reader-writer lock above, this part of the
distributed lock does not count only toward the serial part of
Amdahl's law: because it also allows the reader threads to keep
entering the read section, it contains a parallel part as well, and
this is what makes this distributed reader-writer lock scale
effectively. My Distributed sequential lock is even better, because
it scales even better than Dmitry Vyukov's distributed reader-writer
mutex.
 
 
 
I hope you have understood my architect's way of thinking.
 
 
 
Thank you for your time.
 
 
 
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 05:25PM -0800

Hello,
 
 
I think I have made a mistake about the following reader-writer lock of
Joe Duffy:
 
http://joeduffyblog.com/2009/02/20/a-more-scalable-readerwriter-lock-and-a-bit-less-harsh-consideration-of-the-idea/
 
 
Since a cache-line transfer between cores is not expensive at all,
the serial part of Amdahl's law for the reader-writer lock
above will stay small even with more and more cores,
and this makes the reader-writer lock above of Joe Duffy
really scalable in read-mostly scenarios.
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 03:16PM -0800

Hello,
 
 
I must say that we have to be careful: I have just read
the following webpage about a more scalable reader/writer lock
by an architect at Microsoft called Joe Duffy. But you have to be
careful, because this reader/writer lock is not really scalable,
it is garbage, and I will think as an architect and explain to you why...
 
Here is the webpage, and my explanation follows...
 
 
http://joeduffyblog.com/2009/02/20/a-more-scalable-readerwriter-lock-and-a-bit-less-harsh-consideration-of-the-idea/
 
 
So look inside the EnterWriteLock() of the reader/writer lock above:
you will notice that it first executes Interlocked.Exchange(ref
m_writer, 1), which atomically sets m_writer to 1
so as to block readers from entering the read section.
But this is garbage, because look at what he does after that:
 
for (int i = 0; i < m_readers.Length; i++)
    while (m_readers[i].m_taken != 0) sw.SpinOnce();
 
 
So after setting m_writer to 1 to block the readers,
he transfers many cache lines between cores, and this is really
expensive: it makes the serial part of Amdahl's law bigger
and bigger as more and more cores are used. So this will not
scale; it is garbage.
 
Dmitry Vyukov's distributed reader-writer mutex doesn't have this
weakness; look at the source code here:
 
http://www.1024cores.net/home/lock-free-algorithms/reader-writer-problem/distributed-reader-writer-mutex
 
 
Because he does this on the distr_rw_mutex_wrlock() side:
 
for (i = 0; i != mtx->proc_count; i += 1)
    pthread_rwlock_wrlock(&mtx->cell[i].mtx);
 
 
So we have to be smart here and notice that as the "i" counter
variable goes from 0 to proc_count, the reader side is still
allowed to enter the read section again and again under contention.
So, in contrast with the reader-writer lock above, this part of the
distributed lock does not count only toward the serial part of
Amdahl's law: because it also allows the reader threads to keep
entering the read section, it contains a parallel part as well, and
this is what makes this distributed reader-writer lock scale
effectively. My Distributed sequential lock is even better, because
it scales even better than Dmitry Vyukov's distributed reader-writer
mutex.
 
 
 
I hope you have understood my architect's way of thinking.
 
 
 
Thank you for your time.
 
 
 
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 03:25PM -0800

I corrected some typos; please read again...
 
Hello,
 
 
I must say that we have to be careful: I have just read
the following webpage about a more scalable reader/writer lock
by an architect at Microsoft called Joe Duffy. But you have to be
careful, because this reader/writer lock is not really scalable,
it is garbage, and I will think as an architect and explain to you why...
 
Here is the webpage, and my explanation follows...
 
 
http://joeduffyblog.com/2009/02/20/a-more-scalable-readerwriter-lock-and-a-bit-less-harsh-consideration-of-the-idea/
 
 
So look inside the EnterWriteLock() of the reader/writer lock above:
you will notice that it first executes Interlocked.Exchange(ref
m_writer, 1), which atomically sets m_writer to 1
so as to block readers from entering the read section.
But this is garbage, because look at what he does after that:
 
for (int i = 0; i < m_readers.Length; i++)
    while (m_readers[i].m_taken != 0) sw.SpinOnce();
 
 
So after setting m_writer to 1 to block the readers,
he transfers many cache lines between cores, and this is really
expensive: it makes the serial part of Amdahl's law bigger
and bigger as more and more cores are used. So this will not
scale; it is garbage.
 
Dmitry Vyukov's distributed reader-writer mutex doesn't have this
weakness; look at the source code here:
 
http://www.1024cores.net/home/lock-free-algorithms/reader-writer-problem/distributed-reader-writer-mutex
 
 
Because he does this on the distr_rw_mutex_wrlock() side:
 
for (i = 0; i != mtx->proc_count; i += 1)
    pthread_rwlock_wrlock(&mtx->cell[i].mtx);
 
 
So we have to be smart here and notice that as the "i" counter
variable goes from 0 to proc_count, the reader side is still
allowed to enter the read section again and again under contention.
So, in contrast with the reader-writer lock above, this part of the
distributed lock does not count only toward the serial part of
Amdahl's law: because it also allows the reader threads to keep
entering the read section, it contains a parallel part as well, and
this is what makes this distributed reader-writer lock scale
effectively. My Distributed sequential lock is even better, because
it scales even better than Dmitry Vyukov's distributed reader-writer
mutex.
 
 
 
I hope you have understood my architect's way of thinking.
 
 
 
Thank you for your time.
 
 
 
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 12:13PM -0800

Hello,
 
 
In my previous posts about intelligence, I was speaking about the
"hardware", I mean about the "brain". So I was speaking about genetics
and intelligence; I was not speaking about knowledge.
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 10:58AM -0800

Hello,
 
I thought yesterday about what intelligence is,
so here is my theory about intelligence, and I think it
is correct...
 
The way the neurons in the brain are interconnected constructs a
different kind of architecture in each person's brain. The way the
neurons are interconnected can also make the brain very fast, and can
make the brain's searching mechanism faster than software or hardware
breadth-first search and depth-first search: the neurons are
interconnected in ways that construct faster architectures, using
different and faster searching techniques than breadth-first and
depth-first search, so that the solution to a problem can come very
fast. This is what we call intelligence. But there is a weakness in
humans: I think IQ tests are made up of small problems, small in terms
of "complexity", and because the problems of IQ tests are small in
complexity, an intelligent human can solve them fast, since the way
the neurons in his brain are interconnected makes the brain's
searching mechanism very fast. But if you make IQ tests more complex
than they are, I don't think an intelligent person will pass those
tests with high scores; and if you hide some parts of a problem, that
will make the problem hard for an intelligent human to solve.
 
 
This is my theory about what intelligence is, and I think it is correct.
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 11:55AM -0800

On 12/8/2014 10:59 AM, Ramine wrote:
> are interconnected can also make the brain very fast, and can make
> the searching mechanism of the brain faster than the software or
>hardware Breadth first search and deep first search
 
 
 
I have said that IQ tests are small in terms of complexity. What I
wanted to say is that their complexity is relative to the complexity
of all the problems found in our universe and in theory, etc. I did
not want to say that their complexity is relative to the capacity of
the human who wants to pass those IQ tests.
 
 
Hope you have understood.
 
 
 
Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Dec 08 12:01PM -0800

Hello,
 
I was speaking about the "hardware", I mean about the "brain".
So I was speaking about genetics and intelligence; I was not speaking
about knowledge.
 
 
Thank you,
Amine Moulay Ramdane.
