
Digest for comp.programming.threads@googlegroups.com - 22 updates in 4 topics

aminer68@gmail.com: Aug 15 01:58PM -0700

Hello,
 
 
My Scalable RWLocks were updated to version 4.32
 
 
I have just used the Windows FlushProcessWriteBuffers() API in my new scalable LW_Fast_RWLockX and my new scalable Fast_RWLockX.
 
The FlushProcessWriteBuffers() API does the following (see the sketch after this list):
 
- Implicitly executes a full memory barrier on all other processors.
- Generates an interprocessor interrupt (IPI) to all processors that are part of the current process affinity.
- Uses the IPI to "synchronously" signal all processors.
- Guarantees the visibility of write operations performed on one processor to the other processors.
- Has been supported since Windows Vista and Windows Server 2008.
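 
To make the pattern concrete, here is a minimal FreePascal sketch of the asymmetric-barrier idiom that this API enables: readers execute only plain stores and loads on their fast path, while the writer pays the synchronization cost by calling FlushProcessWriteBuffers() before scanning the reader flags. The slot layout, flag names, and back-off policy are illustrative assumptions of mine, not the actual code of LW_Fast_RWLockX or Fast_RWLockX:
 
{$mode delphi}
program AsymBarrierSketch;

uses
  Windows;

// Exported by kernel32.dll since Windows Vista; takes no arguments.
procedure FlushProcessWriteBuffers; stdcall;
  external 'kernel32.dll' name 'FlushProcessWriteBuffers';

const
  MaxReaders = 64;  // hypothetical fixed number of reader slots

var
  ReaderFlag: array[0..MaxReaders - 1] of LongInt;  // one slot per reader
  WriterPending: LongInt = 0;

// Reader fast path: plain stores and loads only; the writer's call to
// FlushProcessWriteBuffers() stands in for the memory fence we omit here.
procedure EnterRead(Slot: Integer);
begin
  repeat
    ReaderFlag[Slot] := 1;          // announce this reader (plain store)
    if WriterPending = 0 then
      Exit;                         // no writer pending: read lock held
    ReaderFlag[Slot] := 0;          // back off while a writer is pending
    while WriterPending <> 0 do
      Sleep(0);
  until False;
end;

procedure LeaveRead(Slot: Integer);
begin
  ReaderFlag[Slot] := 0;
end;

// Writer slow path: publish the intent to write, then drain every core's
// write buffer so the readers' flag stores become visible before scanning.
// A complete lock would also serialize concurrent writers against each other.
procedure EnterWrite;
var
  i: Integer;
begin
  InterlockedExchange(WriterPending, 1);
  FlushProcessWriteBuffers;         // IPI-based full barrier on all cores
  for i := 0 to MaxReaders - 1 do
    while ReaderFlag[i] <> 0 do
      Sleep(0);                     // wait for in-flight readers to drain
end;

procedure LeaveWrite;
begin
  InterlockedExchange(WriterPending, 0);
end;

begin
end.
 
This is the usual asymmetric trade: reads become nearly free, while each write absorbs the IPI cost discussed below.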
 
To investigate the cost of these IPIs, I ran a simple synthetic test on a quad-core machine and obtained the following numbers:
 
420 cycles is the minimum cost of the FlushProcessWriteBuffers() function on the issuing core.
 
1600 cycles is the mean cost of the FlushProcessWriteBuffers() function on the issuing core.
 
1300 cycles is the mean cost of the FlushProcessWriteBuffers() function on a remote core.
 
Note that, as far as I understand, the function issues an IPI to each remote core, the remote core acknowledges it with another IPI, and the issuing core waits for the acknowledgement IPI before returning.
 
The IPIs also carry the indirect cost of flushing the processor pipeline.
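 
For reference, a small harness along the following lines can reproduce such a measurement on the issuing core. This is only a sketch: the iteration count and the use of QueryPerformanceCounter() are my assumptions, and it reports mean nanoseconds per call rather than cycles:
 
{$mode delphi}
program FlushCostSketch;

// Imports declared directly so the sketch stays self-contained.
procedure FlushProcessWriteBuffers; stdcall;
  external 'kernel32.dll' name 'FlushProcessWriteBuffers';
function QueryPerformanceCounter(out Count: Int64): LongBool; stdcall;
  external 'kernel32.dll' name 'QueryPerformanceCounter';
function QueryPerformanceFrequency(out Freq: Int64): LongBool; stdcall;
  external 'kernel32.dll' name 'QueryPerformanceFrequency';

const
  Iterations = 100000;  // hypothetical iteration count

var
  StartCount, StopCount, Freq: Int64;
  i: Integer;
begin
  QueryPerformanceFrequency(Freq);
  QueryPerformanceCounter(StartCount);
  for i := 1 to Iterations do
    FlushProcessWriteBuffers;       // the call under measurement
  QueryPerformanceCounter(StopCount);
  WriteLn('Mean cost on the issuing core: ',
    (StopCount - StartCount) * 1e9 / Freq / Iterations:0:1, ' ns');
end.
 
To convert the result to cycles, multiply the mean time per call by the core clock frequency.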
 
My new scalable LW_Fast_RWLockX and my scalable Fast_RWLockX are starvation-free and fair. If the write section and the read section are of the same size and writes make up 0.1% of the operations, they will scale to 1333x, and under the same scenario my other scalable RWLocks will also scale to 1333x.
 
 
You can download my powerful scalable RWLock inventions from:
 
https://sites.google.com/site/scalable68/scalable-rwlock
 
 
 
Thank you,
Amine Moulay Ramdane.
Elephant Man <conanospamic@gmail.com>: Aug 15 07:26PM

Cancellation article posted via Nemo.
aminer68@gmail.com: Aug 15 10:48AM -0700

Hello,
 
 
My portable and efficient implementation of a future in Delphi and FreePascal was updated to version 1.23.
 
I think it is working correctly now.
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/a-portable-and-efficient-implementation-of-a-future-in-delphi-and-freepascal
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
