Sunday, May 7, 2017

Digest for comp.programming.threads@googlegroups.com - 5 updates in 4 topics

Bonita Montero <Bonita.Montero@gmail.com>: May 07 10:52AM +0200

> ..., a cache-line transfer is around 400 CPU cycles on x86, ...
 
LOL.
rami17 <rami17@rami17.net>: May 06 09:15PM -0400

Hello,
 
 
I was thinking more about transactional memory..
 
Here is the problem with optimistic transactional memory:
 
If there are many conflicts between reads and writes, you have to
roll back and retry, and this makes it less energy-efficient and
slower than pessimistic locking mechanisms.
 
 
So I think that my C++ synchronization objects library is still useful..
 
You can download it from:
 
https://sites.google.com/site/aminer68/c-synchronization-objects-library
 
 
 
 
Thank you,
Amine Moulay Ramdane.
rami17 <rami17@rami17.net>: May 06 05:43PM -0400

Hello,
 
I have implemented a parallel hybrid divide-and-conquer merge algorithm
that runs 0.9 to 5.8 times faster than a sequential merge on a quad-core
processor, with the largest arrays showing speedups of more than 5x.
Parallel processing combined with a hybrid algorithm approach yields
high performance.
 
The idea:
 
Let's assume we want to merge sorted arrays X and Y. Select the median
element X[m] of X; the elements of X[.. m-1] are less than or equal to
X[m]. Using binary search, find the index k of the first element in Y
greater than X[m]; then the elements of Y[.. k-1] are also less than or
equal to X[m], while the elements of X[m+1 ..] and Y[k ..] are greater
than or equal to X[m]. So merge(X, Y) can be defined as

concat(merge(X[.. m-1], Y[.. k-1]), X[m], merge(X[m+1 ..], Y[k ..]))

and the two sub-merges can be done recursively in parallel before
concatenating the results.
 
As a result, my Parallel Sort Library now gives better performance and scalability.
 
You can download my powerful Parallel Sort Library from:
 
https://sites.google.com/site/aminer68/parallel-sort-library
 
Thank you,
Amine Moulay Ramdane.
rami17 <rami17@rami17.net>: May 06 04:39PM -0400

Hello,
 
I have implemented my Universal Scalability Law for Delphi and FreePascal..
 
Where do you use it?
 
You can use it, for example, to further optimize cost/performance on
multicore and manycore systems.
 
With the -nlr option, the problem is solved by mathematical nonlinear
regression, using the simplex method for minimization; if you don't
specify -nlr, it is solved by polynomial regression by default. Since it
uses regression, you can, for example, measure your system at just a few
points and let the regression extrapolate to many more cores, searching
for the cost/performance that is optimal for you.
 
Please read more about my Universal Scalability Law for Delphi and
FreePascal, it comes with a graphical and a command-line program.
 
You can read about it and download it from here:
 
https://sites.google.com/site/aminer68/universal-scalability-law-for-delphi-and-freepascal
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
