Friday, March 22, 2019

Digest for comp.programming.threads@googlegroups.com - 2 updates in 2 topics

Horizon68 <horizon@horizon.com>: Mar 21 12:55PM -0700

Hello..
 
 
Read again, I have corrected a last typo..
 
I have just "invented" a "highly" scalable Parallel Sort library
for shared memory architecture that supports highly scalable Mergesort,
Quicksort and Heapsort. I think it is the "best" one in shared memory
architecture because it also uses my other inventions of a fully
scalable FIFO queue and a fully scalable Threadpool. So I think I will
sell it to Google or to Microsoft or to Embarcadero or such big software
companies.
 
Also, I have just invented the following Parallel Sort Library that
supports a new and more efficient parallel merge algorithm that
improves the worst-case performance:
 
The parallel merge algorithm of my Parallel Sort Library, which you
will find here on my website:
 
https://sites.google.com/site/scalable68/parallel-sort-library
 
It is O(log(min(|A|,|B|))), where |A| is the size of A, since the
binary search is performed within the smaller array and is O(log N).
But the new parallel merge algorithm of my Parallel Sort Library is
O(log(|A|+|B|)), which is slightly worse. With further optimizations the
order was reduced to O(log(2*min(|A|,|B|))), which is better, but is 2X
more work, since both arrays may have to be searched. All algorithms are
logarithmic. Two binary searches were necessary to find an even split
that produces two equal or nearly equal halves. Luckily, this part of
the merge algorithm is not performance critical, so more effort can be
spent looking for a better split. This new algorithm in the parallel
merge balances the recursive binary tree of the divide-and-conquer and
improves the worst-case performance of parallel merge sort.
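The split step described above can be sketched as follows. This is a minimal illustration, not the library's actual code: the function name find_split is my own, and it shows only the standard technique of binary searching within the smaller array to find an even split of the combined input in O(log(min(|A|,|B|))) time. Each half of the split can then be merged by a separate thread, which is what keeps the recursive divide-and-conquer tree balanced.

```python
def find_split(a, b):
    """Given two sorted lists a and b, return (i, j) with
    i + j == (len(a) + len(b)) // 2 such that every element of
    a[:i] and b[:j] is <= every element of a[i:] and b[j:].
    The two halves can then be merged independently in parallel."""
    if len(a) > len(b):
        # Always binary search within the smaller array,
        # so the search costs O(log(min(|A|,|B|))).
        j, i = find_split(b, a)
        return i, j
    half = (len(a) + len(b)) // 2
    # i must leave enough elements in b to reach 'half' in total.
    lo, hi = max(0, half - len(b)), min(len(a), half)
    while lo < hi:
        i = (lo + hi) // 2
        j = half - i
        if a[i] < b[j - 1]:
            lo = i + 1      # too few elements taken from a; move right
        else:
            hi = i          # split is valid or too far; move left
    return lo, half - lo
```

For example, find_split([1, 3, 5, 7], [2, 4, 6, 8]) returns (2, 2): the left half {1, 3} + {2, 4} and the right half {5, 7} + {6, 8} are equal in size and every left element is <= every right element, so two threads can merge the halves independently.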
 
 
So stay tuned!
 
 
Thank you,
Amine Moulay Ramdane.
