- Read again.. - 2 Updates
- Here is my new invention - 1 Update
- Intel Is Building the World's Most Powerful Supercomputer - 1 Update
- About the algorithms of my ParallelSort library - 1 Update
- My StringTree was updated to version 1.52 - 1 Update
- My Parallel Sort Library was updated to version 3.62 - 1 Update
Horizon68 <horizon@horizon.com>: Mar 25 11:55AM -0700 Hello, Read the following: My Parallel Sort Library was updated to version 3.62. I have enhanced it and tested it thoroughly, and I think it is now stable, fast, and more scalable. Be aware that my parallel sort library uses my own parallel algorithm; take a look at the source code and you will notice that it is powerful, since both the sort and the merge are parallelized. You can download it from: https://sites.google.com/site/scalable68/parallel-sort-library Thank you, Amine Moulay Ramdane. |
Horizon68 <horizon@horizon.com>: Mar 25 05:01PM -0700 Hello... As you have noticed, I am a white Arab, and I am a more serious computer programmer; I am also an "inventor" of many scalable algorithms and their implementations. Here is my new invention: I have just "invented" a "highly" scalable Parallel Sort library for shared-memory architectures that supports a highly scalable Mergesort, a highly scalable Quicksort, and a highly scalable Heapsort. I think that this new invention of mine is the "best" one for shared-memory architectures because it also uses my other inventions of a fully scalable FIFO queue and a fully scalable Threadpool. So I think I will sell it to Google, Microsoft, Embarcadero, or other such big software companies. I have also just invented the following Parallel Sort Library, which supports a new and more efficient parallel merge algorithm that improves the worst-case performance: my algorithm for finding the median of the parallel merge of my Parallel Sort Library, which you will find on my website: https://sites.google.com/site/scalable68/parallel-sort-library is O(log(min(|A|,|B|))), where |A| is the size of A, since the binary search is performed within the smaller array and is O(lg N). But the new algorithm for finding the median of the parallel merge of my Parallel Sort Library is O(log(|A|+|B|)), which is slightly worse. With further optimizations the order was reduced to O(log(2*min(|A|,|B|))), which is better, but is 2X more work, since both arrays may have to be searched. All algorithms are logarithmic. Two binary searches were necessary to find an even split that produces two equal or nearly equal halves. Luckily, this part of the merge algorithm is not performance critical, so more effort can be spent looking for a better split. This new algorithm in the parallel merge balances the recursive binary tree of the divide-and-conquer and improves the worst-case performance of parallel merge sort. So stay tuned!
Thank you, Amine Moulay Ramdane. |
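The even split discussed above can be illustrated with a short sketch. This is a minimal Python illustration of one common way to find a split that produces two equal or nearly equal halves, using a binary search over the smaller array; it is not the library's actual source, and the helper name `balanced_split` is hypothetical:

```python
def balanced_split(a, b):
    """Given two sorted lists a and b, return (i, j) such that
    i + j == (len(a) + len(b)) // 2 and every element of
    a[:i] + b[:j] is <= every element of a[i:] + b[j:].
    The binary search runs over the smaller list only, so the
    cost is O(log(min(|A|, |B|)))."""
    if len(a) > len(b):               # always search the smaller list
        j, i = balanced_split(b, a)
        return i, j
    m, n = len(a), len(b)
    half = (m + n) // 2
    lo, hi = 0, m
    while True:
        i = (lo + hi) // 2            # candidate split in a
        j = half - i                  # forced split in b
        if i < m and j > 0 and b[j - 1] > a[i]:
            lo = i + 1                # i is too small
        elif i > 0 and j < n and a[i - 1] > b[j]:
            hi = i - 1                # i is too large
        else:
            return i, j
```

Once such a split is found, the two halves can be merged independently, which is what balances the recursive divide-and-conquer tree.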
Horizon68 <horizon@horizon.com>: Mar 25 03:12PM -0700 Hello, Read this: Intel Is Building the World's Most Powerful Supercomputer Read more here: https://singularityhub.com/2019/03/25/intel-is-building-the-worlds-most-powerful-supercomputer/#sm.0000jl57td15saeevxu0p87884x2f Thank you, Amine Moulay Ramdane. |
Horizon68 <horizon@horizon.com>: Mar 25 12:31PM -0700 Hello.. About the algorithms of my ParallelSort library: the algorithm of my ParallelSort library for finding the median in the parallel merge is O(log(min(|A|,|B|))), where |A| is the size of A, since the binary search is performed within the smaller array and is O(lg N). The idea: let's assume we want to merge sorted arrays X and Y. Select the median element X[m] in X. Elements in X[ .. m-1] are less than or equal to X[m]. Using binary search, find the index k of the first element in Y greater than X[m]. Thus Y[ .. k-1] are less than or equal to X[m] as well. Elements in X[m+1 .. ] are greater than or equal to X[m], and Y[k .. ] are greater. So merge(X, Y) can be defined as concat(merge(X[ .. m-1], Y[ .. k-1]), X[m], merge(X[m+1 .. ], Y[k .. ])); now we can recursively, in parallel, do merge(X[ .. m-1], Y[ .. k-1]) and merge(X[m+1 .. ], Y[k .. ]) and then concatenate the results. I will enhance the above algorithm of finding the median with a new efficient algorithm that is O(log(2*min(|A|,|B|))); that is 2X more work, since both arrays may have to be searched. Two binary searches are necessary to find an even split that produces two equal or nearly equal halves. Luckily, this part of the merge algorithm is not performance critical, so more effort can be spent looking for a better split. This new algorithm in the parallel merge balances the recursive binary tree of the divide-and-conquer and improves the worst-case performance of my ParallelSort library. So stay tuned: my ParallelSort library with this new algorithm above is coming soon! Thank you, Amine Moulay Ramdane. |
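The recursive scheme described above can be sketched as follows. This is a minimal Python illustration, not the library's actual source; the function name `pmerge` is hypothetical, and the recursion is written sequentially here, though the two sub-merges are independent and could be dispatched to separate threads:

```python
import bisect

def pmerge(x, y):
    """Merge two sorted lists by divide-and-conquer: pick the
    middle element of the larger list, binary-search the smaller
    list for its insertion point, then merge the two halves
    independently and concatenate the results."""
    if len(x) < len(y):
        x, y = y, x                   # take the median from the larger list
    if not x:                         # both lists empty
        return []
    m = len(x) // 2
    k = bisect.bisect_left(y, x[m])   # index of first y element >= x[m]
    # concat(merge(X[..m-1], Y[..k-1]), X[m], merge(X[m+1..], Y[k..]))
    return pmerge(x[:m], y[:k]) + [x[m]] + pmerge(x[m + 1:], y[k:])
```

Note that the binary search is performed within the smaller of the two lists, matching the O(log(min(|A|,|B|))) cost stated above.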
Horizon68 <horizon@horizon.com>: Mar 25 12:03PM -0700 Hello.. My StringTree was updated to version 1.52. I have enhanced it further, and I think it is stable and fast. You can download it from: https://sites.google.com/site/scalable68/stringtree Thank you, Amine Moulay Ramdane. |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |