- What is computing for me? - 1 Update
- What about today's computing? - 1 Update
- You have to understand my work.. - 1 Update
- About portability of my projects - 1 Update
Sky89 <Sky89@sky68.com>: Aug 05 01:34AM -0400

Hello..

What is computing for me? It is first finding an efficient "abstraction" and being able to hide or reduce "complexity". Loose coupling is also a way of hiding or reducing complexity, and I am always applying it in my software projects; to do it more efficiently you have to be smarter. I am also always looking to reduce the serial part of the parallel program efficiently: I calculate it in CPU cycles, so this part of my work is like "assembler", and it lets me "predict" scalability more correctly. Look for example at my Parallel Compression Library or my Parallel archiver here:

https://sites.google.com/site/scalable68/parallel-compression-library

https://sites.google.com/site/scalable68/parallel-archiver

Notice that it says:

- It's NUMA-aware and NUMA efficient on Windows (it parallelizes the memory reads and writes on NUMA nodes).

- It minimizes contention efficiently so that it scales well: it now uses only two threads that do the IO (and they are not contending), so that contention is reduced as much as possible.

- It now supports processor groups on Windows, so that it can use more than 64 logical processors and it scales well.

And notice that it says the following: I have done a quick calculation of the scalability prediction for my Parallel Compression Library, and I think it's good: it can scale beyond 100X on NUMA systems.

The Dynamic Link Libraries for Windows and the dynamic shared libraries for Linux of the compression and decompression algorithms of my Parallel Compression Library and of my Parallel archiver were compiled from C with optimization level 2 enabled, so they are very fast. So as you have noticed, I am making my Parallel Compression Library and Parallel archiver efficient.
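By the way, here is a minimal sketch in FreePascal (Delphi mode) of the kind of scalability prediction I am talking about, using Amdahl's law; the serial fraction used here is only an assumed example value, not the one measured from my library:

program AmdahlSketch;
{$mode delphi}

{ Amdahl's law: with a serial fraction S of the total work,
  the speedup on N cores is bounded by 1 / (S + (1 - S)/N). }
function PredictedSpeedup(S: Double; Cores: Integer): Double;
begin
  Result := 1.0 / (S + (1.0 - S) / Cores);
end;

begin
  { An assumed serial fraction of 0.5% on a 256-core NUMA system
    gives about 112X, which shows how a "beyond 100X" prediction
    can come out of this kind of calculation. }
  WriteLn('Predicted speedup: ', PredictedSpeedup(0.005, 256):0:1, 'X');
end.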
Computing for me is also finding a good and more efficient way to ensure reliability and stability; this part of my work has been enhanced by my "experience" in computing and by implementing more serious software.

Computing for me is also making my software portable. This is why I said before the following:

About portability of my software projects

I have thought about it more, and as you have noticed I have written Intel assembler routines, for 32-bit and 64-bit, for atomic increment and for atomic CompareExchange etc., so they now work on x86 AMD and Intel processors in 32-bit and 64-bit mode. But I will soon make my Delphi, FreePascal and C++ libraries portable to other CPUs like ARM (for Android) etc. For that I will use the following methods for Delphi:

http://docwiki.embarcadero.com/Libraries/XE8/en/System.SyncObjs.TInterlocked.CompareExchange

http://docwiki.embarcadero.com/Libraries/Tokyo/en/System.SyncObjs.TInterlocked.Exchange

And I will use the equivalent functions that you find inside FreePascal, here they are:

https://www.freepascal.org/docs-html/rtl/system/interlockedexchange64.html

https://www.freepascal.org/docs-html/rtl/system/interlockedexchange.html

https://www.freepascal.org/docs-html/rtl/system/interlockedcompareexchange.html

https://www.freepascal.org/docs-html/rtl/system/interlockedcompareexchange64.html

I will use them inside my scalable lock that is called scalable MLock, which I have "invented", so that it will be portable. Here it is:

https://sites.google.com/site/scalable68/scalable-mlock

And when my scalable MLock becomes portable on Delphi and FreePascal, I will use it to port all my other libraries that use atomic increment and decrement etc., so my libraries will become portable to other CPUs like ARM for Android etc. I think you will be happy with my work.
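To show the idea concretely, here is a minimal sketch in FreePascal (Delphi mode) of a lock built only on those portable primitives. Note that this is a plain test-and-set lock written just to illustrate replacing the assembler routines with the RTL functions; it is not the MLock algorithm, which you can read about at the link above. In Delphi the corresponding calls would be TInterlocked.CompareExchange and TInterlocked.Exchange.

program PortableLockSketch;
{$mode delphi}

type
  TSimpleLock = record
    State: LongInt; { 0 = free, 1 = held }
  end;

var
  Lock: TSimpleLock;

procedure AcquireLock(var L: TSimpleLock);
begin
  { InterlockedCompareExchange atomically sets State to 1 only if
    it is currently 0, and returns the value it found there. }
  while InterlockedCompareExchange(L.State, 1, 0) <> 0 do
    ThreadSwitch; { yield the CPU while the lock is held }
end;

procedure ReleaseLock(var L: TSimpleLock);
begin
  InterlockedExchange(L.State, 0); { publish the release atomically }
end;

begin
  Lock.State := 0;
  AcquireLock(Lock);
  { ... critical section ... }
  ReleaseLock(Lock);
end.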
Also, what is computing for me? Read the rest to understand my work better:

What about today's computing?

You have to know me more.. I am not trying to become an expert of "coding", I am not like that, because I am an "inventor": I have invented many scalable algorithms and their implementations to do better HPC (high performance computing). I am thinking about NUMA systems, and I am thinking about "scalability" on multicores and manycores and on NUMA systems etc. This is my way of "thinking", and as a proof, look at my new scalable reference counting with efficient support for weak references, here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

As you have noticed, it is "fully" scalable reference counting, so this is HPC, and I have implemented this scalable algorithm that I have "invented" in Delphi and in the Delphi mode of FreePascal, so as to make Delphi and FreePascal "much" better; and notice with me that you will not find it in C++ or Rust. This is my way of thinking: I am "inventing" scalable algorithms and their implementations.

And I said the following:

"I think that Parallel ForEach and ParallelFor are like futilities, because they don't bring 'enough' high-level abstraction to be considered interesting: my Threadpool with priorities that scales very well is capable of easily emulating a Parallel ForEach with 'priorities' and a ParallelFor with 'priorities' that scale very well, so there is no need to implement Parallel ForEach or Parallel For."

But to be "nicer", I think I will soon implement both a Parallel ForEach with "priorities" that scales very well and a ParallelFor with "priorities" that scales very well, using my Threadpool with priorities that scales very well, and they will be integrated as methods of that Threadpool, so that you will be happy (I sketch such an emulation near the end of this post).

So I will ask you: where will you find my Threadpool with priorities that scales very well? And where will you find my Parallel ForEach and Parallel For with priorities that scale very well? You will not find them in C++ and you will not find them in Rust, because I have "invented" them, because I am an "inventor", and this is my way of thinking.

Here is my powerful Threadpool with priorities that scales very well; read about it and download it from here:

https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

It is a very powerful Threadpool. To give more precision about why it scales very well: my Threadpool is much more scalable than the one of Microsoft. On the workers' side I am using scalable counting networks to distribute over the many queues or stacks, so it is scalable on the workers' side. On the consumers' side I am also using lock striping to be able to scale very well, so it is scalable on those parts too. For the remaining part, which is work stealing, I am using scalable counting networks, so globally it scales very well. And since work stealing is "rare", I think that my efficient Threadpool is really powerful; it is much more optimized, the scalable counting networks eliminate false sharing, and it works on both Windows and Linux.
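To give a concrete picture of the lock-striping part, here is a simplified sketch in FreePascal (Delphi mode). It is not my actual implementation: the stripe count is an assumed value, and a plain atomic round-robin counter stands in for the scalable counting network that does the distribution in the real Threadpool.

program StripingSketch;
{$mode delphi}

uses
  SyncObjs;

const
  NumStripes = 16; { an assumed stripe count }

var
  StripeLocks: array[0..NumStripes - 1] of TCriticalSection;
  NextStripe: LongInt = 0;
  I: Integer;

{ Each caller is directed to one of several independently locked
  stripes, so threads contend on different locks instead of on one
  global lock. A shared atomic counter is used here only to keep
  the sketch short. }
function PickStripe: Integer;
begin
  Result := (InterlockedIncrement(NextStripe) - 1) mod NumStripes;
end;

begin
  for I := 0 to NumStripes - 1 do
    StripeLocks[I] := TCriticalSection.Create;
  { a worker then protects only its own stripe: }
  I := PickStripe;
  StripeLocks[I].Acquire;
  { ... touch only the queue of stripe I ... }
  StripeLocks[I].Release;
end.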
You have to understand my work.. I have invented many scalable algorithms and their implementations. Here are some of them that I have "invented":

1- Scalable Threadpools that are powerful
2- Scalable RWLocks of different sorts
3- Scalable reference counting with efficient support for weak references
4- Scalable FIFO queues that are node-based and array-based
5- My Scalable Varfiler
6- A scalable parallel implementation of a Conjugate Gradient Dense Linear System Solver library that is NUMA-aware and cache-aware, and also a scalable parallel implementation of a Conjugate Gradient Sparse Linear System Solver library that is cache-aware
7- Scalable MLock, which is a scalable lock
8- Scalable SeqlockX

And there are also "many" other scalable algorithms that I have "invented". You can find some of my scalable algorithms and their implementations in Delphi, FreePascal and C++ on my website here:

https://sites.google.com/site/scalable68/

What I am doing by "inventing" many scalable algorithms and their implementations is wanting to make "Delphi" much better and to make FreePascal in its "Delphi" mode much better; my scalable algorithms and their implementations are HPC (high performance computing). And as you have noticed, I also said:

You will ask: why have I invented many scalable algorithms and their implementations? Because my work will also permit us to "revolutionise" science and technology, because it is HPC. This is why I will also sell some of my scalable algorithms and their implementations to companies such as Google or Microsoft or Embarcadero. HPC has revolutionised the way science is performed: supercomputing is needed for processing sophisticated computational models able to simulate the cellular structure and functionalities of the brain, which should enable us to better understand how our brain works and how we can cope with diseases such as those linked to ageing. To understand more about HPC, read more here:

https://ec.europa.eu/digital-single-market/en/blog/why-do-supercomputers-matter-your-everyday-life

I will also enhance my Parallel archiver and my Parallel Compression Library, which are powerful and work with both C++Builder and Delphi, and perhaps sell them to Embarcadero, which sells Delphi and C++Builder.

Also, as I said above, I will soon implement a "scalable" Parallel For and a Parallel ForEach with "priorities", integrated as methods of my Threadpool with priorities.
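Until then, here is the promised rough sketch of how such a ParallelFor emulation can work on top of a threadpool, in FreePascal (Delphi mode). The task record and the inline RunRangeTask call are placeholders: the real emulation would submit each chunk to my Threadpool with the given priority instead of running it inline, so the Priority parameter is unused in this sketch.

program ParallelForSketch;
{$mode delphi}

type
  TForBody = procedure(I: Integer);

  { one chunk of the iteration range, meant to run as one task }
  TRangeTask = record
    Body: TForBody;
    StartIdx, StopIdx: Integer;
  end;

procedure RunRangeTask(const T: TRangeTask);
var
  I: Integer;
begin
  for I := T.StartIdx to T.StopIdx do
    T.Body(I);
end;

{ Split [Low..High] into chunks and hand each chunk to the pool as
  one prioritized task. }
procedure EmulatedParallelFor(Low, High, ChunkSize, Priority: Integer;
                              Body: TForBody);
var
  T: TRangeTask;
begin
  while Low <= High do
  begin
    T.Body := Body;
    T.StartIdx := Low;
    T.StopIdx := Low + ChunkSize - 1;
    if T.StopIdx > High then
      T.StopIdx := High;
    { here the real emulation submits T to the Threadpool with the
      given Priority and returns; we run it inline instead: }
    RunRangeTask(T);
    Low := T.StopIdx + 1;
  end;
  { the real emulation then waits for all submitted tasks }
end;

procedure PrintIndex(I: Integer);
begin
  WriteLn('iteration ', I);
end;

begin
  EmulatedParallelFor(1, 10, 4, 0, PrintIndex);
end.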
And my next step soon is also to make my Delphi, FreePascal and C++ libraries portable to other CPUs like ARM etc., because currently they work on x86 AMD and Intel CPUs. And my next step soon is also to make my "scalable" RWLocks NUMA-aware and efficient on NUMA.

This was some parts of my everyday computing..

Thank you,
Amine Moulay Ramdane. |