Wednesday, June 9, 2021

Digest for comp.programming.threads@googlegroups.com - 8 updates in 7 topics

Amine Moulay Ramdane <aminer68@gmail.com>: Jun 08 03:08PM -0700

Hello,
 
 
More philosophy about my IQ and my personality..

I am a white Arab, and I think I am smart, since I have also invented many scalable algorithms and other algorithms..

You have to know me more: I have passed two certified, standardized IQ tests, and I have scored well above 115 IQ on both of them, so this means that I am highly smart. But I think that those standardized IQ tests do not measure my smartness well, since my brain's thinking is "strategic" thinking, and it is innate in me; I mean that my natural way of thinking is strategic thinking, and as you will notice, my inventions below are also strategic thinking. I will give you an example of strategic thinking so that you understand:

I notice the "complexity" of learning and understanding, but since strategic thinking is innate in me, I strategically make learning and understanding much less complex, so that I learn quickly and understand quickly. By logical analogy, it is like fighting someone much stronger than you: you can fight stupidly, by brute force, or you can search for the "weaknesses" of this much stronger opponent, find them, take advantage of them, and win the fight. That is also being strategic.
 
More about the energy efficiency of Transactional Memory and more..

In software engineering we have to take into account the Energy-Delay Product (EDP), where energy is the total energy consumption of the cores and delay is the amount of time needed to execute the applications. I invite you to read the following paper about the energy efficiency of Transactional Memory, and notice that the TCC-based HTM reaches only a 19% reduction in energy consumption on average, and that for the DVFS strategy it reaches a reduction of only approximately 59% in EDP, so that's not good; read the following paper to notice it:
 
https://computersystemslaboratory.files.wordpress.com/2015/02/baldassin2012sbac.pdf
 
Also, I have just read the following PhD thesis by Arab PhD researchers; it is also about the energy efficiency of Transactional Memory. Here it is:
 
Techniques for Enhancing the Efficiency of Transactional Memory Systems
 
http://kth.diva-portal.org/smash/get/diva2:1258335/FULLTEXT02.pdf
 
And I think it presents the best known energy-efficient algorithm for Transactional Memory, but I think it is not good, since look at how, for 64 cores, the Beta parameter can be 16 cores; so I think I am smart, and I have just invented a much more energy-efficient algorithm that solves the problem once and for all.
 
Here is my new invention of a scalable algorithm, along with my other new inventions..
 
I have just read the following PhD paper about the invention called counting networks, which are better than software combining trees:
 
Counting Networks
 
http://people.csail.mit.edu/shanir/publications/AHS.pdf
 
And I have read the following PhD paper:
 
http://people.csail.mit.edu/shanir/publications/HLS.pdf
 
So, as you notice, they say in the conclusion that:

"Software combining trees and counting networks which are the only techniques we observed to be truly scalable"

But I have just found that this counting networks algorithm is not generally scalable, and I have the logical proof; this is why I have just come up with a new invention that enhances the counting networks algorithm to be generally scalable. So you have to be careful with the current counting networks algorithm, which is not generally scalable.
 
More philosophy about my kind of work..

I have just written the following:
 
--
 
More philosophy about my way of doing things..

You have to know me more: I have just posted about Computer Science vs. Software Engineering, but I am not just doing Computer Science or Software Engineering, because I am an inventor of many scalable software algorithms and other algorithms, and I have invented some powerful software tools. So my way of doing things is to be innovative, creative, and inventive; I am like a PhD researcher, and I am writing some books about my inventions and about my powerful tools, etc.
 
--
 
I will give an example of how I am inventive and creative: I have just read the following book (among other books like it) by a PhD researcher about operational research and capacity planning. Here it is:
 
Performance by Design: Computer Capacity Planning by Example
 
https://www.amazon.ca/Performance-Design-Computer-Capacity-Planning/dp/0130906735
 
So I have just found that the methodologies of those PhD researchers for the e-business service don't work, because they do their calculations for a given arrival rate that is statistically and empirically measured from the behavior of customers, and I think that this is not correct. So I am being inventive, and I have come up with a new methodology that fixes the arrival rate from the data by using a hyperexponential service distribution (and it is mathematical), since it is also good for Denial-of-Service (DoS) attacks. I will write a powerful book that teaches my new methodology and explains the mathematics behind it, and I will sell it; my new methodology will work for cloud computing and for computer servers.
 
More about my inventions of scalable algorithms..

More precision about my new inventions of scalable algorithms..

Look at my powerful inventions below, LW_Fast_RWLockX and Fast_RWLockX, two powerful scalable RWLocks that are FIFO-fair, starvation-free, and costless on the reader side (that means with no atomics and no fences on the reader side). They use sys_membarrier expedited on Linux and FlushProcessWriteBuffers() on Windows, and if you look at the source code of my LW_Fast_RWLockX.pas and Fast_RWLockX.pas inside the zip file, you will notice that on Linux they call two functions, membarrier1() and membarrier2(): membarrier1() registers the process's intent to use MEMBARRIER_CMD_PRIVATE_EXPEDITED, and membarrier2() executes a memory barrier on each running thread belonging to the same process as the calling thread.
 
Read more here to understand:
 
https://man7.org/linux/man-pages/man2/membarrier.2.html
 
Here are my new powerful inventions of scalable algorithms..

I have just updated my powerful inventions LW_Fast_RWLockX and Fast_RWLockX, the two powerful scalable RWLocks described above that are FIFO-fair, starvation-free, and costless on the reader side, and now they work on both Linux and Windows. And I think my inventions are really smart, since the following PhD researcher says:
 
"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
So, as you have just noticed, he says that no known efficient reader-writer lock with starvation-freedom guarantees exists, and I think that my above powerful inventions of scalable reader-writer locks are efficient, FIFO-fair, and starvation-free.
 
LW_Fast_RWLockX is a lightweight scalable reader-writer mutex that uses a technique that looks like a Seqlock, but without looping on the reader side as a Seqlock does, and this has permitted the reader side to be costless; it is fair, it is of course starvation-free, and it does spin-wait. Fast_RWLockX is a lightweight scalable reader-writer mutex that uses the same technique, with the same costless reader side, fairness, and starvation-freedom, but it does not spin-wait; it waits on my SemaMonitor, so it is energy efficient.
 
You can read about them and download them from my website here:
 
https://sites.google.com/site/scalable68/scalable-rwlock
 
About the Linux sys_membarrier() expedited and the Windows FlushProcessWriteBuffers()..
 
I have just read the following webpage:
 
https://lwn.net/Articles/636878/
 
It is interesting, and it says:
 
---
 
Results in liburcu:
 
Operations in 10s, 6 readers, 2 writers:
 
memory barriers in reader: 1701557485 reads, 3129842 writes
signal-based scheme: 9825306874 reads, 5386 writes
sys_membarrier expedited: 6637539697 reads, 852129 writes
sys_membarrier non-expedited: 7992076602 reads, 220 writes
 
---
 
 
Look at how powerful "sys_membarrier expedited" is.

Cache-coherency protocols do not use IPIs, and as a user-space developer you do not care about IPIs at all; one is mostly interested in the cost of cache coherency itself. However, the Win32 API provides a function that issues IPIs to all processors (in the affinity mask of the current process): FlushProcessWriteBuffers(). You can use it to investigate the cost of IPIs.

When I ran a simple synthetic test on a dual-core machine, I obtained the following numbers:

420 cycles is the minimum cost of the FlushProcessWriteBuffers() function on the issuing core.

1600 cycles is the mean cost of the FlushProcessWriteBuffers() function on the issuing core.

1300 cycles is the mean cost of the FlushProcessWriteBuffers() function on the remote core.

Note that, as far as I understand, the function issues an IPI to the remote core, the remote core acks it with another IPI, and the issuing core waits for the ack IPI and then returns.

And the IPIs have the indirect cost of flushing the processor pipeline.
 
More about WaitAny() and WaitAll() and more..
 
Look at the following concurrency abstractions of Microsoft:
 
https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitany?view=netframework-4.8
 
https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitall?view=netframework-4.8
 
They look like the following WaitForAny() and WaitForAll() of Delphi; here they are:
 
http://docwiki.embarcadero.com/Libraries/Sydney/en/System.Threading.TTask.WaitForAny
 
http://docwiki.embarcadero.com/Libraries/Sydney/en/System.Threading.TTask.WaitForAll
 
So WaitForAll() is easy, and I have implemented it in my Threadpool engine that scales very well and that I have invented; you can read the HTML tutorial inside its zip file to learn how to do it, and you can download it from my website here:
 
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
 
And as for WaitForAny(), you can also do it using my SemaMonitor, and I will soon give you an example of how to do it; you can download my SemaMonitor invention from my website here:
 
https://sites.google.com/site/scalable68/semacondvar-semamonitor
 
Here are my other new software inventions..

I have just looked at the source code of the following multiplatform pevents library:

https://github.com/neosmart/pevents

Notice that its WaitForMultipleEvents() is implemented with pthreads, but it is not scalable on multicores. So I have just invented a WaitForMultipleObjects() that looks like the Windows WaitForMultipleObjects(), that is fully "scalable" on multicores, that works on Windows, Linux, and macOS, and that blocks while waiting for the objects, like WaitForMultipleObjects(), so it doesn't consume CPU cycles while waiting; and it works with events, futures, and tasks.
 
Here are my other new software inventions..

I have just invented a latch that is fully "scalable" on multicores and a thread barrier that is fully scalable on multicores; they are really powerful.

Read about the C++ latches and thread barriers, which are not scalable on multicores, here:
 
https://www.modernescpp.com/index.php/latches-and-barriers
 
 
Here are my other software inventions:
 
 
More about my scalable math Linear System Solver Library...
 
As you have just noticed, I have just spoken about my Linear System Solver Library (read below). Right now it scales very well, but I will soon make it "fully" scalable on multicores using one of the scalable algorithms that I have invented, and I will extend it much more to also support efficient matrix operations that are scalable on multicores, and more; and since it will come with one of my scalable algorithms that I have invented, I think I will sell it too.
 
More about mathematics and about scalable Linear System Solver Libraries and more..
 
I have just noticed that a software architect from Austria called Michael Rabatscher has designed and implemented the MrMath library, which is also a parallelized library.

Here he is:

https://at.linkedin.com/in/michael-rabatscher-6821702b

And here is his MrMath library for Delphi and Freepascal:

https://github.com/mikerabat/mrmath

But I think that he is not so smart, and I think I am smart like a genius, and I say that his MrMath library is not scalable on multicores; notice that the Linear System Solver of his MrMath library is not scalable on multicores either, and that the threaded matrix operations of his library are not scalable on multicores either. This is why I have invented a Conjugate Gradient Linear System Solver Library that is scalable on multicores, for C++, Delphi, and Freepascal; read about it in my following thoughts (I will also soon extend my library further to support scalable matrix operations):
 
About the SOR and Conjugate Gradient mathematical methods..

I have just looked at SOR (the Successive Over-Relaxation method), and I think it is much less powerful than the Conjugate Gradient method; read the following to notice it:
 
COMPARATIVE PERFORMANCE OF THE CONJUGATE GRADIENT AND SOR METHODS
FOR COMPUTATIONAL THERMAL HYDRAULICS
 
https://inis.iaea.org/collection/NCLCollectionStore/_Public/19/055/19055644.pdf?r=1&r=1
 
 
This is why I have implemented, in both C++ and Delphi, my Parallel Conjugate Gradient Linear System Solver Library that scales very well; read my following thoughts about it to understand more:
 
 
About the convergence properties of the conjugate gradient method
 
The conjugate gradient method can theoretically be viewed as a direct method, as, in the absence of round-off error, it produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. However, the conjugate gradient method is unstable with respect to even small perturbations; e.g., most directions are not in practice conjugate, and the exact solution is never obtained. Fortunately, the conjugate gradient method can be used as an iterative method, as it provides monotonically improving approximations to the exact solution, which may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear, and its speed is determined by the condition number κ(A) of the system matrix A: the larger κ(A) is, the slower the improvement.
 
Read more here:
 
http://pages.stat.wisc.edu/~wahba/stat860public/pdf1/cj.pdf
 
 
So I think my Conjugate Gradient Linear System Solver Library that scales very well is still very useful; read about it in my writing below:
 
Read the following interesting news:
 
The finite element method finds its place in games
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fhpc.developpez.com%2Factu%2F288260%2FLa-methode-des-elements-finis-trouve-sa-place-dans-les-jeux-AMD-propose-la-bibliotheque-FEMFX-pour-une-simulation-en-temps-reel-des-deformations%2F
 
But you have to be aware that the finite element method uses the Conjugate Gradient method for the solution of finite element problems; read here to notice it:
 
Conjugate Gradient Method for Solution of Large Finite Element Problems on CPU and GPU
 
https://pdfs.semanticscholar.org/1f4c/f080ee622aa02623b35eda947fbc169b199d.pdf
 
 
This is why I have also designed and implemented my Parallel Conjugate Gradient Linear System Solver library that scales very well; read more about it below.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 09 10:25AM +0200

You're manic, and manic people score a bit higher than average in IQ tests, but they also make many more mistakes.
Amine Moulay Ramdane <aminer68@gmail.com>: Jun 08 01:35PM -0700

Hello,
 
 
More about my scalable math Linear System Solver Library..

Since the finite element method uses the Conjugate Gradient method for the solution of finite element problems, I have designed and implemented my Parallel Conjugate Gradient Linear System Solver library that scales very well; here it is:
 
My Parallel C++ Conjugate Gradient Linear System Solver Library
that scales very well version 1.76 is here..
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a parallel implementation of a Conjugate Gradient Dense Linear System Solver that is NUMA-aware and cache-aware and that scales very well, and it also contains a parallel implementation of a Conjugate Gradient Sparse Linear System Solver that is cache-aware and that scales very well.

Sparse linear system solvers are ubiquitous in high performance computing (HPC) and are often the most computationally intensive parts of scientific computing codes. A few of the many applications relying on sparse linear solvers include fusion energy simulation, space weather simulation, climate modeling, environmental modeling, the finite element method, and large-scale reservoir simulations to enhance oil recovery in the oil and gas industry.

The Conjugate Gradient method is known to converge to the exact solution in n steps for a matrix of size n, and because of this it was historically first seen as a direct method.
Amine Moulay Ramdane <aminer68@gmail.com>: Jun 08 11:33AM -0700

Hello,
 
 
More philosophy about Globalization..
 
I think that globalization is a good thing, but you have to be smart
and know how to take advantage of it, so I invite you to read
the following about the positive effects of globalization on developing countries, such as:
 
1- Globalization has permitted developed nations to invest in less
developed nations, which has led to the creation of jobs for poor people.
 
2- The health and education systems in developing countries have benefited
from globalization.
 
And regarding globalization and the unequal distribution of income,
read my thoughts below from my philosophy about human existence, where I also speak about it.
 
So I invite you to read the following article:
 
The effects of Globalization on developing countries
 
Read more here:
 
https://medium.com/@BonillaXM/the-effects-of-globalization-on-developing-countries-1e465257c400
 
And you have to know how to take advantage of the exponential growth of capitalism, so I invite you to read the following article
that speaks about it:
 
Capitalism switches from linear to exponential growth
 
http://parisinnovationreview.com/articles-en/capitalism-switches-from-linear-to-exponential-growth
 
And I invite you to read my thoughts of my philosophy here:
 
https://groups.google.com/g/alt.culture.morocco/c/YZSYxV41-qI
 
Also, I invite you to read more of my thoughts here:
 
https://groups.google.com/g/comp.programming.threads/c/OjDTCDiawJw
 
And more here:
 
https://groups.google.com/g/alt.culture.morocco/c/ftf3lx5Rzxo
 
You can also read my thoughts about human smartness at the following link:
 
https://groups.google.com/g/alt.culture.morocco/c/Wzf6AOl41xs
 
More philosophy about human existence..
 
I also invite you to read my philosophy on my beautiful webpage below about human existence, where I also speak
about globalization and the unequal distribution of income:
 
https://scalable68.godaddysites.com/f/my-philosophy-about-human-existence
 
 
Thank you,
Amine Moulay Ramdane.
Amine Moulay Ramdane <aminer68@gmail.com>: Jun 08 08:52AM -0700

Hello,
 
 
 
More philosophy about the FBI..
 
I am a white Arab, and I think I am smart since I have also invented many scalable algorithms and other algorithms..
 
As you will notice, I am posting below a video about the early years of the FBI, which included dealing with gangsters and organized crime, locating draft dodgers and deserters, fighting the spread of communism in the United States, dealing with civil rights issues, and tackling international crime. But I invite you to read more about the new FBI and the 5 reasons why it is so effective:
 
Read more here:
 
5 Reasons Why the FBI Is So Effective
 
https://www.waldenu.edu/online-bachelors-programs/bs-in-criminal-justice/resource/five-reasons-why-the-fbi-is-so-effective
 
New FBI executives reflect Bureau's push for diversity
 
Read more here:
 
https://www.fbi.gov/news/stories/new-executives-reflect-fbis-push-for-diversity-051221
 
FBI tricks criminal groups into using messaging app, makes 800 arrests
 
Read more here:
 
https://www.cnet.com/news/fbi-tricks-criminal-groups-into-using-messaging-app-arrests-800-of-them/
 
800 criminals arrested in the biggest ever law enforcement operation against encrypted communication
 
Read more here:
 
https://www.europol.europa.eu/newsroom/news/800-criminals-arrested-in-biggest-ever-law-enforcement-operation-against-encrypted-communication
 
I invite you to look at the following interesting video
about the FBI, which fights criminality, since you need good security
to have a good spirit and good order, which is also a requirement for a good economy:
 
The FBI -- You Can't Get Away With It
 
https://www.youtube.com/watch?v=ni2SP6GAA1o
 
 
Thank you,
Amine Moulay Ramdane.
Amine Moulay Ramdane <aminer68@gmail.com>: Jun 08 06:24AM -0700

Hello,
 
 
DeepMind Asserts Reinforcement Learning Could Solve Artificial General Intelligence
 
Read more here:
 
https://www.nextbigfuture.com/2021/06/deep-mind-assert-reinforcement-learning-could-solve-artificial-general-intelligence.html
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
