Monday, July 13, 2020

Digest for comp.programming.threads@googlegroups.com - 5 updates in 5 topics

aminer68@gmail.com: Jul 13 04:47AM -0700

Hello,
 
 
More precision about Wait-free and Lock-free and about OneFile..
 
I am a white Arab, and I think I am smart like a "genius",
since I have invented many scalable algorithms and their implementations, and today I will make you understand more:
 
Look at this new invention of a PhD researcher:
 
OneFile - The world's first wait-free Software Transactional Memory
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html
 
And look carefully at this video about it by its inventor:
 
Pedro Ramalhete — Wait-free data structures and wait-free transactions
 
https://www.youtube.com/watch?v=oDfr9w9p8XY
 
So I think that this invention is not so smart, because
I have just read its source code here:
 
https://github.com/pramalhe/OneFile/blob/master/stms/OneFileWF.hpp
 
And it says the following:
 
Progress condition of the readers transactions: wait-free (bounded by the number of threads + MAX_READ_TRIES)
 
and:
 
The progress condition of the writers transactions: wait-free (bounded
by the number of threads)
 
1- So as you are noticing, it has a weakness, and it is the same one as the Lock-free case: it wastes CPU time on ensuring the progress condition, and this wasted time is not good for scalability, so it is not good for parallel programming. For the algorithm to "generalize", it also has to work correctly not only in the read-mostly case but also in cases where the writer transactions are a greater proportion of the whole of the reader and writer transactions, so my scalable and fair lock-based read-write locks are better in this regard; read below about my new invention of the Holy Grail of locks that I will use, and you will notice that it is powerful.
 
2- The second weakness of the above invention is that it is a complex
algorithm, and this complexity is not good; I think that lock-based algorithms are much simpler, so they are easier to think about and to implement.
 
I have invented a scalable algorithm that is a scalable fast Mutex, which is remarkable and which is the Holy Grail of scalable locks. It has the following characteristics; read my following thoughts to understand:
 
About fair and unfair locking..
 
I have just read the following from a lead engineer at Amazon:
 
Highly contended and fair locking in Java
 
https://brooker.co.za/blog/2012/09/10/locking.html
 
So as you are noticing, you can use unfair locking, which can suffer from starvation, or fair locking, which is slower than unfair locking.
 
I think that Microsoft synchronization objects like the Windows critical section use unfair locking, so they can still suffer from starvation.
 
But I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my scalable Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that; read about it in my following thoughts:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
I understand this video, but I have to explain my work:
 
I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example:
 
Read the following from the senior research scientist Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better, and I think mine is the "best" and that you will not find it anywhere. My new scalable Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than the scalable MCS lock
7- Not prone to convoying
 
 
And about composability of lock-based systems now:
 
Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable:
 
"Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.
 
There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
And you have to look here at our DelphiConcurrent and FreepascalConcurrent:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jul 12 07:34PM -0700

Hello,
 
 
What is it to be a genius?
 
To be a genius is also to be smart like a genius.
 
And since I am a white Arab, and since I think I am smart like a genius, since I have invented many scalable algorithms and their implementations, I will explain what it is to be a genius:
 
Read my following thoughts and you will notice:
 
https://community.idera.com/developer-tools/general-development/f/getit-and-third-party/72018/about-the-threadpool
 
Notice that i am saying the following:
 
"But i think that both the methodologies from Microsoft of the Hill Climbing and of the Control Theory using band pass filter or match filter and discrete Fourier transform have a weakness, their weakness is that they are "localized" optimization that maximize the throughput, so they are not fair"
 
So I think that it is smart, and notice that I am saying: ""localized" optimization that maximize the throughput, so they are not fair". This shows by logical analogy what it is to be smart like a genius, since being smart like a genius is also inventing algorithms that have a high degree of difficulty and trying, as best as the genius's smartness allows, to generalize and to maximize optimization; so, for example, the genius will invent an algorithm with a high degree of difficulty that has all the characteristics of the Holy Grail algorithm.
 
And in the dictionary optimization means:
 
https://dictionary.cambridge.org/dictionary/english/optimization
 
It means:
 
"the act of making something as good as possible"
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jul 12 05:48PM -0700

Hello,
 
 
About Wait-free and Lock-free and about OneFile..
 
I am a white Arab, and I think I am smart like a "genius",
since I have invented many scalable algorithms and their implementations, and today I will make you understand more:
 
Look at this new invention of a PhD researcher:
 
OneFile - The world's first wait-free Software Transactional Memory
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html
 
And look carefully at this video about it by its inventor:
 
Pedro Ramalhete — Wait-free data structures and wait-free transactions
 
https://www.youtube.com/watch?v=oDfr9w9p8XY
 
So I think that this invention is not so smart, because
I have just read its source code here:
 
https://github.com/pramalhe/OneFile/blob/master/stms/OneFileWF.hpp
 
And it says the following:
 
Progress condition of the readers transactions: wait-free (bounded by the number of threads + MAX_READ_TRIES)
 
and:
 
The progress condition of the writers transactions: wait-free (bounded
by the number of threads)
 
1- So as you are noticing, it has a weakness, and it is the same one as the Lock-free case: it wastes CPU time on ensuring the progress condition, and this wasted time is not good for scalability, so it is not good for parallel programming. For the algorithm to "generalize", it also has to work correctly not only in the read-mostly case but also in cases where the writer transactions are a greater proportion of the whole of the reader and writer transactions, so my scalable and fair lock-based read-write locks are better in this regard; read below about my new invention of the Holy Grail of locks that I will use, and you will notice that it is powerful.
 
2- The second weakness of the above invention is that it is a complex
algorithm, and this complexity is not good; I think that lock-based algorithms are much simpler, so they are easier to think about and to implement.
 
I have invented a scalable algorithm that is a scalable fast Mutex, which is remarkable and which is the Holy Grail of scalable locks. It has the following characteristics; read my following thoughts to understand:
 
About fair and unfair locking..
 
I have just read the following from a lead engineer at Amazon:
 
Highly contended and fair locking in Java
 
https://brooker.co.za/blog/2012/09/10/locking.html
 
So as you are noticing, you can use unfair locking, which can suffer from starvation, or fair locking, which is slower than unfair locking.
 
I think that Microsoft synchronization objects like the Windows critical section use unfair locking, so they can still suffer from starvation.
 
But I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my scalable Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that; read about it in my following thoughts:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
I understand this video, but I have to explain my work:
 
I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example:
 
Read the following from the senior research scientist Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much, much better, and I think mine is the "best" and that you will not find it anywhere. My new scalable Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than the scalable MCS lock
7- Not prone to convoying
 
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jul 12 09:15AM -0700

Hello,
 
 
The Lazarus team is glad to announce the release of Lazarus 2.0.10
 
As expected after the release of major version 3.2 of the Free Pascal compiler, the Lazarus development team hastened to adapt its work to this new release. It has therefore just announced the availability for download of the corrective version 2.0.10 of the open-source IDE.
 
You can download Lazarus cross-platform IDE version 2.0.10 from:
 
https://www.lazarus-ide.org/
 
 
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jul 12 09:06AM -0700

Hello,
 
 
Release of version 2.0.10 of the Lazarus cross-platform IDE
 
As expected after the release of major version 3.2 of the Free Pascal compiler, the Lazarus development team hastened to adapt its work to this new release. It has therefore just announced the availability for download of the corrective version 2.0.10 of the open-source IDE. Why not directly release version 2.2, which promises substantial improvements to the IDE? Simply because it would have taken a few more months to submit the new features to the user community, while the community has been waiting for a rejuvenated and more efficient compiler since... 2017!
 
Read more here:
 
https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Flazarus.developpez.com%2Factu%2F307071%2FSortie-de-la-version-2-0-10-de-l-EDI-multiplateforme-Lazarus-Free-Pascal-3-2-sert-de-compilateur%2F
 
 
Thank you,
Amine Moulay Ramdane.
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
