Wednesday, July 1, 2020

Digest for comp.programming.threads@googlegroups.com - 4 updates in 4 topics

aminer68@gmail.com: Jun 30 02:51PM -0700

Hello,
 
 
I am a white Arab, and I think I am smart..

Now more about Transactional Memory..

Read the following about Transactional Memory:
 
https://blog.jessfraz.com/post/transactional-memory-and-tech-hype-waves/
 
And it says the following:
 
"With transactional memory you no longer have deadlocks but livelocks."
 
 
So, as you can see, Transactional Memory is not so smart: it allows composability, but it is prone to livelock.
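Livelock in optimistic schemes arises when conflicting operations keep aborting each other and retrying in lockstep; the standard mitigation is randomized exponential backoff between retries, so conflicting threads desynchronize. Here is a minimal CAS-based sketch of that idea (a generic illustration of mine, not the mechanism of any particular Transactional Memory system):

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <chrono>
#include <random>
#include <thread>

// A CAS loop as a stand-in for an optimistic "transaction": it retries on
// conflict, and sleeps for a random, exponentially growing delay between
// retries so that two conflicting threads do not retry in lockstep forever.
inline void optimistic_add(std::atomic<long>& target, long delta) {
    std::mt19937 rng{std::random_device{}()};
    long expected = target.load(std::memory_order_relaxed);
    for (int attempt = 0; ; ++attempt) {
        // Try to "commit": succeeds only if nobody changed target meanwhile.
        if (target.compare_exchange_weak(expected, expected + delta,
                                         std::memory_order_acq_rel))
            return;
        // "Abort": back off for a random, exponentially growing delay.
        int max_us = 1 << std::min(attempt, 10);
        std::this_thread::sleep_for(std::chrono::microseconds(
            std::uniform_int_distribution<>(0, max_us)(rng)));
    }
}
```

Backoff does not make livelock impossible, but it makes indefinite mutual abort vanishingly unlikely, which is why it is the usual practical answer.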
 
 
More about Intel TSX Hardware transactional memory:
 
Read my post here about it:
 
https://groups.google.com/forum/#!topic/comp.arch/6DKng-6KKVY
 
It says:
 
"TSX does not gaurantee forward progress, so there must always be a fallback non-TSX pathway. (complex transactions might always abort even without any contention because they overflow the speculation buffer. Even transactions that could run in theory might livelock forever if you don't have the right pauses to allow forward progress, so the fallback path is needed then too)."
 
So I think that Intel TSX is prone to deadlock, since it needs a fallback non-TSX pathway (TSX does not guarantee forward progress), and it has the same problem as Lock Elision: one of the benefits of Transactional Memory is that it solves the deadlock problem, but Lock Elision is prone to deadlock.
 
Here is my other new powerful invention..
 
Seqlocks are great for providing atomic access to a single value, but they don't compose. If there are many values, each with its own seqlock, and a program wants to "atomically" read from all of them, it is out of luck. To fix that, I have invented a scalable Single Sequence Multiple Data (SSMD) algorithm that permits parallel independent writes and "atomic" parallel reads from them; like a Seqlock, it is lock-free on the readers' side, and it has no livelock and is starvation-free. This new invention of mine is powerful and scalable, it works with both IO and Memory, and it resembles Transactional Memory.
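For background, here is a minimal single-value Seqlock sketch in C++ (the classic algorithm, not the SSMD algorithm described above, whose implementation I am not publishing here):

```cpp
#include <atomic>
#include <cassert>

// A minimal single-value Seqlock: the writer makes the sequence number odd
// while writing; readers retry until they observe the same even sequence
// before and after reading. Payload accesses are simplified here; a
// production version must avoid the data race on `data`.
struct Seqlock {
    std::atomic<unsigned> seq{0};
    int data[2] = {0, 0};  // the protected payload

    void write(int a, int b) {
        seq.fetch_add(1, std::memory_order_acquire);  // seq becomes odd
        data[0] = a;
        data[1] = b;
        seq.fetch_add(1, std::memory_order_release);  // seq even again
    }

    // Lock-free on the reader side: readers never block the writer, but may
    // spin forever under heavy write activity (the livelock/starvation risk).
    void read(int& a, int& b) const {
        unsigned s0, s1;
        do {
            s0 = seq.load(std::memory_order_acquire);
            a = data[0];
            b = data[1];
            s1 = seq.load(std::memory_order_acquire);
        } while (s0 != s1 || (s0 & 1) != 0);  // torn or in-progress: retry
    }
};
```

Note that with two such Seqlocks there is no way to obtain one consistent snapshot across both; that is the composability gap discussed above.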
 
More about my inventions and about Locks..
 
I have just read the following thoughts of a PhD researcher, and he says the following:
 
"4) using locks is prone to convoying effects;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
 
This PhD researcher is not so smart; notice that he is saying:
 
"4) using locks is prone to convoying effects;"
 
And I think he is not right, because I have invented the Holy Grail
of Locks, and it is not prone to convoying. Read my following writing
about it:
 
----------------------------------------------------------------------
 
You have to understand deeply what it is to invent my scalable algorithms and their implementations in order to understand that they are powerful. I will give you an example: I have invented a scalable algorithm that is a scalable Mutex, a remarkable one that is the Holy Grail of scalable Locks. It has the following characteristics; read my following thoughts to understand:
 
About fair and unfair locking..
 
I have just read the following from a lead engineer at Amazon:
 
Highly contended and fair locking in Java
 
https://brooker.co.za/blog/2012/09/10/locking.html
 
So, as you can see, you can use unfair locking, which can have starvation, or fair locking, which is slower than unfair locking.
 
I think that Microsoft synchronization objects like the Windows critical section use unfair locking, so they can still have starvation.
 
But I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts:
 
More about research and software development..
 
I have just looked at the following new video:
 
Why is coding so hard...
 
https://www.youtube.com/watch?v=TAAXwrgd1U8
 
I understand this video, but I have to explain my work:
 
I am not like this techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I also invent effective abstractions. I will give you an example:
 
Read the following from the senior research scientist called Dave Dice:
 
Preemption tolerant MCS locks
 
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks
 
As you can see, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have invented a new Fast Mutex that is adaptive and much better; I think mine is the "best", and I think you will not find it anywhere else. My new Fast Mutex has the following characteristics:
 
1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic efficient and very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than a scalable MCS lock
7- Not prone to convoying
------------------------------------------------------------------------------
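To make the fairness discussion above concrete, here is a classic ticket lock sketch in C++ (a standard textbook algorithm shown only for comparison; it is not my Fast Mutex):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// A classic ticket lock: strictly FIFO, hence starvation-free. Its weakness
// is that every waiter spins on the same `serving` counter, so its
// cache-coherence traffic grows with contention; avoiding exactly that
// traffic is the point of MCS-style queue locks.
class TicketLock {
    std::atomic<unsigned> next{0};     // ticket dispenser
    std::atomic<unsigned> serving{0};  // "now serving" counter
public:
    void lock() {
        unsigned my = next.fetch_add(1, std::memory_order_relaxed);
        while (serving.load(std::memory_order_acquire) != my)
            std::this_thread::yield();  // waiters acquire in FIFO order
    }
    void unlock() {
        serving.fetch_add(1, std::memory_order_release);  // hand off to next
    }
};
```

This shows the two extremes being discussed: a ticket lock is perfectly fair but pays for it in coherence traffic, while an unfair test-and-set lock is fast but can starve waiters.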
 
 
Also he is saying the following:
 
"1) if we use more than one lock, we're subject to having deadlock"
 
 
But you have to look here at our DelphiConcurrent and FreepascalConcurrent:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
 
And here is my new invention..
 
 
I think a Seqlock is also a high-performance but restricted form of software Transactional Memory.
 
So I have just read about Seqlocks here on Wikipedia:
 
https://en.wikipedia.org/wiki/Seqlock
 
And it says about Seqlock:
 
"The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve)."
 
 
So I have just invented a variant of the Seqlock that has no livelock (even when there is too much write activity or the reader is too slow) and that is starvation-free.
 
So I think that my new invention, a variant of the Seqlock, is powerful.
 
And more now about Lock-free, Wait-free, and Locks..
 
I have just read the following thoughts of a PhD researcher, and he says the following:
 
"5) mutual exclusion locks don't scale for read-only operations, it takes a reader-writer lock to have some scalability for read-only operations and even then, we either execute read-only operations or one write, but never both at the same time. Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
 
But I think that he is not right in saying the following:
 
"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees"
 
Because I am an inventor of many scalable algorithms and their implementations, and I have invented scalable and efficient
starvation-free reader-writer locks; read my following thoughts below
to see it..
 
Also look at his following webpage:
 
OneFile - The world's first wait-free Software Transactional Memory
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html
 
But I think he is not right; read the following thoughts that I have just posted, which apply to wait-free and lock-free:
 
https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic
 
 
And read all my following thoughts to understand:
 
About Lock elision and Transactional memory..
 
I have just read the following:
 
Lock elision in the GNU C library
 
https://lwn.net/Articles/534758/
 
So it says the following:
 
"Lock elision uses the same programming model as normal locks, so it can be directly applied to existing programs. The programmer keeps using locks, but the locks are faster as they can use hardware transactional memory internally for more parallelism. Lock elision uses memory transactions as a fast path, while the slow path is still a normal lock. Deadlocks and other classic locking problems are still possible, because the transactions may fall back to a real lock at any time."
 
So I think this is not good, because one of the benefits of Transactional Memory is that it solves the deadlock problem, but
Lock Elision is prone to deadlock.
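The fast-path/slow-path structure described in the quote above can be sketched as follows. Note that try_speculate() is a hypothetical stub of mine standing in for a hardware transaction (for example TSX's _xbegin/_xend); here it always aborts, which is a legal TSX outcome since forward progress is not guaranteed:

```cpp
#include <cassert>
#include <mutex>

// Sketch of the lock-elision control flow: try the speculative fast path a
// few times, then fall back to a real lock. Because the fallback lock is
// mandatory, the classic lock hazards (deadlock among them) return with it.
static bool try_speculate() { return false; }  // stub: transaction aborts

struct ElidedLock {
    std::mutex fallback;
    int aborts = 0;  // counts aborted speculative attempts

    template <class F>
    void critical_section(F body) {
        for (int attempt = 0; attempt < 3; ++attempt) {
            if (try_speculate()) {  // fast path: lock is elided
                body();
                return;
            }
            ++aborts;
        }
        std::lock_guard<std::mutex> g(fallback);  // slow path: real lock
        body();
    }
};
```

With a real hardware transaction the fast path would also have to abort the transaction whenever the fallback lock is observed held, so that elided and non-elided critical sections stay mutually exclusive.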
 
More about Locks and Transactional memory..
 
I have just looked at the following webpage about understanding Transactional memory performance:
 
https://www.cs.utexas.edu/users/witchel/pubs/porter10ispass-tm-slides.pdf
 
And as you can see, it says that in practice Transactional Memory
is worse than Locks at high contention: in practice, Transactional Memory is 40% worse than Locks at 100% contention.
 
This is why I have invented scalable Locks and scalable RWLocks; read
my following thoughts to see it:
 
 
About beating Moore's Law with software..
 
bmoore has responded to me in the following thread:
 
https://groups.google.com/forum/#!topic/soc.culture.china/Uu15FIknU0s
 
As you can see, he is asking me the following:
 
"Are you talking about beating Moore's Law with software?"
 
But I think that there are some constraints, such as the following:
 
"Modern programing environments contribute to the problem of software bloat by placing ease of development and portable code above speed or memory usage. While this is a sound business model in a commercial environment, it does not make sense where IT resources are constrained. Languages such as Java, C-Sharp, and Python have opted for code portability and software development speed above execution speed and memory usage, while modern data storage and transfer standards such as XML and JSON place flexibility and readability above efficiency."
 
Read the following:
 
https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies
 
There also remains the following way to beat Moore's Law:
 
"Improved Algorithms
 
Hardware improvements mean little if software cannot effectively use the resources available to it. The Army should shape future software algorithms by funding basic research on improved software algorithms to meet its specific needs. The Army should also search for new algorithms and techniques which can be applied to meet specific needs and develop a learning culture within its software community to disseminate this information."
 
 
And about scalable algorithms: as you know, I am an inventor of many scalable algorithms and their implementations; read my following thoughts to see it:
 
About my new invention that is a scalable algorithm..
 
I have again just invented a new scalable algorithm, but first I will briefly talk about the following best scalable reader-writer lock inventions. The first one is this:
 
Scalable Read-mostly Synchronization Using Passive Reader-Writer Locks
 
https://www.usenix.org/system/files/conference/atc14/atc14-paper-liu.pdf
 
You will notice that its first weakness is that it is for the TSO hardware memory model, and its second weakness is that the writers' latency is very expensive even when there are few readers.
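To illustrate the writer-latency weakness, here is a simplified illustration of mine (not the paper's algorithm) of a reader-writer lock with distributed per-slot reader counters: readers touch only their own slot, which is what makes the read side scale, but the writer must wait for every slot to drain, so writer latency stays expensive even with few readers:

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <thread>

// Each reader increments only its own slot's counter; the writer flags
// itself and then scans ALL slots until they drain. Slot assignment (e.g.
// by thread id hash) is left to the caller in this sketch.
constexpr int kSlots = 8;

struct DistributedRWLock {
    std::array<std::atomic<int>, kSlots> readers;
    std::atomic<bool> writer{false};

    DistributedRWLock() {
        for (auto& r : readers) r.store(0, std::memory_order_relaxed);
    }

    void read_lock(int slot) {
        for (;;) {
            readers[slot].fetch_add(1, std::memory_order_acquire);
            if (!writer.load(std::memory_order_acquire)) return;
            readers[slot].fetch_sub(1, std::memory_order_release);  // back off
            while (writer.load(std::memory_order_acquire))
                std::this_thread::yield();
        }
    }
    void read_unlock(int slot) {
        readers[slot].fetch_sub(1, std::memory_order_release);
    }

    void write_lock() {
        bool expected = false;
        while (!writer.compare_exchange_weak(expected, true,
                                             std::memory_order_acquire))
            expected = false;
        for (auto& r : readers)  // the costly part: drain all reader slots
            while (r.load(std::memory_order_acquire) != 0)
                std::this_thread::yield();
    }
    void write_unlock() {
        writer.store(false, std::memory_order_release);
    }
};
```

The read side touches one cache line per reader, but write_lock must visit every slot even when no readers exist; that scan is the structural source of the expensive writer latency.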
 
And here is the other best scalable reader-writer lock invention, from Facebook:
 
SharedMutex is a reader-writer lock. It is small, very fast, scalable
on multi-core
 
Read here:
 
https://github.com/facebook/folly/blob/master/folly/SharedMutex.h
 
 
But you will notice that the weakness of this scalable reader-writer lock is that the priority can only be configured as one of the following:
 
SharedMutexReadPriority gives priority to readers,
SharedMutexWritePriority gives priority to writers.
 
 
So the weakness of this scalable reader-writer lock is that
you can have starvation with it.
 
So this is why I have just invented a scalable reader-writer lock that is better than the above: it is starvation-free, it is fair, and it has a small writer latency.
 
So I think mine is the best, and I will sell many of my scalable algorithms to software companies such as Microsoft or Google or Embarcadero..
 
 
What is it to be an inventor of many scalable algorithms?
 
The Holy Grail of parallel programming is to provide good speedup while
hiding or avoiding the pitfalls of concurrency. You have to understand it to be able to understand what I am doing: I am an inventor of many scalable algorithms and their implementations. But how can we define my kind of inventor? I think there are the following kinds of inventors: the ones that are PhD researchers and inventors, like Albert Einstein, and the ones that are engineers and inventors, like Nikola Tesla. I think that I am of the Nikola Tesla kind of inventor; I am not a PhD researcher like Albert Einstein, I am like an engineer who invented many scalable algorithms and their implementations. So I am like the following inventor that we call Nikola Tesla:
 
https://en.wikipedia.org/wiki/Nikola_Tesla
 
But I think that both kinds, those PhD researchers who are inventors and those engineers who are inventors, are powerful.
 
This is how I am an "inventor", and I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak
aminer68@gmail.com: Jun 30 01:47PM -0700

Hello,
 
 
 
This is how I am an "inventor", and I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references; I have invented a fully scalable Threadpool, and I have also invented a fully scalable FIFO queue, and I have
also invented other scalable algorithms and their implementations. I think I will sell some of them to Microsoft or Google or Embarcadero or similar software companies.
 
And here is another of my previous inventions of a scalable algorithm:
 
I have just read the following PhD paper about the
aminer68@gmail.com: Jun 30 10:27AM -0700

Hello,
 
 
 
Read again; I am correcting: here is my other new powerful invention..
 
Seqlocks are great for providing atomic access to a single value, but they don't compose. If there are many values each with their own seqlock, and a program wants to "atomically" read from them, it's out of luck. To fix that, i have just invented a scalable Single Sequence Multiple Data (SSMD) that permits to perform parallel independent writes and "atomically" parallel read from them, and like Seqlock it is lockfree on the readers side and and it has no livelock and it is starvation-free. This just new invention of mine is "powerful" and it is scalable and it works with both IO and Memory, but it looks like Transactional Memory.
 
More about my inventions and about Locks..
 
I have just read the following thoughts of a PhD researcher, and he says the following:
 
"4) using locks is prone to convoying effects;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
 
I am a white arab and I am smart like a genius, and this PhD researcher is not so smart; notice that he is saying:
 
"4) using locks is prone to convoying effects;"
 
And I think he is not right, because I have invented the Holy Grail of Locks, and it is not prone to convoying. Read my following writing about it:
 
----------------------------------------------------------------------
 
You have to understand deeply what it is to invent my scalable algorithms and their implementations in order to understand that it is powerful. I will give you an example: I have invented a scalable algorithm that is a scalable Mutex that is remarkable and that is the Holy Grail of scalable Locks. It has the following characteristics; read my following thoughts to understand:
 
About fair and unfair locking..
 
I have just read the following from a lead engineer at Amazon:
 
Highly contended and fair locking in Java
 
https://brooker.co.za/blog/2012/09/10/locking.html
 
So as you can notice, you can use unfair locking, which can have starvation, or fair locking, which is slower than unfair locking.
 
I think that Microsoft synchronization objects like the Windows critical section use unfair locking, but they can still have starvation.
 
But I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful, because with my Fast Mutex you are capable of tuning the "fairness" of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts:
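To illustrate the fair-versus-unfair trade-off, here is a minimal Python sketch of the two textbook approaches (this is only an illustration, not my Fast Mutex); the guard lock stands in for the atomic instructions a real implementation would use:

```python
import itertools
import threading

class TestAndSetLock:
    """Unfair spin lock: whichever thread wins the race enters next,
    so under contention a fast thread can win repeatedly (starvation),
    but the uncontended fast path is very cheap."""
    def __init__(self):
        self._held = False
        self._guard = threading.Lock()  # stands in for an atomic test-and-set

    def acquire(self):
        while True:
            with self._guard:
                if not self._held:
                    self._held = True
                    return

    def release(self):
        with self._guard:
            self._held = False


class TicketLock:
    """Fair FIFO spin lock: threads enter strictly in ticket order, so no
    thread can starve, at the cost of serializing all waiters (slower under
    heavy contention and sensitive to preemption of the ticket holder)."""
    def __init__(self):
        self._tickets = itertools.count()
        self._now_serving = 0
        self._guard = threading.Lock()  # stands in for an atomic fetch-and-add

    def acquire(self):
        with self._guard:
            my_ticket = next(self._tickets)
        while True:                      # spin until it is our turn
            with self._guard:
                if self._now_serving == my_ticket:
                    return

    def release(self):
        with self._guard:
            self._now_serving += 1
```

The ticket lock is what "fair" means here: first to ask, first served; the test-and-set lock is what Windows-style unfair locking looks like in miniature.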
 
 
1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic efficient and very low
4- Very good fast-path performance
5- Good preemption tolerance
6- Faster than the scalable MCS lock
7- Not prone to convoying.
------------------------------------------------------------------------------
 
 
Also he is saying the following:
 
"1) if we use more than one lock, we're subject to having deadlock"
 
 
But you have to look here at our DelphiConcurrent and FreepascalConcurrent:
 
https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent
 
 
And here is my new invention..
 
 
I think a Seqlock is also a high-performance but restricted form of software Transactional Memory.
 
I have just read about Seqlocks here on Wikipedia:
 
https://en.wikipedia.org/wiki/Seqlock
 
And it says about Seqlock:
 
"The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve)."
 
 
I am a white arab, and I think I am smart, so I have just invented a variant of the Seqlock that has no livelock (even when there is too much write activity or the reader is too slow) and that is starvation-free.
 
So I think my new invention, a variant of the Seqlock, is powerful.
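To make the Seqlock discussion concrete, here is a minimal Python sketch of the classic seqlock as described on Wikipedia (not my variant); it shows exactly where the reader livelock comes from: readers retry whenever the sequence counter is odd or changes during their read.

```python
import threading

class SeqLock:
    """Classic seqlock sketch. The writer bumps the sequence to an odd
    value while writing and back to even when done; readers retry until
    they observe the same even sequence before and after reading."""
    def __init__(self, value=0):
        self._seq = 0
        self._value = value
        self._write_lock = threading.Lock()  # serializes writers only

    def write(self, value):
        with self._write_lock:
            self._seq += 1        # odd: a write is in progress
            self._value = value
            self._seq += 1        # even again: value is stable

    def read(self):
        while True:
            before = self._seq
            if before % 2 == 1:
                continue          # writer active: retry (the livelock source)
            value = self._value
            if self._seq == before:
                return value      # nothing changed while we were reading
```

Readers never block writers and take no lock at all, which is the appeal; but under constant write activity the `read` loop can spin forever, which is the documented drawback quoted above.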
 
And more now about Lockfree, Waitfree, and Locks..
 
I have just read the following thoughts of a PhD researcher, and he says the following:
 
"5) mutual exclusion locks don't scale for read-only operations, it takes a reader-writer lock to have some scalability for read-only operations and even then, we either execute read-only operations or one write, but never both at the same time. Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"
 
Read more here:
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html
 
 
But I think that he is not right when he says the following:
 
"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees"
 
Because I am an inventor of many scalable algorithms and their implementations, and I have invented scalable and efficient starvation-free reader-writer locks. Read my following thoughts below to notice it..
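My own reader-writer lock is not posted here, but to illustrate that starvation-freedom in a reader-writer lock is achievable at all, here is a hedged Python sketch of one known construction (not my algorithm): both readers and writers are served in FIFO ticket order, and consecutive readers are allowed to overlap.

```python
import threading

class FairRWLock:
    """Ticket-based reader-writer lock sketch: readers and writers enter
    strictly in the order they asked, so neither side can starve."""
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0      # next ticket to hand out
        self._now_serving = 0      # lowest ticket allowed to enter
        self._active_readers = 0

    def acquire_read(self):
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            while ticket != self._now_serving:
                self._cond.wait()          # wait for all earlier tickets
            self._active_readers += 1
            self._now_serving += 1         # let the next ticket in: readers overlap
            self._cond.notify_all()

    def release_read(self):
        with self._cond:
            self._active_readers -= 1
            self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            # wait for our turn AND for in-flight readers to drain
            while ticket != self._now_serving or self._active_readers > 0:
                self._cond.wait()

    def release_write(self):
        with self._cond:
            self._now_serving += 1         # writer held its ticket until now
            self._cond.notify_all()
```

A writer's ticket blocks every later reader, and earlier readers must drain before the writer enters, so a steady stream of readers can no longer starve writers (and vice versa). The cost is that readers arriving after a writer wait even when the lock is read-held, which is the fairness/throughput trade-off discussed above.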
 
Also look at his following webpage:
 
OneFile - The world's first wait-free Software Transactional Memory
 
http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html
 
But I think he is not right; read the following thoughts that I have just posted, which apply to waitfree and lockfree:
 
https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic
 
 
And read all my following thoughts to understand:
 
About Lock elision and Transactional memory..
 
I have just read the following:
 
Lock elision in the GNU C library
 
https://lwn.net/Articles/534758/
 
So it says the following:
 
"Lock elision uses the same programming model as normal locks, so it can be directly applied to existing programs. The programmer keeps using locks, but the locks are faster as they can use hardware transactional memory internally for more parallelism. Lock elision uses memory transactions as a fast path, while the slow path is still a normal lock. Deadlocks and other classic locking problems are still possible, because the transactions may fall back to a real lock at any time."
 
So I think this is not good, because one of the benefits of Transactional memory is that it solves the deadlock problem, but with Lock elision you bring back the deadlock problem.
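To illustrate the lock-elision pattern the article describes, here is a minimal Python sketch that emulates it in software: an optimistic, transaction-like fast path that falls back to the real lock after repeated aborts. The "hardware transaction" is only emulated here with a guard lock and a version counter; the point is the structure, which shows why elision inherits the deadlock behaviour of its fallback lock.

```python
import threading

class ElidableCounter:
    """Lock-elision-shaped counter: try an optimistic commit a few times,
    then fall back to acquiring the real lock, as glibc elision does."""
    MAX_ATTEMPTS = 3

    def __init__(self):
        self._lock = threading.Lock()   # the fallback "real" lock
        self._version = 0               # even = stable, odd = write in progress
        self._value = 0

    def increment(self):
        for _ in range(self.MAX_ATTEMPTS):
            seen = self._version
            if seen % 2 == 1:
                continue                 # a writer is active: abort and retry
            if self._try_commit(seen):
                return                   # fast path succeeded
        # slow path: take the real lock, so deadlock is again possible
        with self._lock:
            self._version += 1           # odd: make the write visible
            self._value += 1
            self._version += 1           # even: stable again

    def _try_commit(self, seen_version):
        # stands in for a hardware transaction's atomic commit
        with self._lock:
            if self._version != seen_version:
                return False             # conflict detected: abort
            self._value += 1
            self._version += 2           # stays even: no writer exposed
            return True
```

Because any transaction may abort and fall through to `with self._lock:`, every locking hazard of the fallback path (including deadlock, when several such locks are nested) remains reachable, which is exactly the LWN article's caveat.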
 
More about Locks and Transactional memory..
 
I have just looked at the following webpage about understanding Transactional memory performance:
 
https://www.cs.utexas.edu/users/witchel/pubs/porter10ispass-tm-slides.pdf
 
And as you can notice, it says that in practice Transactional memory is worse than Locks at high contention, and that in practice Transactional memory is 40% worse than Locks at 100% contention.
 
This is why I have invented scalable Locks and scalable RWLocks; read my following thoughts to notice it:
 
 
About beating Moore's Law with software..
 
bmoore has responded to me with the following:
 
https://groups.google.com/forum/#!topic/soc.culture.china/Uu15FIknU0s
 
So as you can notice, he is asking me the following:
 
"Are you talking about beating Moore's Law with software?"
 
But I think that there are the following constraints:
 
"Modern programing environments contribute to the problem of software bloat by placing ease of development and portable code above speed or memory usage. While this is a sound business model in a commercial environment, it does not make sense where IT resources are constrained. Languages such as Java, C-Sharp, and Python have opted for code portability and software development speed above execution speed and memory usage, while modern data storage and transfer standards such as XML and JSON place flexibility and readability above efficiency."
 
Read the following:
 
https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies
 
Also, there remains the following way to also beat Moore's Law:
 
"Improved Algorithms
 
Hardware improvements mean little if software cannot effectively use the resources available to it. The Army should shape future software algorithms by funding basic research on improved software algorithms to meet its specific needs. The Army should also search for new algorithms and techniques which can be applied to meet specific needs and develop a learning culture within its software community to disseminate this information."
 
 
And about scalable algorithms: as you know, I am a white arab who is an inventor of many scalable algorithms and their implementations. Read my following thoughts to notice it:
 
About my new invention that is a scalable algorithm..
 
I am a white arab, and I think I am smarter, and I think I am like a genius, because I have again just invented a new scalable algorithm. But first, I will briefly talk about the best scalable reader-writer lock inventions; the first one is the following:
 
Scalable Read-mostly Synchronization Using Passive Reader-Writer Locks
 
https://www.usenix.org/system/files/conference/atc14/atc14-paper-liu.pdf
 
You will notice that its first weakness is that it is for the TSO hardware memory model, and its second weakness is that the writers' latency is very expensive when there are few readers.
 
And here is the other best scalable reader-writer lock invention, from Facebook:
 
SharedMutex is a reader-writer lock. It is small, very fast, scalable
on multi-core
 
Read here:
 
https://github.com/facebook/folly/blob/master/folly/SharedMutex.h
 
 
But you will notice that the weakness of this scalable reader-writer lock is that the priority can only be configured as follows:
 
SharedMutexReadPriority gives priority to readers,
SharedMutexWritePriority gives priority to writers.
 
 
So the weakness of this scalable reader-writer lock is that you can have starvation with it.
 
So this is why I have just invented a scalable algorithm that is a scalable reader-writer lock that is better than the above, and that is starvation-free, fair, and has a small writers' latency.
 
So I think mine is the best, and I will sell many of my scalable algorithms to software companies such as Microsoft or Google or Embarcadero..
 
 
What is it to be an inventor of many scalable algorithms?
 
The Holy Grail of parallel programming is to provide good speedup while hiding or avoiding the pitfalls of concurrency. You have to understand it to be able to understand what I am doing; I am an inventor of many scalable algorithms and their implementations. But how can we define the kind of inventor like me? I think there are the following kinds of inventors: the ones who are PhD researchers and inventors, like Albert Einstein, and the ones who are engineers and inventors, like Nikola Tesla. I think that I am of the kind of Nikola Tesla; I am not a PhD researcher like Albert Einstein, I am like an engineer who invented many scalable algorithms and their implementations. So I am like the following inventor that we call Nikola Tesla:
 
https://en.wikipedia.org/wiki/Nikola_Tesla
 
But I think that both those PhD researchers who are inventors and those engineers who are inventors are powerful.
 
 
And here is my other previous new invention of a scalable algorithm:
 
I have just read the following PhD paper about the invention that we call counting networks, and they are better than Software combining trees:
 
Counting Networks
 
http://people.csail.mit.edu/shanir/publications/AHS.pdf
 
And i have read the following PhD paper:
 
http://people.csail.mit.edu/shanir/publications/HLS.pdf
 
So as you can notice, they are saying in the conclusion that:
 
"Software combining trees and counting networks which are the only techniques we observed to be truly scalable"
 
But I have just found that this counting networks algorithm is not generally scalable, and I have the logical proof here. This is why I have just come up with a new invention that enhances the counting networks algorithm to be generally scalable. And I think I will sell my new algorithm of generally scalable counting networks to Microsoft or Google or Embarcadero or such software companies.
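To show what a counting network actually is, here is a minimal Python sketch of the smallest case from the Aspnes/Herlihy/Shavit paper: a width-2 network, which is a single balancer feeding two counters (wider networks use a bitonic layout of many balancers). The guard locks stand in for the atomic operations a real implementation would use; this is an illustration of the idea, not my enhanced algorithm.

```python
import threading

class Balancer:
    """A 2-wire balancer: routes tokens alternately to wire 0 and wire 1."""
    def __init__(self):
        self._toggle = 0
        self._guard = threading.Lock()  # stands in for an atomic fetch-and-complement

    def traverse(self):
        with self._guard:
            wire = self._toggle
            self._toggle ^= 1
            return wire


class Counting2:
    """Width-2 counting network: one balancer feeding two local counters.
    Wire i hands out the values i, i+2, i+4, ...; taken together the network
    behaves like a shared counter, but threads routed to different wires
    touch different counters, which is where the scalability comes from."""
    def __init__(self):
        self._balancer = Balancer()
        self._counters = [0, 1]
        self._locks = [threading.Lock(), threading.Lock()]

    def get_and_increment(self):
        wire = self._balancer.traverse()
        with self._locks[wire]:
            value = self._counters[wire]
            self._counters[wire] += 2
            return value
```

With a single balancer the toggle bit itself is still a hot spot; the papers above replace it with a whole bitonic network of balancers so that no single memory location sees all the traffic.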
aminer68@gmail.com: Jun 30 10:22AM -0700

Hello,
 
 
I am a white arab, and I think I am smart like a genius.
 
Here is my other just new powerful invention..
 
Seqlocks are great for providing atomic access to a single value, but they don't compose. If there are many values, each with its own seqlock, and a program wants to "atomically" read from them, it's out of luck. To fix that, I have invented a scalable Single Sequence Multiple Data (SSMD) that permits performing parallel independent writes and "atomically" parallel reads from them, and like a Seqlock it is lockfree on the readers' side, and it has no livelock and it is starvation-free. This just new invention of mine is "powerful", it is scalable, and it works with both IO and Memory, but it looks like Transactional Memory.
 
 
So you have to be careful
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com.
