- About Lock-free versus Lock-based algorithms.. - 1 Update
- More about Lock-free algorithms.. - 1 Update
- Fault-Tolerant Avionics - 1 Update
aminer68@gmail.com: Jul 16 02:36PM -0700

Hello,

About Lock-free versus Lock-based algorithms..

I am a white arab, and I think I am smart, since I have invented many scalable algorithms and their implementations, but today I will say the following: I think those Lock-free algorithms are a bad idea, because they are too complex and too error-prone. Beyond that, read my following post to understand more about Lock-free versus Lock-based:

https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic

So the best way is to use my new powerful invention below, the Holy Grail of Locks; read my following thoughts to understand.

I have just read the following IBM Research Report about locks and convoying:

The convoy phenomenon
https://blog.acolyer.org/2019/07/01/the-convoy-phenomenon/

And I think that it is not so smart, because I am a white arab that is smart like a genius, and I have invented the Holy Grail of Locks that is more powerful than the above: it is a scalable Fast Mutex that is faster than the scalable MCS lock. Read about it in my following thoughts.

I have invented a scalable algorithm that is a scalable Fast Mutex, which is remarkable and is the Holy Grail of scalable locks. It has the characteristics listed below; read my following thoughts to understand.

About fair and unfair locking..

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java
https://brooker.co.za/blog/2012/09/10/locking.html

So as you notice, you can use unfair locking, which can suffer from starvation, or fair locking, which is slower than unfair locking. I think that Microsoft synchronization objects such as the Windows critical section use unfair locking, and they can still have starvation.

But I think this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my scalable Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts.

More about research and software development..

I have just watched the following new video:

Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like this techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I also invent effective abstractions. I will give you an example. Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and that is much, much better, and I think mine is the "best", and I think you will not find it anywhere. My new scalable Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It efficiently keeps its cache-coherence traffic very low
4- Very good fast-path performance
5- It has good preemption tolerance
6- It is faster than the scalable MCS lock
7- Good at convoy-avoidance
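For reference, since the scalable MCS lock is the baseline being compared against throughout this post, here is a minimal C++11 sketch of the classic MCS queue lock of Mellor-Crummey and Scott. This is the textbook algorithm, not the Fast Mutex described above (whose source is not shown in this digest); the class and function names are illustrative, and the yield() calls in the spin loops are a simplification, not a preemption-tolerance mechanism. Each waiter spins only on its own queue node, which is what keeps cache-coherence traffic low, and the hand-off is FIFO, which is what makes it fair:

```cpp
// Classic MCS queue lock (Mellor-Crummey & Scott) -- illustrative sketch only,
// NOT the Fast Mutex described in the post above.
#include <atomic>
#include <thread>
#include <vector>
#include <cstdio>

struct MCSNode {
    std::atomic<MCSNode*> next{nullptr};
    std::atomic<bool>     locked{false};
};

class MCSLock {
    std::atomic<MCSNode*> tail{nullptr};
public:
    void lock(MCSNode& me) {
        me.next.store(nullptr, std::memory_order_relaxed);
        me.locked.store(true, std::memory_order_relaxed);
        MCSNode* prev = tail.exchange(&me, std::memory_order_acq_rel);
        if (prev != nullptr) {               // queue was not empty: link in and wait
            prev->next.store(&me, std::memory_order_release);
            while (me.locked.load(std::memory_order_acquire))
                std::this_thread::yield();   // spin on our own node only
        }
    }
    void unlock(MCSNode& me) {
        MCSNode* succ = me.next.load(std::memory_order_acquire);
        if (succ == nullptr) {
            MCSNode* expected = &me;         // no visible successor: try to clear the tail
            if (tail.compare_exchange_strong(expected, nullptr, std::memory_order_acq_rel))
                return;
            while ((succ = me.next.load(std::memory_order_acquire)) == nullptr)
                std::this_thread::yield();   // a successor is mid-enqueue; wait for its link
        }
        succ->locked.store(false, std::memory_order_release); // FIFO hand-off
    }
};

int main() {
    MCSLock lock;
    long counter = 0;
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                MCSNode node;                // one queue node per acquisition
                lock.lock(node);
                ++counter;                   // critical section
                lock.unlock(node);
            }
        });
    for (auto& th : threads) th.join();
    std::printf("counter = %ld\n", counter);
    return 0;
}
```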
And about composability of lock-based systems now: Design your systems to be composable.

Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."[9] The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.

Read more here:

https://queue.acm.org/detail.cfm?id=1454462
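As a concrete illustration of the second approach described in the excerpt above (no global state, per-instance state only, locking left to the client), here is a small C++ sketch. It is my own illustrative example, not the Solaris AVL tree code the excerpt refers to, and the names (OrderedSet, SubsystemA, SubsystemB) are hypothetical:

```cpp
// Per-instance state, client-side locking: illustrative sketch, not the Solaris AVL code.
#include <mutex>
#include <set>
#include <thread>
#include <vector>
#include <cstddef>
#include <cstdio>

// The reusable component: nothing static, nothing global, no locks of its own,
// so it composes into whatever locking scheme its caller already has.
class OrderedSet {
    std::set<int> items_;                      // per-instance state only
public:
    void insert(int v)         { items_.insert(v); }
    bool contains(int v) const { return items_.count(v) != 0; }
    std::size_t size() const   { return items_.size(); }
};

// One consumer subsystem: it owns an instance and serializes access with its own mutex.
struct SubsystemA {
    std::mutex mtx;
    OrderedSet set;
    void record(int v) {
        std::lock_guard<std::mutex> g(mtx);    // client-side locking
        set.insert(v);
    }
};

// Another, completely independent consumer with its own instance and its own lock.
struct SubsystemB {
    std::mutex mtx;
    OrderedSet set;
    void record(int v) {
        std::lock_guard<std::mutex> g(mtx);
        set.insert(v);
    }
};

int main() {
    SubsystemA a;
    SubsystemB b;
    std::vector<std::thread> threads;
    // Both subsystems use OrderedSet concurrently; neither knows about the other's lock.
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([&, t] {
            for (int i = 0; i < 1000; ++i) {
                a.record(t * 1000 + i);
                b.record(-(t * 1000 + i));
            }
        });
    for (auto& th : threads) th.join();
    std::printf("A holds %zu items, B holds %zu items\n", a.set.size(), b.set.size());
    return 0;
}
```

Because OrderedSet owns no global or static state, the two subsystems compose freely; the only rule is the one stated in the excerpt, namely that each instance must be serialized by its own consumer.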
And you have to look here at our DelphiConcurrent and FreepascalConcurrent:

https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

Thank you,
Amine Moulay Ramdane.

aminer68@gmail.com: Jul 16 10:00AM -0700

Hello,

More about Lock-free algorithms..

I am a white arab, and I think I am smart, and now I will speak again about Lock-free algorithms. Here are the advantages of Lock-free algorithms:

- Thread-killing immunity: any thread forcefully killed in the system won't delay other threads.
- Signal immunity: the C and C++ standards prohibit signal handlers and asynchronous interrupts from calling many system routines such as malloc. If a signal handler calls malloc at the same time as the interrupted thread, that could cause deadlock. With lock-free routines there is no such problem: threads can freely interleave execution.
- Priority-inversion immunity: priority inversion occurs when a low-priority thread holds a lock to a mutex needed by a high-priority thread. Such tricky conflicts must be resolved by the OS kernel. Wait-free and lock-free algorithms are immune to such problems.
- Lock-free algorithms are good at convoy-avoidance.

But I have thought about it more, and I think that the most important things to have are a bounded Lock-free LIFO stack and a bounded Lock-free FIFO queue, since I also think that convoy-avoidance is one of the most important properties, and I think that fine-grained lock-based Hashtables and Skiplists are still good at convoy-avoidance. For the rest about Lock-free versus Lock-based, read my following post:

https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic

More about my new invention of a lock-free bounded LIFO stack algorithm..

I have just invented a lock-free bounded LIFO stack algorithm and I have just made it work correctly in only one day, so I think version 1.04 is stable now. I think that my new lock-free bounded LIFO stack algorithm is really useful because it is not complicated, so it is easy to reason about, and it doesn't need ABA prevention, it doesn't need Hazard pointers, and it doesn't have false sharing. Please look at its source code inside LockfreeStackBounded.pas inside the zipfile; in my next posts I will give you all the explanation of my new algorithm.

The Lockfree bounded LIFO stack and FIFO queue were updated to version 1.04. You can read about them and download them from my website here:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue
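The source of the bounded LIFO stack and FIFO queue mentioned above lives in the linked zip file and is not reproduced in this post, so the following C++11 sketch is not that algorithm. It shows a far simpler bounded lock-free FIFO, the classic single-producer/single-consumer ring buffer, only to illustrate the general point made above: a bounded, pre-allocated design avoids node reuse and dynamic allocation, and with them the ABA problem and hazard pointers, and the alignas padding is one common way to avoid false sharing between the producer's and the consumer's counters:

```cpp
// Minimal single-producer / single-consumer bounded lock-free FIFO (ring buffer).
// Illustrative sketch only; NOT the algorithm from LockfreeStackBounded.pas.
#include <atomic>
#include <cstddef>
#include <thread>
#include <cstdio>

template <typename T, std::size_t Capacity>
class SpscBoundedQueue {
    static_assert((Capacity & (Capacity - 1)) == 0, "Capacity must be a power of two");
    T buffer_[Capacity];                                 // fixed, pre-allocated storage
    alignas(64) std::atomic<std::size_t> head_{0};       // read position  (consumer)
    alignas(64) std::atomic<std::size_t> tail_{0};       // write position (producer)
public:
    // Called by the single producer only.
    bool push(const T& v) {
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == Capacity)
            return false;                                // full
        buffer_[t & (Capacity - 1)] = v;
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
    // Called by the single consumer only.
    bool pop(T& out) {
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;                                // empty
        out = buffer_[h & (Capacity - 1)];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
};

int main() {
    SpscBoundedQueue<int, 1024> q;
    std::thread producer([&] {
        for (int i = 0; i < 100000; ++i)
            while (!q.push(i)) std::this_thread::yield();   // retry while full
    });
    long long sum = 0;
    int v, received = 0;
    while (received < 100000)
        if (q.pop(v)) { sum += v; ++received; }
        else std::this_thread::yield();
    producer.join();
    std::printf("received %d items, sum = %lld\n", received, sum);
    return 0;
}
```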
Thank you,
Amine Moulay Ramdane.
aminer68@gmail.com: Jul 16 09:29AM -0700

Hello,

Fault-Tolerant Avionics

For safety-critical systems, fault tolerance must be used to tolerate design faults, which are predominantly software- and timing-related. It is not enough to eliminate almost all faults introduced in the later stages of a life cycle; assurance is needed that they have been eliminated or are extremely improbable. Safety requirements for commercial aviation dictate that a failure causing loss of life must be extremely improbable, on the order of 10^-9 per flight-hour. The designer of safety-critical fault-tolerant systems should keep current with new developments in this field, since both design and validation methods continue to advance in capability.

Read more here:

https://www.cs.unc.edu/~anderson/teach/comp790/papers/fault_tolerance_avionics.pdf

Thank you,
Amine Moulay Ramdane.
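(A back-of-the-envelope illustration of the 10^-9 figure quoted above, my own arithmetic rather than anything taken from the linked paper: at a failure rate of 10^-9 per flight-hour, a single expected catastrophic failure corresponds to roughly 10^9 accumulated flight-hours, since 10^-9 per hour x 10^9 hours ≈ 1 expected event.)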