- I am like a genius - 1 Update
- My other new scalable algorithm is here.. - 1 Update
- About the strategy of "work depth-first; steal breadth-first".. - 1 Update
- Here are my new inventions that are my new variants of Scalable RWLocks that are powerful.. - 1 Update
- My invention that is my Scalable reference counting with efficient support for weak references was updated to version 1.38 - 1 Update
- My inventions that are my SemaMonitor and my SemaCondvar were updated to version 2.3 - 1 Update
- Here is my new invention of a scalable algorithm - 1 Update
- About Turing completeness and parallel programming.. - 1 Update
- About the full paper on the Swarm chip and more.. - 1 Update
- How to Monitor Your CPU Temperature - 1 Update
aminer68@gmail.com: Feb 19 04:35PM -0800

Hello,

I am like a genius, because I have invented many scalable algorithms and I am still inventing more; you have to see it to believe it, and this is why I am talking here. I have shown you some of the scalable algorithms that I have invented, but that is not all, because I have invented many other scalable algorithms that I have not shown here, and here is one more new invention that I have just made:

If you have noticed, I have just implemented my EasyList here:

https://sites.google.com/site/scalable68/easylist-for-delphi-and-freepascal

I have now enhanced its algorithm to be scalable in the Add() method and in the search methods, but that is not all: for that I will use my brand-new invention, my generally scalable counting networks. Its parallel sort algorithm will also become much more scalable, because for that I will use my other invention, my fully scalable Threadpool, together with a fully scalable parallel merging algorithm. Read below about my new invention of generally scalable counting networks.

Here is my new invention of a scalable algorithm:

I have just read the following PhD paper about the invention that we call counting networks, which are better than software combining trees:

Counting Networks
http://people.csail.mit.edu/shanir/publications/AHS.pdf

And I have read the following PhD paper:

http://people.csail.mit.edu/shanir/publications/HLS.pdf

So as you notice, they say in the conclusion that:

"Software combining trees and counting networks which are the only techniques we observed to be truly scalable"

But I have just found that this counting networks algorithm is not generally scalable, and I have the logical proof; this is why I have come up with a new invention that enhances the counting networks algorithm to be generally scalable. And I think I will sell my new algorithm of generally scalable counting networks to Microsoft or Google or Embarcadero or similar software companies. So you have to be careful with the existing counting networks algorithm, which is not generally scalable.

My other new invention is my scalable reference counting, and here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

And my other new invention is my scalable Fast Mutex, which is really powerful. About fair and unfair locking:

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java
https://brooker.co.za/blog/2012/09/10/locking.html

So as you notice, you can use unfair locking, which can suffer starvation, or fair locking, which is slower than unfair locking. I think that Microsoft synchronization objects such as the Windows critical section use unfair locking, so they can still suffer starvation. But I think this is not the right way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example. Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics; this is why I have just invented a new Fast Mutex that is adaptive and much better, and I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps the cache-coherence traffic efficiently very low
4- Very good fast-path performance
5- And it has good preemption tolerance.

This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, and I have invented other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or Google or Embarcadero or similar software companies.

Thank you, Amine Moulay Ramdane. |
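To show what the counting networks from the papers above are built from, here is a toy FreePascal sketch of their basic building block, the balancer: an atomic toggle that routes arriving tokens alternately to its two output wires, so a width-2 network made of one balancer feeding two output counters already behaves as a shared counter. This is only my illustration of the published idea, not my generally scalable enhancement, and all names in it are mine:

---
program BalancerDemo;
{$mode delphi}

type
  // A balancer: tokens arriving on either input leave alternately
  // on output wire 0 and output wire 1 (the toggle flips atomically).
  TBalancer = class
  private
    FToggle: longint;
  public
    function Traverse: integer; // which output wire the token takes
  end;

function TBalancer.Traverse: integer;
begin
  // InterlockedExchangeAdd returns the old toggle value; its parity
  // tells the token which wire to leave on. In a real width-w network
  // the contention spreads over many balancers instead of one toggle.
  Result := InterlockedExchangeAdd(FToggle, 1) and 1;
end;

var
  b: TBalancer;
  counters: array[0..1] of integer; // per-wire counters at the outputs
  i, wire: integer;
begin
  b := TBalancer.Create;
  counters[0] := 0;
  counters[1] := 0;
  for i := 1 to 5 do
  begin
    wire := b.Traverse;
    // A token's value is counter*width + wire; with width 2 the two
    // output streams count 0,2,4,... and 1,3,5,... so the outputs
    // together count 0,1,2,3,4 (the "step property").
    WriteLn('token got value ', counters[wire] * 2 + wire);
    Inc(counters[wire]);
  end;
  b.Free;
end.
---

In a concurrent version the per-wire counters would be updated with an atomic fetch-and-add as well; the sketch runs single-threaded only to show the routing.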
aminer68@gmail.com: Feb 19 02:08PM -0800

Hello,

My other new scalable algorithm is here..

As you have noticed, I am an inventor of many scalable algorithms and their implementations, but I have just thought more and I think I have again invented a new algorithm that is scalable. I will explain it; read my following previous writing:

-----------------------------------------------------------

About parallel programming and concurrency..

Look at the following concurrency abstractions of Microsoft:

https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitany?view=netframework-4.8

https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.waitall?view=netframework-4.8

I will soon implement the WaitAny() and WaitAll() concurrency abstractions for Delphi and Freepascal, with the timeout in milliseconds of course, and they will work with my efficient implementation of a Future, so you will be able to wait for many futures with WaitAny() and WaitAll().

And about task cancellation as in Microsoft's TPL: I think it is not a good abstraction, because how do you know when you have to efficiently cancel a task or tasks? So you understand that task cancellation is not such an efficient abstraction, and I will not implement it, because I think that WaitAny() and WaitAll() on Futures with the "timeout" in milliseconds are good concurrency abstractions.

--------------------------------------------------------------

But my new algorithm is a WaitAny() that is fully "scalable": it can be called from multiple threads and it will remain scalable, and my WaitAny() and WaitAll() will work with Futures and with Threads and with Event objects and the like, and they will be portable and scalable.

Thank you, Amine Moulay Ramdane. |
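My scalable WaitAny() algorithm is not shown here; the following is only a minimal, non-scalable, Windows-only sketch of the waitany() idea, using the plain WaitForMultipleObjects API on event handles. The WaitAny helper name is hypothetical and mine:

---
program WaitAnyDemo;
{$mode delphi}
uses
  Windows;

// Hypothetical helper: waits for any of the given event handles and
// returns the index of the signaled one, or -1 on timeout or failure.
function WaitAny(const Handles: array of THandle; TimeoutMs: DWORD): integer;
var
  r: DWORD;
begin
  r := WaitForMultipleObjects(Length(Handles), @Handles[0], False, TimeoutMs);
  if r < WAIT_OBJECT_0 + DWORD(Length(Handles)) then
    Result := integer(r - WAIT_OBJECT_0)
  else
    Result := -1; // WAIT_TIMEOUT or WAIT_FAILED
end;

var
  ev: array[0..1] of THandle;
  i: integer;
begin
  for i := 0 to 1 do
    ev[i] := CreateEvent(nil, True, False, nil); // manual-reset, unsignaled
  SetEvent(ev[1]);
  WriteLn('signaled index: ', WaitAny(ev, 1000)); // prints 1
  for i := 0 to 1 do
    CloseHandle(ev[i]);
end.
---

Note that every waiter here contends on the same kernel objects, which is exactly the non-scalable behavior that a scalable WaitAny() has to avoid.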
aminer68@gmail.com: Feb 19 12:53PM -0800

Hello,

About the strategy of "work depth-first; steal breadth-first"..

I have just read the following webpage:

Why Too Many Threads Hurts Performance, and What to do About It
https://www.codeguru.com/cpp/sample_chapter/article.php/c13533/Why-Too-Many-Threads-Hurts-Performance-and-What-to-do-About-It.htm

I have also just watched the following interesting video about the Go scheduler and Go concurrency:

Dmitry Vyukov — Go scheduler: Implementing language with lightweight concurrency
https://www.youtube.com/watch?v=-K11rY57K7k

And I have just read the following webpage about the Threadpool of Microsoft .NET 4.0:

https://blogs.msdn.microsoft.com/jennifer/2009/06/26/work-stealing-in-net-4-0/

As you notice, the first link above describes the strategy of "work depth-first; steal breadth-first", but we have to be smarter, because I think this strategy, which is advantageous for cache locality, works best for recursive algorithms: a thread takes the first task, and since the algorithm is recursive it pushes the child tasks onto its local work-stealing queue, and the other threads start to steal from that work-stealing queue, so the work gets distributed correctly. But when you iteratively start many tasks, I think we will have much more contention on the work-stealing queue, and this is a weakness of the strategy. Also, when the algorithm is not recursive and the threads receive from the global queue, there will be high contention on the global queue, and this is not good.

MIT's Cilk, the Go scheduler, the Microsoft Threadpool and Intel C++ TBB all use this strategy of "work depth-first; steal breadth-first", so as you notice they give more preference to cache locality than to scalability. But in my following invention of a Threadpool that scales very well, I give more preference to scalability than to cache locality:

https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

Also, when you are doing IO with my Threadpool, you can use asynchronous IO by starting a thread dedicated to IO to be more efficient, or you can start another instance of my Threadpool and use it for tasks that do IO; you can use the same method when threads of my Threadpool are waiting or sleeping.

Also, for recursion and the stack overflow problem, you can convert your function from recursive to iterative to solve the problem of stack overflow.

And to be able to serve a great number of internet connections or TCP/IP socket connections, you can use my Threadpool with my powerful object-oriented stackful coroutines library for Delphi and FreePascal here:

https://sites.google.com/site/scalable68/object-oriented-stackful-coroutines-library-for-delphi-and-freepascal

Thank you, Amine Moulay Ramdane. |
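To make the "work depth-first; steal breadth-first" access pattern concrete, here is a toy lock-based deque sketch in FreePascal: the owner pushes and pops at the bottom (depth-first, the newest and cache-warm task), while thieves steal from the top (breadth-first, the oldest task, which in a recursive decomposition usually covers the largest subtree). Real schedulers such as Cilk's use lock-free deques (for example the Chase-Lev deque); this sketch, with names of my own choosing, only illustrates the pattern:

---
program WorkStealDemo;
{$mode delphi}
uses
  SyncObjs;

type
  TTask = procedure;

  // Toy work-stealing deque: owner uses PushBottom/PopBottom,
  // thieves use StealTop. A critical section keeps the sketch simple.
  TWorkDeque = class
  private
    FLock: TCriticalSection;
    FItems: array of TTask;
  public
    constructor Create;
    destructor Destroy; override;
    procedure PushBottom(t: TTask);             // owner: depth-first push
    function PopBottom(out t: TTask): boolean;  // owner: depth-first pop
    function StealTop(out t: TTask): boolean;   // thief: breadth-first steal
  end;

constructor TWorkDeque.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TWorkDeque.Destroy;
begin
  FLock.Free;
  inherited;
end;

procedure TWorkDeque.PushBottom(t: TTask);
begin
  FLock.Acquire;
  try
    SetLength(FItems, Length(FItems) + 1);
    FItems[High(FItems)] := t;
  finally
    FLock.Release;
  end;
end;

function TWorkDeque.PopBottom(out t: TTask): boolean;
begin
  FLock.Acquire;
  try
    Result := Length(FItems) > 0;
    if Result then
    begin
      t := FItems[High(FItems)];   // newest task: likely still in cache
      SetLength(FItems, Length(FItems) - 1);
    end;
  finally
    FLock.Release;
  end;
end;

function TWorkDeque.StealTop(out t: TTask): boolean;
var
  i: integer;
begin
  FLock.Acquire;
  try
    Result := Length(FItems) > 0;
    if Result then
    begin
      t := FItems[0];              // oldest task: biggest chunk of work
      for i := 1 to High(FItems) do
        FItems[i - 1] := FItems[i];
      SetLength(FItems, Length(FItems) - 1);
    end;
  finally
    FLock.Release;
  end;
end;

procedure Hello;
begin
  WriteLn('task ran');
end;

var
  d: TWorkDeque;
  t: TTask;
begin
  d := TWorkDeque.Create;
  d.PushBottom(Hello);
  if d.StealTop(t) then
    t();
  d.Free;
end.
---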
aminer68@gmail.com: Feb 19 12:49PM -0800

Hello,

Here are my new inventions that are my new variants of Scalable RWLocks that are powerful..

Author: Amine Moulay Ramdane

Description:

A fast, scalable, starvation-free, fair and lightweight Multiple-Readers-Exclusive-Writer Lock called LW_RWLockX; the scalable LW_RWLockX does spin-wait. And a fast, scalable, starvation-free and fair Multiple-Readers-Exclusive-Writer Lock called RWLockX; the scalable RWLockX doesn't spin-wait but uses my portable SemaMonitor and portable event objects, so it is energy efficient.

The parameter of the constructors is the size of the array of readers: if the size of the array is equal to the number of parallel readers, the lock will be scalable, but if the number of readers is greater than the size of the array, you will start to have contention. Please look at the source code of my scalable algorithms to understand.

I have used my following hash function to make my new variants of RWLocks scalable:

---
function DJB2aHash(key: int64): uint64;
var
  i: integer;
  key1: uint64;
begin
  Result := 5381;
  for i := 1 to 8 do
  begin
    key1 := (key shr ((i - 1) * 8)) and $00000000000000ff;
    Result := ((Result shl 5) xor Result) xor key1;
  end;
end;
---

You can download them from:

https://sites.google.com/site/scalable68/new-variants-of-scalable-rwlocks

Thank you, Amine Moulay Ramdane. |
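The point of the hash is to spread concurrent readers over distinct slots of the readers array, so they do not all fight over one shared counter. Here is a small FreePascal sketch of how a reader could pick its slot from its thread id with the hash above; the ArraySize constant and the slot-picking line are only my illustration, the real slot layout is in the source code linked above:

---
program ReaderSlotDemo;
{$mode delphi}

const
  ArraySize = 16; // size of the readers array passed to the constructor

function DJB2aHash(key: int64): uint64;
var
  i: integer;
  key1: uint64;
begin
  Result := 5381;
  for i := 1 to 8 do
  begin
    key1 := (key shr ((i - 1) * 8)) and $00000000000000ff;
    Result := ((Result shl 5) xor Result) xor key1;
  end;
end;

begin
  // Each reader hashes its thread id into [0 .. ArraySize-1]; with at
  // most ArraySize parallel readers, collisions (and hence contention
  // on a shared slot) stay rare.
  WriteLn('my reader slot: ',
          DJB2aHash(int64(GetCurrentThreadId)) mod ArraySize);
end.
---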
aminer68@gmail.com: Feb 19 12:48PM -0800

Hello,

My invention that is my Scalable reference counting with efficient support for weak references was updated to version 1.38.

You can download it from my website here:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Feb 19 12:47PM -0800

Hello,

My inventions that are my SemaMonitor and my SemaCondvar were updated to version 2.3. They have become efficient and powerful; please read the readme file to know more about the changes. I have also implemented an efficient Monitor over my SemaCondvar.

Here is the description of my efficient Monitor inside the Monitor.pas file that you will find inside the zip file:

Description: This is my implementation of a Monitor over my SemaCondvar. You will find the Monitor class inside the Monitor.pas file inside the zip file.

When you set the first parameter of the constructor to true, the signal will not be lost if the threads are not waiting with the wait() method; but when you set the first parameter of the constructor to false, if the threads are not waiting with the wait() method, the signal will be lost.

The second parameter of the constructor is the kind of lock: you can set it to ctMLock to use my scalable node-based lock called MLock, or to ctMutex to use a Mutex, or to ctCriticalSection to use the TCriticalSection.

Here are the methods of my efficient Monitor that I have implemented:

---
TMonitor = class
private
  cache0: typecache0;
  lock1: TSyncLock;
  obj: TSemaCondvar;
  cache1: typecache0;
public
  constructor Create(bool: boolean = true; lock: TMyLocks = ctMLock);
  destructor Destroy; override;
  procedure Enter();
  procedure Leave();
  function Signal(): boolean; overload;
  function Signal(nbr: long; var remains: long): boolean; overload;
  procedure Signal_All();
  function Wait(const AMilliseconds: longword = INFINITE): boolean;
  function WaitersBlocked(): long;
end;
---

The wait() method is for threads to wait on the Monitor object for the signal to be signaled. If wait() fails, that can mean that the number of waiters is greater than high(longword). The signal() method signals one waiting thread on the Monitor object; if signal() fails, the returned value is false. The signal_all() method signals all the waiting threads on the Monitor object. The signal(nbr: long; var remains: long) method signals nbr waiting threads, but if signal() fails, the remaining number of signals that were not sent is returned in the remains variable. WaitersBlocked() returns the number of waiting threads on the Monitor object. And the Enter() and Leave() methods enter and leave the monitor's lock.

You can download the zip files from:

https://sites.google.com/site/scalable68/semacondvar-semamonitor

and the lightweight version is here:

https://sites.google.com/site/scalable68/light-weight-semacondvar-semamonitor

Thank you, Amine Moulay Ramdane. |
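As a hypothetical usage sketch based only on the interface above (it assumes the Monitor.pas unit from the zip file is on the unit path, and the exact wait protocol is the one described in the readme):

---
program MonitorUsage;
{$mode delphi}
uses
  Monitor; // Monitor.pas from the zip file

var
  m: TMonitor;
begin
  // true: a signal sent while nobody is waiting is kept, not lost;
  // ctMLock: protect the monitor with the scalable MLock.
  m := TMonitor.Create(true, ctMLock);

  // A producer thread wakes one waiter like this:
  if not m.Signal then
    WriteLn('signal failed');

  // A consumer thread blocks like this; here the signal above was
  // buffered (first constructor parameter = true), so Wait returns
  // at once instead of blocking.
  if m.Wait(5000) then
    WriteLn('got the signal, waiters left: ', m.WaitersBlocked)
  else
    WriteLn('timed out after 5 seconds');

  m.Free;
end.
---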
aminer68@gmail.com: Feb 19 12:46PM -0800

Hello,

Here is my new invention of a scalable algorithm:

I have just read the following PhD paper about the invention that we call counting networks, which are better than software combining trees:

Counting Networks
http://people.csail.mit.edu/shanir/publications/AHS.pdf

And I have read the following PhD paper:

http://people.csail.mit.edu/shanir/publications/HLS.pdf

So as you notice, they say in the conclusion that:

"Software combining trees and counting networks which are the only techniques we observed to be truly scalable"

But I have just found that this counting networks algorithm is not generally scalable, and I have the logical proof; this is why I have come up with a new invention that enhances the counting networks algorithm to be generally scalable. And I think I will sell my new algorithm of generally scalable counting networks to Microsoft or Google or Embarcadero or similar software companies. So you have to be careful with the existing counting networks algorithm, which is not generally scalable.

My other new invention is my scalable reference counting, and here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

And my other new invention is my scalable Fast Mutex, which is really powerful. About fair and unfair locking:

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java
https://brooker.co.za/blog/2012/09/10/locking.html

So as you notice, you can use unfair locking, which can suffer starvation, or fair locking, which is slower than unfair locking. I think that Microsoft synchronization objects such as the Windows critical section use unfair locking, so they can still suffer starvation. But I think this is not the right way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful: with my Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that. Read about it in my following thoughts:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example. Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics; this is why I have just invented a new Fast Mutex that is adaptive and much better, and I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps the cache-coherence traffic efficiently very low
4- Very good fast-path performance
5- And it has good preemption tolerance.
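I will not publish the source of my Fast Mutex here, but as a minimal sketch of what characteristic 3 means, here is a classic test-and-test-and-set spin lock with exponential backoff in FreePascal. This is a well-known textbook technique, not my Fast Mutex; it only shows how spinning on a plain read plus backing off keeps the cache-coherence traffic low:

---
program TTASLockDemo;
{$mode delphi}

type
  TTTASLock = class
  private
    FState: longint; // 0 = free, 1 = held
  public
    procedure Acquire;
    procedure Release;
  end;

procedure TTTASLock.Acquire;
var
  backoff, i, pause: integer;
begin
  backoff := 1;
  repeat
    // 1) Spin on a plain read: while the lock is held, this loops on
    //    the local cached copy and generates no coherence traffic.
    while FState <> 0 do ;
    // 2) The lock looks free: only now try the single atomic write.
    if InterlockedCompareExchange(FState, 1, 0) = 0 then
      exit;
    // 3) We lost the race: back off exponentially so the waiters do
    //    not all hammer the cache line again at the same moment.
    pause := 0;
    for i := 1 to backoff do
      Inc(pause); // placeholder delay; a real lock would yield the CPU
    if backoff < 1024 then
      backoff := backoff * 2;
  until false;
end;

procedure TTTASLock.Release;
begin
  InterlockedExchange(FState, 0); // atomic write publishes the release
end;

var
  lock: TTTASLock;
begin
  lock := TTTASLock.Create;
  lock.Acquire;
  WriteLn('in the critical section');
  lock.Release;
  lock.Free;
end.
---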
This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, and I have invented other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or Google or Embarcadero or similar software companies.

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Feb 19 12:44PM -0800

Hello,

About Turing completeness and parallel programming..

You have to know that a Turing-complete system can be proven mathematically to be capable of performing any possible calculation or computer program. So now you understand what the power of an "expressiveness" that is Turing-complete means.

For example, I am working with the tool called "Tina" (read about it below); it is a powerful tool that lets you work on Petri nets and know about the boundedness and liveness of Petri nets. For example, Tina supports Timed Petri nets, which are Turing-complete, so the power of their expressiveness is Turing-complete. I think this level of expressiveness is good for parallel programming and such, even though it is not an efficient high-level expressiveness; still, Petri nets are good for parallel programming. Read the rest to know more:

About deadlocks and race conditions in parallel programming..

I have just read the following paper:

Deadlock Avoidance in Parallel Programs with Futures
https://cogumbreiro.github.io/assets/cogumbreiro-gorn.pdf

So as you notice, you can get deadlocks in parallel programming by introducing circular dependencies among tasks waiting on future values, or by introducing circular dependencies among tasks waiting on Windows event objects or similar synchronization objects, so you have to have a general tool that detects deadlocks. But notice that the tool called Valgrind for C++ can detect only the deadlocks that come from Pthread locks; read the following to see it:

http://valgrind.org/docs/manual/hg-manual.html#hg-manual.lock-orders

So this is not good: you need a general way to detect deadlocks on locks and mutexes, and deadlocks that come from circular dependencies among tasks waiting on future values, or among tasks waiting on Windows event objects or similar synchronization objects, etc. This is why I have talked before about such a general way of detecting deadlocks, and here it is; read my following thoughts:

Yet more precision about the invariants of a system..

I was just thinking about Petri nets, and I have studied Petri nets some more; they are useful for parallel programming. What I have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: structural analysis with place invariants, which you have to find mathematically, or the reachability tree. But note that the structural analysis of Petri nets teaches you more, because it lets you prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following system for the given Petri net:

Transpose(x) * C = 0

where C is the incidence matrix and x is the unknown vector. So you apply Gaussian elimination or the Farkas algorithm to the incidence matrix to find the place invariants. As you will notice, these place-invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities: using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", which gives you very important information, and you do it by solving the following mathematical system:

x * P = x

where P is the transition matrix of probabilities and x is the unknown vector of probabilities. Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating an invariant: the invariant in the system above is the vector x of probabilities that is obtained, and this invariant, like the invariants of the structural analysis of Petri nets, gives you very important information about the system, for example where market shares will stabilize, which is calculated this way in economics.

About reachability analysis of a Petri net..

As you have noticed in my Petri nets tutorial example (read below), I am analysing the liveness of the Petri net, because there is a rule that says: if a Petri net is live, it is deadlock-free. Reachability analysis of a Petri net with Tina gives you the necessary information about the boundedness and liveness of the Petri net, so if it tells you that the Petri net is "live", there is no deadlock in it.

Tina and partial order reduction techniques..

With the advancement of computer technology, highly concurrent systems are being developed. The verification of such systems is a challenging task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering. Tina uses "partial-order" reduction techniques aimed at preventing combinatorial explosion; read more here to see it:

http://projects.laas.fr/tina/papers/qest06.pdf

About modelization and detection of race conditions and deadlocks in parallel programming..

I have just taken a further look at the following Delphi project called DelphiConcurrent, by an engineer called Moualek Adlene from France:

https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas

And I have just taken a look at the following webpage of Dr. Dobb's Journal:

Detecting Deadlocks in C++ Using a Locks Monitor
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644

And I think that both of them use techniques that are not as good as analysing deadlocks in parallel applications with Petri nets: the two methods above only address locks, mutexes and reader-writer locks, but not semaphores, event objects and other such synchronization objects, so they are not good. This is why I have written a tutorial that shows my methodology of analysing and detecting deadlocks in parallel applications with Petri nets. My methodology is more sophisticated because it is a generalization and it models the broader range of synchronization objects with Petri nets, and I will soon add other synchronization objects to my tutorial. You have to look at it; here it is:

https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets

You have to get the powerful Tina software to run the Petri net examples inside my tutorial; here is the powerful Tina software:

http://projects.laas.fr/tina/

Also, to detect race conditions in parallel programming, you have to take a look at the following new tutorial that uses the powerful Spin tool:

https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

This is how you will get much more professional at detecting deadlocks and race conditions in parallel programming.

Thank you, Amine Moulay Ramdane. |
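As a tiny worked example of the place-invariant equation above (my own illustration): take a net with two places p1 and p2 and two transitions, where t1 moves a token from p1 to p2 and t2 moves it back. Then:

---
% Incidence matrix C: rows are places p1, p2; columns are transitions t1, t2.
% t1 consumes from p1 and produces into p2; t2 does the reverse.
C = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
x^{\mathsf T} C = 0
\;\Longrightarrow\;
(-x_1 + x_2,\; x_1 - x_2) = (0, 0)
\;\Longrightarrow\;
x_1 = x_2 .
---

So x = (1, 1) is a place invariant: every reachable marking M satisfies M(p1) + M(p2) = M0(p1) + M0(p2), meaning the token count is conserved, and since the single token can never be consumed, the net stays live. The stationary equation x * P = x of a Markov chain is solved in exactly the same way.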
aminer68@gmail.com: Feb 19 12:42PM -0800

Hello,

About the full paper on the Swarm chip and more..

I have just read the following full PhD paper from MIT about the new Swarm chip:

https://people.csail.mit.edu/sanchez/papers/2015.swarm.micro.pdf

I think there are disadvantages with this chip. First, it uses the same mechanisms as transactional memory, but those mechanisms of transactional memory are not so efficient (read below to know more), and we already have Intel hardware transactional memory. And I don't think it is globally faster on parallelism than current hardware and software; look at what the paper writes about the benchmarks and you will understand more.

And about transactional memory and more, read my following thoughts:

About hardware transactional memory and my invention that is my powerful Fast Mutex:

"As someone who has used TSX to optimize synchronization primitives, you can expect to see a ~15-20% performance increase, if (big if) your program is heavy on disjoint data access, i.e. a lock is needed for correctness, but conflicts are rare in practice. If you have a lot of threads frequently writing the same cache lines, you are probably going to see worse performance with TSX as opposed to traditional locking. It helps to think about TSX as transparently performing optimistic concurrency control, which is actually pretty much how it is implemented under the hood."

Read more here:

https://news.ycombinator.com/item?id=8169697

So as you notice, HTM (hardware transactional memory) and TM cannot replace locks when doing IO and for highly contended critical sections; this is why I have invented my following powerful Fast Mutex:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example. Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics; this is why I have just invented a new Fast Mutex that is adaptive and much better, and I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Good fairness
3- It keeps the cache-coherence traffic efficiently very low
4- Very good fast-path performance
5- And it has decent preemption tolerance.

This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, and I have invented other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or Google or Embarcadero or similar software companies.

And now about the composability of lock-based systems: design your systems to be composable.
Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized.

Read more here:

https://queue.acm.org/detail.cfm?id=1454462
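As a minimal FreePascal sketch of the first approach (the class name is mine for illustration): all locking is internal, and no public method ever returns to the caller with the lock held, so the component composes freely with any other subsystem:

---
program ComposableDemo;
{$mode delphi}
uses
  SyncObjs;

type
  // All locking is internal: every public method acquires and releases
  // the lock before returning, so callers can compose this subsystem
  // with others without even knowing that it locks at all.
  TComposableCounter = class
  private
    FLock: TCriticalSection;
    FValue: int64;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Add(n: int64);
    function Get: int64;
  end;

constructor TComposableCounter.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
end;

destructor TComposableCounter.Destroy;
begin
  FLock.Free;
  inherited;
end;

procedure TComposableCounter.Add(n: int64);
begin
  FLock.Acquire;
  try
    FValue := FValue + n;
  finally
    FLock.Release; // never return to the caller with the lock held
  end;
end;

function TComposableCounter.Get: int64;
begin
  FLock.Acquire;
  try
    Result := FValue;
  finally
    FLock.Release;
  end;
end;

var
  c: TComposableCounter;
begin
  c := TComposableCounter.Create;
  c.Add(3);
  WriteLn(c.Get); // prints 3
  c.Free;
end.
---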
About deadlocks and race conditions in parallel programming..

I have just read the following paper:

Deadlock Avoidance in Parallel Programs with Futures
https://cogumbreiro.github.io/assets/cogumbreiro-gorn.pdf

So as you notice, you can get deadlocks in parallel programming by introducing circular dependencies among tasks waiting on future values, or by introducing circular dependencies among tasks waiting on Windows event objects or similar synchronization objects, so you have to have a general tool that detects deadlocks. But notice that the tool called Valgrind for C++ can detect only the deadlocks that come from Pthread locks; read the following to see it:

http://valgrind.org/docs/manual/hg-manual.html#hg-manual.lock-orders

So this is not good: you need a general way to detect deadlocks on locks and mutexes, and deadlocks that come from circular dependencies among tasks waiting on future values, or among tasks waiting on Windows event objects or similar synchronization objects, etc. This is why I have talked before about such a general way of detecting deadlocks, and here it is; read my following thoughts:

Yet more precision about the invariants of a system..

I was just thinking about Petri nets, and I have studied Petri nets some more; they are useful for parallel programming. What I have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: structural analysis with place invariants, which you have to find mathematically, or the reachability tree. But note that the structural analysis of Petri nets teaches you more, because it lets you prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following system for the given Petri net:

Transpose(x) * C = 0

where C is the incidence matrix and x is the unknown vector. So you apply Gaussian elimination or the Farkas algorithm to the incidence matrix to find the place invariants. As you will notice, these place-invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities: using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", which gives you very important information, and you do it by solving the following mathematical system:

x * P = x

where P is the transition matrix of probabilities and x is the unknown vector of probabilities. Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating an invariant: the invariant in the system above is the vector x of probabilities that is obtained, and this invariant, like the invariants of the structural analysis of Petri nets, gives you very important information about the system, for example where market shares will stabilize, which is calculated this way in economics.

About reachability analysis of a Petri net..

As you have noticed in my Petri nets tutorial example (read below), I am analysing the liveness of the Petri net, because there is a rule that says: if a Petri net is live, it is deadlock-free. Reachability analysis of a Petri net with Tina gives you the necessary information about the boundedness and liveness of the Petri net, so if it tells you that the Petri net is "live", there is no deadlock in it.

Tina and partial order reduction techniques..

With the advancement of computer technology, highly concurrent systems are being developed. The verification of such systems is a challenging task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering. Tina uses "partial-order" reduction techniques aimed at preventing combinatorial explosion; read more here to see it:

http://projects.laas.fr/tina/papers/qest06.pdf

About modelization and detection of race conditions and deadlocks in parallel programming..
I have just taken a further look at the following Delphi project called DelphiConcurrent, by an engineer called Moualek Adlene from France:

https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas

And I have just taken a look at the following webpage of Dr. Dobb's Journal:

Detecting Deadlocks in C++ Using a Locks Monitor
https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644

And I think that both of them use techniques that are not as good as analysing deadlocks in parallel applications with Petri nets: the two methods above only address locks, mutexes and reader-writer locks, but not semaphores, event objects and other such synchronization objects, so they are not good. This is why I have written a tutorial that shows my methodology of analysing and detecting deadlocks in parallel applications with Petri nets. My methodology is more sophisticated because it is a generalization and it models the broader range of synchronization objects with Petri nets, and I will soon add other synchronization objects to my tutorial. You have to look at it; here it is:

https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets

You have to get the powerful Tina software to run the Petri net examples inside my tutorial; here is the powerful Tina software:

http://projects.laas.fr/tina/

Also, to detect race conditions in parallel programming, you have to take a look at the following new tutorial that uses the powerful Spin tool:

https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

This is how you will get much more professional at detecting deadlocks and race conditions in parallel programming.

About Java and Delphi and Freepascal..

I have just read the following webpage:

Java is not a safe language
https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/

But as you have noticed, the webpage says:

- Java does not trap overflows

But Delphi and Freepascal do trap overflows.

And the webpage says:

- Java lacks null safety

But Delphi has null safety, since I have just posted about it by saying the following:

Here is my MyNullable library for Delphi and FreePascal that brings null safety..

Java lacks null safety. When a function receives an object, this object might be null. That is, if you see 'String s' in your code, you often have no way of knowing whether 's' contains an actual String unless you check at runtime. Can you guess whether programmers always check? They do not, of course. In practice, mission-critical software does crash without warning due to null values; we have two decades of examples. In Swift or Kotlin, you have safe calls or optionals as part of the language.

Here is my MyNullable library for Delphi and FreePascal that brings null safety; you can read the html file inside the zip to know how it works, and you can download it from my website here:

https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal
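The real interface of my MyNullable library is documented in the html file inside the zip; the following is only a hypothetical minimal sketch, with names of my own choosing, of the general nullable-wrapper idea in Freepascal generics. The value can only be reached through one checked accessor, so a "null" can never be dereferenced silently:

---
program NullableDemo;
{$mode delphi}
uses
  SysUtils;

type
  // Hypothetical minimal nullable wrapper, NOT the MyNullable interface.
  TMaybe<T> = record
  private
    FHasValue: boolean;
    FValue: T;
  public
    class function Some(const v: T): TMaybe<T>; static;
    class function None: TMaybe<T>; static;
    function HasValue: boolean;
    function GetValue: T; // raises instead of handing out a null
    function GetValueOrDefault(const d: T): T;
  end;

class function TMaybe<T>.Some(const v: T): TMaybe<T>;
begin
  Result.FHasValue := true;
  Result.FValue := v;
end;

class function TMaybe<T>.None: TMaybe<T>;
begin
  Result.FHasValue := false;
  Result.FValue := Default(T);
end;

function TMaybe<T>.HasValue: boolean;
begin
  Result := FHasValue;
end;

function TMaybe<T>.GetValue: T;
begin
  if not FHasValue then
    raise Exception.Create('value is null');
  Result := FValue;
end;

function TMaybe<T>.GetValueOrDefault(const d: T): T;
begin
  if FHasValue then
    Result := FValue
  else
    Result := d;
end;

var
  s: TMaybe<string>;
begin
  s := TMaybe<string>.None;
  WriteLn(s.GetValueOrDefault('(no value)')); // safe: prints "(no value)"
  s := TMaybe<string>.Some('hello');
  if s.HasValue then
    WriteLn(s.GetValue);                      // prints "hello"
end.
---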
And the webpage says:

- Java allows data races

But for Delphi and Freepascal I have just written about how to prevent data races, by saying the following:

Yet more precision about the invariants of a system..

I was just thinking about Petri nets, and I have studied Petri nets some more; they are useful for parallel programming. What I have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: structural analysis with place invariants, which you have to find mathematically, or the reachability tree. But note that the structural analysis of Petri nets teaches you more, because it lets you prove that there is no deadlock in the system, and the place invariants are calculated mathematically from the following system for the given Petri net:

Transpose(x) * C = 0

where C is the incidence matrix and x is the unknown vector. So you apply Gaussian elimination or the Farkas algorithm to the incidence matrix to find the place invariants. As you will notice, these place-invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities: using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", which gives you very important information, and you do it by solving the following mathematical system:

x * P = x

where P is the transition matrix of probabilities and x is the unknown vector of probabilities. Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating an invariant: the invariant in the system above is the vector x of probabilities that is obtained, and this invariant, like the invariants of the structural analysis of Petri nets, gives you very important information about the system, for example where market shares will stabilize, which is calculated this way in economics.

About reachability analysis of a Petri net..

As you have noticed in my Petri nets tutorial example (read below), I am analysing the liveness of the Petri net, because there is a rule that says: if a Petri net is live, it is deadlock-free. Reachability analysis of a Petri net with Tina gives you the necessary information about the boundedness and liveness of the Petri net, so if it tells you that the Petri net is "live", there is no deadlock in it.

Tina and partial order reduction techniques..

With the advancement of computer technology, highly concurrent systems are being developed. The verification of such systems is a challenging task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to |
aminer68@gmail.com: Feb 19 09:03AM -0800

Hello,

How to Monitor Your CPU Temperature

Read more here:

https://www.tomshardware.com/how-to/how-to-monitor-cpu-temp-temperature

Thank you, Amine Moulay Ramdane. |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |