- About the programming language Rust and concurrency.. - 1 Update
- More about Delphi and Freepascal and C# and Python.. - 1 Update
- Read my following thoughts about Volatile and Freepascal and Delphi.. - 1 Update
- More about my new invention of a lock-free bounded LIFO stack algorithm.. - 1 Update
- Lockfree bounded LIFO stack and FIFO queue were updated to version 1.04 - 1 Update
- Lockfree bounded LIFO stack and FIFO queue were updated to version 1.03 - 1 Update
- Lockfree bounded LIFO stack and FIFO queue were updated to version 1.02 - 1 Update
aminer68@gmail.com: Jul 10 07:05PM -0700

Hello..

About the programming language Rust and concurrency..

I am a white Arab, and I think I am smart, because I have invented many scalable algorithms and their implementations, and today I will speak about the Rust programming language:

I think Rust is not smart, because if you need to prevent deadlocks by ensuring composability, then you need Transactional Memory in Rust; but if you are required to use Transactional Memory in Rust in so many situations, why then use Rust's concepts of immutable references and mutable references at all? This is why I think that Rust is not smart. Also, Rust's immutable and mutable references are a low-level style of programming that does not have the high-level convenience that Transactional Memory aims to provide; this is also why I think that Rust is not smart. But Transactional Memory has its own weaknesses, so read my following thoughts to notice them:

Now more about Transactional Memory..

Read the following about Transactional Memory:

https://blog.jessfraz.com/post/transactional-memory-and-tech-hype-waves/

And it says the following:

"With transactional memory you no longer have deadlocks but livelocks."

So as you are noticing, Transactional Memory is not so smart, because it allows composability but it is prone to livelock.

More about Intel TSX hardware transactional memory..

Read my post here about it:

https://groups.google.com/forum/#!topic/comp.arch/6DKng-6KKVY

It says:

"TSX does not guarantee forward progress, so there must always be a fallback non-TSX pathway. (complex transactions might always abort even without any contention because they overflow the speculation buffer. Even transactions that could run in theory might livelock forever if you don't have the right pauses to allow forward progress, so the fallback path is needed then too)."

So I think that Intel TSX is prone to deadlock, since it needs a fallback non-TSX pathway because TSX does not guarantee forward progress, and it has the same problem as Lock Elision: one of the benefits of Transactional Memory is that it solves the deadlock problem, but Lock Elision is prone to deadlock.

I am a white Arab, and I think I am smart like a genius. Here is my other brand-new powerful invention..

Seqlocks are great for providing atomic access to a single value, but they don't compose. If there are many values, each with its own seqlock, and a program wants to "atomically" read from all of them, it's out of luck. To fix that, I have just invented a scalable Single Sequence Multiple Data (SSMD) that permits performing parallel independent writes and "atomically" reading from them in parallel, and like a Seqlock it is lock-free on the readers' side, and it has no livelock and it is starvation-free. This new invention of mine is "powerful" and it is scalable and it works with both IO and memory, but it looks like Transactional Memory.
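For context on the mechanism that the SSMD is being compared against, here is a minimal FreePascal sketch of the classic, textbook seqlock. It is illustrative only: it is not the SSMD invention (which is not published in this digest), and the names TSeqlockPair, WritePair and ReadPair are invented for this example. It assumes a single writer (or writers serialized by a separate lock), that LongWord loads and stores are atomic on the target, and that the optimizer does not cache the fields across the barriers; a production version would use atomic intrinsics.

{$mode objfpc}
unit SeqlockSketch;
{ Minimal sketch of the classic seqlock, for illustration only. }

interface

type
  TSeqlockPair = record
    Sequence: LongWord;        // even = stable, odd = write in progress
    ValueA, ValueB: LongWord;  // the data protected by the seqlock
  end;

procedure WritePair(var S: TSeqlockPair; A, B: LongWord);
procedure ReadPair(var S: TSeqlockPair; out A, B: LongWord);

implementation

procedure WritePair(var S: TSeqlockPair; A, B: LongWord);
begin
  Inc(S.Sequence);   // sequence becomes odd: tells readers a write is in progress
  WriteBarrier;      // order the sequence bump before the data writes
  S.ValueA := A;
  S.ValueB := B;
  WriteBarrier;      // order the data writes before the final sequence bump
  Inc(S.Sequence);   // sequence becomes even again: the data is stable
end;

procedure ReadPair(var S: TSeqlockPair; out A, B: LongWord);
var
  Seq: LongWord;
begin
  repeat
    repeat
      Seq := S.Sequence;       // spin until no write is in progress
    until (Seq and 1) = 0;
    ReadBarrier;
    A := S.ValueA;             // read the data optimistically
    B := S.ValueB;
    ReadBarrier;
  until Seq = S.Sequence;      // a writer intervened during the read: retry
end;

end.

Notice that the reader takes no lock but may retry indefinitely under heavy write activity; that retry loop is exactly the livelock and starvation drawback of the classic seqlock quoted from Wikipedia further below.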
More about my inventions and about Locks..

I have just read the following thoughts of a PhD researcher, and he says the following:

"4) using locks is prone to convoying effects;"

Read more here: http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html

I am a white Arab and I am smart like a genius, and this PhD researcher is not so smart. Notice that he is saying:

"4) using locks is prone to convoying effects;"

And I think he is not right, because I have invented the Holy Grail of locks, and it is not prone to convoying; read my following writing about it:

----------------------------------------------------------------------

You have to understand deeply what it is to invent my scalable algorithms and their implementations in order to understand that it is powerful. I give you an example: I have invented a scalable algorithm that is a scalable Mutex that is remarkable and that is the Holy Grail of scalable locks. It has the following characteristics; read my following thoughts to understand:

About fair and unfair locking..

I have just read the following from a lead engineer at Amazon:

Highly contended and fair locking in Java

https://brooker.co.za/blog/2012/09/10/locking.html

So as you are noticing, you can use unfair locking, which can have starvation, or fair locking, which is slower than unfair locking. I think that Microsoft synchronization objects like the Windows critical section use unfair locking, but they can still have starvation. And I think that this is not the good way to do it, because I am an inventor and I have invented a scalable Fast Mutex that is much more powerful, because with my Fast Mutex you are able to tune the "fairness" of the lock, and my Fast Mutex is capable of more than that; read about it in my following thoughts:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...

https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like this techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I give you an example: read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks

https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and that is much, much better, and I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics:

1- Starvation-free
2- Tunable fairness
3- It keeps its cache-coherence traffic efficiently very low
4- Very good fast-path performance
5- It has good preemption tolerance
6- It is faster than the scalable MCS lock
7- Not prone to convoying

------------------------------------------------------------------------------

Also he is saying the following:

"1) if we use more than one lock, we're subject to having deadlock"

(a minimal illustration of this kind of lock-ordering deadlock follows below) But you have to look here at our DelphiConcurrent and FreepascalConcurrent:

https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

And here is my new invention..

I think a Seqlock is also a high-performance but restricted use of software Transactional Memory.
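Before going further into Seqlocks, here is the small illustration promised above for point (1): a minimal FreePascal sketch, assuming only the standard Classes, SysUtils and SyncObjs units, of two threads that acquire the same two locks in opposite order and therefore deadlock. The names LockA, LockB and TWorker are invented for this example, and the program is expected to hang; this is the kind of bug for which the DelphiConcurrent and FreepascalConcurrent page is linked above.

{$mode objfpc}
program DeadlockSketch;
{ Two threads take the same two locks in opposite order: a classic
  lock-ordering deadlock. This program is expected to HANG; it exists
  only to demonstrate the bug. }

uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils, SyncObjs;

var
  LockA, LockB: TCriticalSection;

type
  TWorker = class(TThread)
  private
    FFirst, FSecond: TCriticalSection;
  public
    constructor Create(AFirst, ASecond: TCriticalSection);
    procedure Execute; override;
  end;

constructor TWorker.Create(AFirst, ASecond: TCriticalSection);
begin
  FFirst := AFirst;
  FSecond := ASecond;
  inherited Create(False);  // start the thread immediately
end;

procedure TWorker.Execute;
begin
  FFirst.Acquire;
  Sleep(100);       // widen the race window so the hang is reproducible
  FSecond.Acquire;  // each thread now waits for the lock the other holds
  FSecond.Release;
  FFirst.Release;
end;

var
  T1, T2: TWorker;
begin
  LockA := TCriticalSection.Create;
  LockB := TCriticalSection.Create;
  T1 := TWorker.Create(LockA, LockB);  // takes A then B
  T2 := TWorker.Create(LockB, LockA);  // takes B then A: opposite order
  T1.WaitFor;  // never returns: both threads are deadlocked
  T2.WaitFor;
end.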
So I have just read about Seqlocks here on Wikipedia:

https://en.wikipedia.org/wiki/Seqlock

And it says about the Seqlock:

"The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve)."

I am a white Arab, and I think I am smart, so I have just invented a variant of the Seqlock that has no livelock (even when there is too much write activity or the reader is too slow) and that is starvation-free. So I think my new invention that is a variant of the Seqlock is powerful.

And more now about lock-free and wait-free algorithms and locks..

I have just read the following thoughts of a PhD researcher, and he says the following:

"5) mutual exclusion locks don't scale for read-only operations, it takes a reader-writer lock to have some scalability for read-only operations and even then, we either execute read-only operations or one write, but never both at the same time. Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees;"

Read more here: http://concurrencyfreaks.blogspot.com/2019/04/onefile-and-tail-latency.html

But I think that he is not right in saying the following:

"Until today, there is no known efficient reader-writer lock with starvation-freedom guarantees"

Because I am an inventor of many scalable algorithms and their implementations, and I have invented scalable and efficient starvation-free reader-writer locks; read my following thoughts below to notice it..

Also look at his following webpage:

OneFile - The world's first wait-free Software Transactional Memory

http://concurrencyfreaks.blogspot.com/2019/04/onefile-worlds-first-wait-free-software.html

But I think he is not right, because read the following thoughts that I have just posted that apply to wait-free and lock-free:

https://groups.google.com/forum/#!topic/comp.programming.threads/F_cF4ft1Qic

And read all my following thoughts to understand:

About Lock Elision and Transactional Memory..

I have just read the following:

Lock elision in the GNU C library

https://lwn.net/Articles/534758/

So it says the following:

"Lock elision uses the same programming model as normal locks, so it can be directly applied to existing programs. The programmer keeps using locks, but the locks are faster as they can use hardware transactional memory internally for more parallelism. Lock elision uses memory transactions as a fast path, while the slow path is still a normal lock. Deadlocks and other classic locking problems are still possible, because the transactions may fall back to a real lock at any time."

So I think this is not good, because one of the benefits of Transactional Memory is that it solves the deadlock problem, but Lock Elision is prone to deadlock.

More about locks and Transactional Memory..

I have just looked at the following webpage about understanding Transactional Memory performance:

https://www.cs.utexas.edu/users/witchel/pubs/porter10ispass-tm-slides.pdf

And as you are noticing, it says that in practice Transactional Memory is worse than locks at high contention, and that in practice Transactional Memory is 40% worse than locks at 100% contention. This is why I have invented scalable locks and scalable RWLocks; read my following thoughts to notice it:

About beating Moore's Law with software..

bmoore has responded to me the following:

https://groups.google.com/forum/#!topic/soc.culture.china/Uu15FIknU0s

So as you are noticing, he is asking me the following:

"Are you talking about beating Moore's Law with software?"
But I think that there are constraints like the following:

"Modern programing environments contribute to the problem of software bloat by placing ease of development and portable code above speed or memory usage. While this is a sound business model in a commercial environment, it does not make sense where IT resources are constrained. Languages such as Java, C-Sharp, and Python have opted for code portability and software development speed above execution speed and memory usage, while modern data storage and transfer standards such as XML and JSON place flexibility and readability above efficiency."

Read the following:

https://smallwarsjournal.com/jrnl/art/overcoming-death-moores-law-role-software-advances-and-non-semiconductor-technologies

There also remains the following way to beat Moore's Law:

"Improved Algorithms

Hardware improvements mean little if software cannot effectively use the resources available to it. The Army should shape future software algorithms by funding basic research on improved software algorithms to meet its specific needs. The Army should also search for new algorithms and techniques which can be applied to meet specific needs and develop a learning culture within its software community to disseminate this information."

And about scalable algorithms: as you know, I am a white Arab who is an inventor of many scalable algorithms and their implementations; read my following thoughts to notice it:

About my new invention that is a scalable algorithm..

I am a white Arab, and I think I am smarter, and I think I am like a genius, because I have again just invented a new scalable algorithm. But first I will briefly talk about the following best scalable reader-writer lock inventions. The first one is the following:

Scalable Read-mostly Synchronization Using Passive Reader-Writer Locks

https://www.usenix.org/system/files/conference/atc14/atc14-paper-liu.pdf

You will notice that its first weakness is that it is for the TSO hardware memory model, and its second weakness is that the writers' latency is very expensive when there are few readers.

And here is the other best scalable reader-writer lock invention, from Facebook:

"SharedMutex is a reader-writer lock. It is small, very fast, scalable on multi-core"

Read here: https://github.com/facebook/folly/blob/master/folly/SharedMutex.h

But you will notice that the weakness of this scalable reader-writer lock is that its priority can only be configured as the following: SharedMutexReadPriority gives priority to readers, SharedMutexWritePriority gives priority to writers. So the weakness of this scalable reader-writer lock is that you can have starvation with it.

So this is why I have just invented a scalable algorithm that is a scalable reader-writer lock that is better than the above: it is starvation-free and fair, and it has a small writers' latency. So I think mine is the best, and I will sell many of my scalable algorithms to software companies such as Microsoft or Google or Embarcadero..

What is it to be an inventor of many scalable algorithms?

The Holy Grail of parallel programming is to provide good speedup while hiding or avoiding the pitfalls of concurrency. You have to understand this to be able to understand what I am doing: I am an inventor of many scalable algorithms and their implementations, but how can we define the kind of inventor like me?
I think there are the following kinds of inventors: the ones that are PhD researchers and inventors, like Albert Einstein, and the ones that are engineers and inventors, like Nikola Tesla. And I think that I am of the kind of inventor like Nikola Tesla; I am not a PhD researcher like Albert Einstein, I am like an engineer who invented many scalable algorithms and their implementations, so I am like the following inventor that we call Nikola Tesla:

https://en.wikipedia.org/wiki/Nikola_Tesla

But I think that both those PhD researchers that are inventors and those engineers that are inventors are powerful. |
aminer68@gmail.com: Jul 10 01:46PM -0700

Hello,

More about Delphi and FreePascal and C# and Python..

As you have noticed, I am also using Delphi and FreePascal.

Here are some important thoughts to read carefully:

"Delphi XE6 is considerably faster than C# 2013 and Python 3.4 in terms of response time. Python is stronger than the other two languages only in terms of code density. Delphi XE6 uses 50% less memory than C# 2013 and almost 54% less memory than Python. That is to say, the programs coded in Delphi XE6 can run faster by using less memory. The averages of compiled code density obtained from all tests show that Python is the strongest. The averages of compiled code density of Python, Delphi XE6, and C# 2013 are 83%, 55%, and 48% respectively. These figures indicate that code density not used by Python during compilation is low. In other words, Python does not produce many codes other than the program codes to be used for solving the problem. Delphi XE6 is stronger than C# 2013 in terms of code density. When all performance measurements are considered, Delphi XE6 is seen to be stronger than the other two languages. The present study proved that statistically. Python is quite weak in terms of both memory usage and response time. However, Python is distinctive relative to C# 2013 and Delphi XE6 in that it has an indentation-based simple syntax, is compatible with different platforms (e.g. Linux, Pardus, Windows), and is free."

Read more here:

A Performance Comparison Of C# 2013, Delphi XE6, And Python 3.4 Languages

https://pdfs.semanticscholar.org/a8e1/2ac9f4bdb3b47f79df26c7c27cb175afa139.pdf

And more about energy efficiency..

You have to be aware that parallelization of software can lower power consumption, and here is the formula that permits you to calculate the power consumption of "parallel" software programs:

Power consumption of the total cores = (Number of cores) * (1/(Parallel speedup))^3 * (Power consumption of a single core)

For example, by this formula, a program running on 4 cores with a parallel speedup of 3 consumes 4 * (1/3)^3, or approximately 0.15 times, the power of a single core running the sequential program.

Also read the following about energy efficiency:

"Energy efficiency isn't just a hardware problem. Your programming language choices can have serious effects on the efficiency of your energy consumption. We dive deep into what makes a programming language energy efficient."

As the researchers discovered, the CPU-based energy consumption always represents the majority of the energy consumed. What Pereira et al. found wasn't entirely surprising: speed does not always equate to energy efficiency. Compiled languages like C, C++, Rust, and Ada ranked as some of the most energy efficient languages out there, and Java and FreePascal are also good at energy efficiency.

Read more here: https://jaxenter.com/energy-efficient-programming-languages-137264.html

RAM is still expensive and slow relative to CPUs, and "memory" usage efficiency is important for mobile devices. So the Delphi and FreePascal compilers are also still "useful" for mobile devices, because Delphi and FreePascal are good if you are considering time and memory, or energy and memory. The following Pascal benchmark was done with FreePascal, and the benchmark shows that C, Go and Pascal do rather better if you're ranking languages based on time and memory, or energy and memory. Read again here to notice it:

https://jaxenter.com/energy-efficient-programming-languages-137264.html

More about Delphi and FreePascal.. More about compile time and build time..

Look here, about Java it says:

"Java Build Time Benchmarks

I'm trying to get some benchmarks for builds and I'm coming up short via Google.
Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable. Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests). What do your build times look like for similar sized projects and what did you do to make it that fast?"

Read here to notice it:

https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/

So 2 million lines of Java code take about 2 hours to build. And how long do you think 2 million lines of code take to build with Delphi? Answer: just about 20 seconds.

Here is the proof from Embarcadero; read and look at the video to be convinced about Delphi:

https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi

C++ also takes "much" more time to compile than Delphi. This is why I said previously the following: I think Delphi is a single-pass compiler, and it is very fast at compile time, and I think C++ and Java and C# are multi-pass compilers that are much slower than Delphi in compile time, but I think that the executable code generated by Delphi is still fast, and faster than C#.

And what about the advantages and disadvantages of single-pass and multi-pass compilers? From automata theory we get that any Turing machine that does 2 (or more) passes over the tape can be replaced with an equivalent one that makes only 1 pass, with a more complicated state machine. At the theoretical level, they are the same. At a practical level, all modern compilers make only one pass over the source code. The source is typically translated into an internal representation that the different phases analyze and update. During flow analysis, basic blocks are identified. Common subexpressions are found, precomputed, and their results reused. During loop analysis, invariant code is moved out of the loop. During code emission, registers are assigned, and peephole analysis and code reduction are applied.

More about Delphi..

As you have noticed, I am also using Delphi and FreePascal, so read the following to know more about Delphi:

Delphi for iOS and Android: The Natives are restless

Delphi's Strengths and Weaknesses

So what does all this mean for developers? It means that Delphi has its strengths and weaknesses, and as long as you are aware of them, you can choose Delphi for the right jobs. If you need cross-platform compatibility and want to deal with only one code base (mostly), Delphi is an excellent choice. It provides nice abstractions of the OS and its services, a relatively pretty GUI library, and native (very fast) access to the CPU. Delphi also has excellent DB connectivity, web services connectivity, and networking in general. This means Delphi is a good choice for:

- Enterprise developers, who want to provide mobile access and don't really care about pixel-perfect look and feel
- Scientific and number crunching developers, who need fast processing and a way to nicely display their results
- Game developers, surprisingly enough, who want to develop cross-platform games which don't have "native" interfaces anyway and where FMX can provide fast-enough graphics. (In other words, not Madden level graphics but Angry Birds)
- "Light" apps which don't use a bazillion controls to interact with the user and don't need "pixel-perfect" responsiveness
- Compelling apps where the user will forgive some idiosyncrasies because the app is so good. I mention this because Delphi allows you to concentrate on the app.
Delphi provides the RAD and the cross-platform compatibility, so you can concentrate on making the killer app.

Read more here:

http://riversoftavg.com/blogs/index.php/2013/09/28/delphi-for-ios-and-android-the-natives-are-restless/

Pascal still an advantage for some iOS, Android developers

"Many developers wouldn't dream of developing in Delphi with iOS or Android in mind, but with cross-platform compilers, companies sitting on years of solid code may suddenly find themselves with a second wind."

Read more here:

https://www.zdnet.com/article/pascal-still-an-advantage-for-some-ios-android-developers/

NASA is also using Delphi; read about it here:

https://community.embarcadero.com/blogs/entry/want-moreexploration-40857

The European Space Agency is also using Delphi; read about it here:

https://community.embarcadero.com/index.php/blogs/entry/delphi-s-involvement-with-the-esa-rosetta-comet-spacecraft-project-1

Embarcadero launches LearnDelphi.org..

As you have noticed, I am also programming in Delphi and FreePascal, and now I will invite you to read the following news about Delphi:

Embarcadero Launches LearnDelphi.org, a Delphi-Centric Learning Ecosystem, to Promote Delphi Education

Read more here:

https://apnews.com/Business%20Wire/0524383998734d05a34d5900fdfa0058

And here is the new website of LearnDelphi.org:

https://www.learndelphi.org/

What about garbage collection?

Read what this serious specialist called Chris Lattner said:

"One thing that I don't think is debatable is that the heap compaction behavior of a GC (which is what provides the heap fragmentation win) is incredibly hostile for cache (because it cycles the entire memory space of the process) and performance predictability."
"Not relying on GC enables Swift to be used in domains that don't want it - think boot loaders, kernels, real time systems like audio processing, etc." "GC also has several *huge* disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted *much* more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you'll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life)." Read more here: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html Here is Chris Lattner's Homepage: http://nondot.org/sabre/ And here is Chris Lattner's resume: http://nondot.org/sabre/Resume.html#Tesla This why i have invented the following scalable algorithm and its implementation that makes Delphi and FreePascal more powerful: My invention that is my scalable reference counting with efficient support for weak references version 1.37 is here.. Here i am again, i have just updated my scalable reference counting with efficient support for weak references to version 1.37, I have just added a TAMInterfacedPersistent that is a scalable reference counted version, and now i think i have just made it complete and powerful. Because I have just read the following web page: https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations But i don't agree with the writting of the guy of the above web page, because i think you have to understand the "spirit" of Delphi, here is why: A component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means in english: in "most" cases, and this is the most important thing to understand). In that scenario, reference count is not used. If you pass a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns. Therefore, reference counting in TComponent has been removed. Also because i have just added TAMInterfacedPersistent to my invention. To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject that is the scalable reference counted version, and just replace TInterfacedPersistent with my TAMInterfacedPersistent that is the scalable reference counted version, and you will find both my TAMInterfacedObject and my TAMInterfacedPersistent inside the AMInterfacedObject.pas file, and to know how to use weak references please take a look at the demo that i have included called example.dpr and look inside my zip file at the tutorial about weak references, and to know how to use delegation take a look at the demo that i have included called test_delegation.pas, and take a look inside my zip file at the tutorial about delegation that learns you how to use delegation. I think my Scalable reference counting with efficient support for weak references is stable and fast, and it works on both Windows and Linux, and my scalable reference counting scales on multicore and NUMA systems, and you will not find it in C++ or Rust, and i don't think you will find it anywhere, and you have to know that this invention of mine solves the problem of dangling pointers and it solves the problem of memory leaks and my scalable reference counting is "scalable". 
Please also read the readme file inside the zip file, which I have just extended, to understand more.

You can download my new scalable reference counting with efficient support for weak references version 1.37 from:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 10 12:54PM -0700

Hello,

Read my following thoughts about volatile and FreePascal and Delphi..

https://community.idera.com/developer-tools/programming-languages/f/delphi-language/71190/more-about-c-and-object-pascal-languages

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 10 11:43AM -0700

Hello,

More about my new invention of a lock-free bounded LIFO stack algorithm..

I have just invented a lock-free bounded LIFO stack algorithm, and I have just made it work correctly in only one day, so I think version 1.04 is stable now. I think that my new lock-free bounded LIFO stack algorithm is really useful because it is not complicated, so it is easy to reason about, and it doesn't need ABA prevention, it doesn't need hazard pointers, and it doesn't have false sharing. Please look at its source code inside LockfreeStackBounded.pas inside the zip file; in my next posts I will give you the full explanation of my new algorithm.

Lockfree bounded LIFO stack and FIFO queue were updated to version 1.04.

You can read about them and download them from my website here:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 10 10:57AM -0700

Hello,

Lockfree bounded LIFO stack and FIFO queue were updated to version 1.04.

I have just corrected one last thing, and I think they are working correctly now.

You can read about them and download them from my website here:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 10 09:54AM -0700

Hello,

Lockfree bounded LIFO stack and FIFO queue were updated to version 1.03.

I think they are working correctly now.

You can read about them and download them from my website here:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 10 07:46AM -0700

Hello,

Lockfree bounded LIFO stack and FIFO queue were updated to version 1.02.

You can read about them and download them from my website here:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue

Thank you, Amine Moulay Ramdane. |