- Here is my new poem of Love with the song by Men At Work called Down Under.. - 1 Update
- Understanding the spirit of Rust.. - 1 Update
- Which to choose? Delphi or C++ or C# or Java? - 1 Update
- Why am I also programming in Delphi? - 1 Update
- More about Delphi.. - 1 Update
- See What's New in RAD Studio 10.3.2 (It is for Delphi too) - 1 Update
- Cloud Computing in Delphi - 1 Update
aminer68@gmail.com: Jul 24 05:44PM -0700 Hello... Here is my new poem of Love with the song by Men At Work called Down Under.. I invite you to listen to this beautiful song, Men At Work - Down Under, while reading my new poem of Love below (because it is better that way): https://www.youtube.com/watch?v=XfR9iY5y94s I have just decided to write the following poem of Love:

It is a beautiful day my beautiful lover !
Since our love is the beautiful and right answer !
It is a beautiful day my beautiful lover !
Since it is the start of our future full of beautiful flowers !
It is a beautiful day my beautiful lover !
Since our love is strong like a beautiful bodybuilder !
It is a beautiful day my beautiful lover !
Since our love is a necessity, like water !
It is a beautiful day my beautiful lover !
Since the light of love is gushing from our souls like a geyser !
It is a beautiful day my beautiful lover !
Since our love really cares like the beautiful nurses !
It is a beautiful day my beautiful lover !
Since our love is our beautiful shelter !
It is a beautiful day my beautiful lover !
Since our love is like a beautiful civilizer !
It is a beautiful day my beautiful lover !
So our love is like a beautiful perfection without rupture !

Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 24 04:31PM -0700 Hello, Understanding the spirit of Rust.. I think the spirit of Rust is like the spirit of Ada: both are designed for very high standards of safety, like those of Ada. "But" I don't think we have to fear the race conditions that Rust solves, because race conditions are not so difficult to avoid when you are a decently knowledgeable programmer in parallel programming, so you have to understand what I mean. Now we have to talk about the rest of the safety guarantees of Rust. There remains the problem of deadlock, and I don't think Rust solves this problem, but I have provided you with the DelphiConcurrent library for Delphi and Free Pascal that detects deadlocks. There are also the memory safety guarantees of Rust; here they are: 1- No Null Pointer Dereferences 2- No Dangling Pointers 3- No Buffer Overruns But notice that I have solved number 1 and number 2 by inventing my scalable reference counting with efficient support for weak references for Delphi and Free Pascal, read below to notice it, and for number 3 read my following thoughts to understand: More about research and software development.. I have just looked at the following new video: Why is coding so hard... 
https://www.youtube.com/watch?v=TAAXwrgd1U8 I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I give you an example: read the following from the senior research scientist called Dave Dice: Preemption tolerant MCS locks https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much better, and I think mine is the "best", and I think you will not find it anywhere. My new Fast Mutex has the following characteristics: 1- Starvation-free 2- Good fairness 3- It efficiently keeps the cache coherence traffic very low 4- Very good fast-path performance (and it has the same performance as the scalable MCS lock when there is contention) 5- And it has decent preemption tolerance. This is how I am an "inventor", and I have also invented other scalable algorithms, such as a scalable reference counting with efficient support for weak references, a fully scalable Threadpool, and a fully scalable FIFO queue, and I have also invented other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or Google or Embarcadero or such software companies. Read my following writing to know me more: More about computing and parallel computing.. 
The important guarantees of Memory Safety in Rust are: 1- No Null Pointer Dereferences 2- No Dangling Pointers 3- No Buffer Overruns I think I have solved Null Pointer Dereferences, Dangling Pointers, and also memory leaks for Delphi and Free Pascal by inventing my "scalable" reference counting with efficient support for weak references, and I have implemented it in Delphi and Free Pascal, and reference counting in Rust and C++ is "not" scalable. About number (3) above, Buffer Overruns, read here about Delphi and Free Pascal: What's a buffer overflow and how to avoid it in Delphi? http://delphi.cjcsoft.net/viewthread.php?tid=49495 About deadlock and race conditions in Delphi and Free Pascal: I have ported DelphiConcurrent to Free Pascal, and I have also extended both with support for my scalable RWLocks for Windows and Linux and for my scalable lock called MLock for Windows and Linux, and I have also added support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files inside the zip file to understand more. You can download DelphiConcurrent and FreepascalConcurrent for Delphi and Free Pascal from: https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent DelphiConcurrent and FreepascalConcurrent by Moualek Adlene are a new way to build Delphi applications that involve thread-based parallel code, like application servers. DelphiConcurrent provides programmers with the internal mechanisms to write safer multi-threaded code while taking special care of performance and genericity. In concurrent applications a DEADLOCK may occur when two or more threads try to lock two or more shared resources in different orders. 
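DelphiConcurrent itself is a Delphi/Free Pascal library, but the lock-ordering hazard just described is language-agnostic. Here is a minimal sketch in Python (an illustration of the general idea, not the author's implementation): every lock gets a rank, and acquiring locks out of rank order raises an explicit exception instead of risking a silent deadlock.

```python
import threading

class LockOrderError(Exception):
    """Raised when locks are acquired in an order that could deadlock."""

class OrderedLock:
    """A lock with a fixed rank; ranks must be acquired in increasing order."""
    _held = threading.local()  # per-thread stack of held ranks

    def __init__(self, rank, name):
        self.rank, self.name = rank, name
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(OrderedLock._held, "ranks", [])
        if held and held[-1] >= self.rank:
            raise LockOrderError(
                f"acquiring {self.name} (rank {self.rank}) while holding "
                f"rank {held[-1]}: potential deadlock")
        self._lock.acquire()
        OrderedLock._held.ranks = held + [self.rank]

    def release(self):
        # Assumes LIFO release order (release the most recently taken lock first).
        self._lock.release()
        OrderedLock._held.ranks.pop()

# Taking A then B (in rank order) is fine.
a, b = OrderedLock(1, "A"), OrderedLock(2, "B")
a.acquire(); b.acquire(); b.release(); a.release()

# Taking B then A (out of rank order) is refused with an exception,
# which is the "explicit exception instead of a blocking deadlock" idea.
b.acquire()
try:
    a.acquire()
except LockOrderError as e:
    print("detected:", e)  # prints the detected ordering violation
finally:
    b.release()
```

The point of the sketch is only the mechanism: turning a potential deadlock into an immediate, descriptive error at the acquisition site.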
With DelphiConcurrent and FreepascalConcurrent, a DEADLOCK is detected and automatically skipped - before it occurs - and the programmer gets an explicit exception describing the multi-threading problem instead of a blocking DEADLOCK which freezes the application with no output log (and perhaps also freezes the linked client sessions if we are talking about an application server). Amine Moulay Ramdane has extended them with support for his scalable RWLocks for Windows and Linux and for his scalable lock called MLock for Windows and Linux, and he has also added support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files to understand more. And please read the html file inside to learn how to use it. About race conditions now: My scalable Adder is here.. As you have noticed, I have just posted my modified versions of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in parallel programs. But I have also just read the following about how to avoid race conditions in parallel programming in most cases.. Here it is: https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/ This is why I have invented my following powerful scalable Adder to help you do the same as the above; please take a look at its source code to understand more, here it is: https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal Other than that, about composability of lock-based systems now: Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. 
For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking. There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable. Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized. 
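The two strategies described above can be sketched in a few lines (Python is used for illustration; the article's own examples are operating-system kernels and the Solaris AVL tree): the first subsystem keeps its locking entirely internal and never returns to the caller with a lock held, and the second has no locks at all, leaving serialization of each instance to its consumer.

```python
import threading

class Registry:
    """Strategy 1: locking entirely internal to the subsystem.
    Every public method acquires and releases the lock before returning,
    so callers can compose this freely with other lock-based code."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def put(self, key, value):
        with self._lock:        # the lock never escapes this method
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

class UnlockedSet:
    """Strategy 2: no locks and no global state, only per-instance state.
    The caller must serialize access to each instance, but disjoint
    consumers can use their own instances fully in parallel."""

    def __init__(self):
        self._items = set()

    def add(self, item):
        self._items.add(item)

    def __contains__(self, item):
        return item in self._items

r = Registry()
r.put("a", 1)
print(r.get("a"))  # 1
```

Both classes are composable for opposite reasons: the first because its locking is invisible to callers, the second because it imposes no locking policy at all.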
Read more here: https://queue.acm.org/detail.cfm?id=1454462 And about the Message Passing Process Communication Model and the Shared Memory Process Communication Model: An advantage of the shared memory model is that memory communication is faster compared to the message passing model on the same machine. However, the shared memory model may create problems such as synchronization and memory protection that need to be addressed. Message passing's major flaw is the inversion of control: it is a moral equivalent of gotos in unstructured programming (it's about time somebody said that message passing is considered harmful). Also, some research shows that the total effort to write an MPI application is significantly higher than that required to write a shared-memory version of it. And more about my scalable reference counting with efficient support for weak references: My invention, my scalable reference counting with efficient support for weak references version 1.35, is here.. Here I am again: I have just updated my scalable reference counting with efficient support for weak references to version 1.35. I have just added a TAMInterfacedPersistent that is a scalable reference counted version, and now I think I have made it complete and powerful. Because I have just read the following web page: https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations But I don't agree with the writing of the author of the above web page, because I think you have to understand the "spirit" of Delphi; here is why: A component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means: in "most" cases, and this is the most important thing to understand). In that scenario, reference counting is not used. If you pass a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns. Therefore, reference counting in TComponent has been removed. 
Also because I have just added TAMInterfacedPersistent to my invention. To use scalable reference counting with Delphi and Free Pascal, just replace TInterfacedObject with my TAMInterfacedObject, the scalable reference counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, the scalable reference counted version. You will find both my TAMInterfacedObject and my TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To know how to use weak references, please take a look at the demo that I have included called example.dpr and look inside my zip file at the tutorial about weak references, and to know how to use delegation, take a look at the demo that I have included called test_delegation.pas and at the tutorial inside my zip file that teaches you how to use delegation. I think my scalable reference counting with efficient support for weak references is stable and fast, it works on both Windows and Linux, and it scales on multicore and NUMA systems; you will not find it in C++ or Rust, and I don't think you will find it anywhere, and you have to know that this invention of mine solves the problem of dangling pointers and the problem of memory leaks, and my reference counting is "scalable". And please read the readme file inside the zip file, which I have just extended to make you understand more. You can download my new scalable reference counting with efficient support for weak references version 1.35 from: https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references Thank you, Amine Moulay Ramdane. |
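The role weak references play in the post above, letting one object observe another without keeping it alive, which avoids both dangling-pointer bugs and leaks from reference cycles, can be illustrated with Python's standard weakref module. This is a language-agnostic sketch of the semantics, not the author's Delphi TAMInterfacedObject implementation:

```python
import gc
import weakref

class Node:
    """A tree node: parent->child links are strong, the child->parent
    link is weak, so a parent/child cycle cannot leak memory."""

    def __init__(self, name, parent=None):
        self.name = name
        self.children = []           # strong references downward
        # Weak reference upward breaks the reference cycle.
        self._parent = weakref.ref(parent) if parent is not None else None
        if parent is not None:
            parent.children.append(self)

    @property
    def parent(self):
        # Dereferencing a weak reference is checked: once the target
        # has been collected you get None, never a dangling pointer.
        return self._parent() if self._parent is not None else None

root = Node("root")
child = Node("child", root)
print(child.parent.name)   # root

del root                   # drop the only strong reference to the parent
gc.collect()
print(child.parent)        # None: safe access, not a dangling pointer
```

If the upward link were strong, root and child would keep each other alive in a pure reference-counting scheme; the weak link is exactly what makes the count reach zero.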
aminer68@gmail.com: Jul 24 02:47PM -0700 Hello, Which to choose? Delphi or C++ or C# or Java? Read more here to understand more: https://jonlennartaasenden.wordpress.com/2016/10/18/why-c-coders-should-shut-up-about-delphi/ Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 24 12:27PM -0700 Hello, Why am I also programming in Delphi? As you have noticed, I am programming in Delphi. Delphi is not Pascal: Delphi is modern Object Pascal that supports generics and anonymous methods etc., so it is "much" more sophisticated than classic Pascal, and I am also programming in Free Pascal in Delphi mode. So now comes the question: why am I also programming in Delphi? It is because Delphi is now much more productive and much more portable (you have to read my previous posts about Delphi to notice it), and also because I want to make Delphi more powerful, since I have invented many scalable algorithms and implemented them in Delphi; please read my post "Understanding the spirit of Rust.." above to notice it. Thank you, Amine Moulay Ramdane. |
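Among the scalable algorithms mentioned in this digest is the scalable Adder. The usual trick behind such a counter, and the race-condition cure in the Burkov article linked above, is to give each thread its own slot and only combine on read, so concurrent increments never contend on one shared variable. A minimal Python sketch of that sharded design follows (an illustration of the general technique, not the author's Delphi source, which is at the link above):

```python
import threading

class ShardedAdder:
    """A counter split into per-thread shards so concurrent increments
    do not contend on (or race over) a single shared variable."""

    def __init__(self):
        self._shards = {}              # thread id -> per-thread count
        self._lock = threading.Lock()  # guards shard creation and sum()

    def add(self, n=1):
        tid = threading.get_ident()
        if tid not in self._shards:
            with self._lock:
                self._shards.setdefault(tid, 0)
        # Each thread writes only its own shard, so increments don't race.
        self._shards[tid] += n

    def sum(self):
        # Combining on read: take the lock once and total the shards.
        with self._lock:
            return sum(self._shards.values())

adder = ShardedAdder()
threads = [threading.Thread(target=lambda: [adder.add() for _ in range(10000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(adder.sum())  # 40000
```

The trade-off is the classic one for striped counters: writes become cheap and contention-free, while reads pay the cost of combining, which is the right balance when increments vastly outnumber reads.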
aminer68@gmail.com: Jul 24 12:05PM -0700 Hello, More about Delphi.. As you have noticed, I am programming with the "modern" Object Pascal of the Delphi compiler, and here is what Delphi supports (read the full webpage to understand): https://www.embarcadero.com/products/delphi Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 24 11:52AM -0700 Hello, See What's New in RAD Studio 10.3.2 (It is for Delphi too) https://www.youtube.com/watch?v=3tQ9KDpGsRQ Thank you, Amine Moulay Ramdane. |
aminer68@gmail.com: Jul 24 11:48AM -0700 Hello, As you have noticed, I am also programming with the "modern" Object Pascal of Delphi and Free Pascal; also read the following about Cloud Computing in Delphi: Cloud Computing in Delphi RAD Studio provides Cloud components, which allow you to easily use cloud services from Amazon and Microsoft Azure. RAD Studio provides a framework that allows you to build cloud services and easily connect to your back-end services and databases. With the RAD Studio Cloud deployment, you can move your data and services to the Cloud, making your applications accessible from virtually any platform or device, from anywhere in the world. Read more here: http://delphiprogrammingdiary.blogspot.com/2018/10/amazon-web-services-cloud-computing-in.html Thank you, Amine Moulay Ramdane. |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |