- More about Energy efficiency.. - 1 Update
- What about garbage collection? - 1 Update
- More about compile time and build time.. - 1 Update
- My invention that is my efficient Threadpool engine with priorities that scales very well was updated to version 3.85 - 1 Update
- Five reasons to learn Delphi - 1 Update
- My Delphi projects work with C++Builder.. - 5 Updates
- You have to understand my spirit.. - 1 Update
- NASA and European Space Agency are also using Delphi.. - 1 Update
| aminer68@gmail.com: Nov 16 05:23PM -0800 Hello,

More about energy efficiency..

You have to be aware that parallelizing software can lower power consumption, and here is the formula that lets you calculate the power consumption of "parallel" software programs:

Power consumption of all the cores = (number of cores) * (1/(parallel speedup))^3 * (power consumption of a single core)

Also read the following about energy efficiency:

Energy efficiency isn't just a hardware problem. Your programming language choices can have serious effects on the efficiency of your energy consumption. We dive deep into what makes a programming language energy efficient. As the researchers discovered, CPU-based energy consumption always represents the majority of the energy consumed. What Pereira et al. found wasn't entirely surprising: speed does not always equate to energy efficiency. Compiled languages like C, C++, Rust, and Ada ranked as some of the most energy-efficient languages out there, and Java and FreePascal are also good at energy efficiency.

Read more here: https://jaxenter.com/energy-efficient-programming-languages-137264.html

RAM is still expensive and slow relative to CPUs, and "memory" usage efficiency is important for mobile devices. So the Delphi and FreePascal compilers are also still "useful" for mobile devices, because Delphi and FreePascal are good if you are considering time and memory, or energy and memory. The following Pascal benchmark was done with FreePascal, and it shows that C, Go and Pascal do rather better if you're comparing languages on time and memory, or energy and memory. Read again here to notice it:

https://jaxenter.com/energy-efficient-programming-languages-137264.html

Thank you, Amine Moulay Ramdane. |
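A quick numeric check of the formula above may help. The sketch below (Python, for illustration only) encodes the post's model, in which running N cores at a clock reduced by the parallel speedup cuts dynamic power roughly with the cube of frequency — the cubic exponent is the model's assumption, not something derived here:

```python
def total_power(num_cores, parallel_speedup, single_core_power_watts):
    """Power model quoted above: N cores, each clocked down by the
    speedup factor, with power assumed to scale as frequency cubed."""
    return num_cores * (1.0 / parallel_speedup) ** 3 * single_core_power_watts

# A perfect 4x speedup on 4 cores: 4 * (1/4)^3 = 1/16 of the
# single-core figure, so a 100 W core drops to 6.25 W total.
print(total_power(4, 4.0, 100.0))  # -> 6.25

# No parallelism (1 core, speedup 1): the formula reduces to the
# single-core power, as expected.
print(total_power(1, 1.0, 100.0))  # -> 100.0
```

So under this model, a program that parallelizes well can run the same workload at a fraction of the energy by trading clock speed for core count.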
| aminer68@gmail.com: Nov 16 05:23PM -0800 Hello,

What about garbage collection?

Read what this serious specialist, Chris Lattner, said:

"One thing that I don't think is debatable is that the heap compaction behavior of a GC (which is what provides the heap fragmentation win) is incredibly hostile for cache (because it cycles the entire memory space of the process) and performance predictability."

"Not relying on GC enables Swift to be used in domains that don't want it - think boot loaders, kernels, real time systems like audio processing, etc."

"GC also has several *huge* disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted *much* more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you'll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life)."

Read more here: https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20160208/009422.html

Here is Chris Lattner's homepage: http://nondot.org/sabre/

And here is Chris Lattner's resume: http://nondot.org/sabre/Resume.html#Tesla

This is why I have invented the following scalable algorithm and its implementation, which make Delphi and FreePascal more powerful: my scalable reference counting with efficient support for weak references, which I have just updated to version 1.37. I have just added TAMInterfacedPersistent, a scalable reference-counted version, and now I think I have made it complete and powerful.

I did this because I have just read the following web page:

https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations

But I don't agree with the writing of the author of the above web page, because I think you have to understand the "spirit" of Delphi. Here is why: a component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means: in "most" cases, and this is the most important thing to understand). In that scenario, reference counting is not used. If you passed a component as an interface reference, it would be very unfortunate if it was destroyed when the method returns. Therefore, reference counting in TComponent has been removed. This is also why I have added TAMInterfacedPersistent to my invention.

To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, the scalable reference-counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, the scalable reference-counted version. You will find both TAMInterfacedObject and TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To learn how to use weak references, please take a look at the included demo called example.dpr and at the tutorial about weak references inside my zip file; and to learn how to use delegation, take a look at the included demo called test_delegation.pas and at the tutorial about delegation inside my zip file, which teaches you how to use it.

I think my scalable reference counting with efficient support for weak references is stable and fast. It works on both Windows and Linux, it scales on multicore and NUMA systems, and you will not find it in C++ or Rust - I don't think you will find it anywhere. You have to know that this invention of mine solves the problem of dangling pointers and the problem of memory leaks, and it is "scalable". And please read the readme file inside the zip file, which I have just extended to help you understand more.

You can download my new scalable reference counting with efficient support for weak references version 1.37 from:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

Thank you, Amine Moulay Ramdane. |
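For readers who have never used weak references, the toy sketch below (Python, for illustration; it shows only the weak-reference semantics the post relies on, not the scalable counting algorithm itself) demonstrates how a weak back-reference avoids both a reference cycle and a dangling pointer:

```python
import weakref  # stdlib weak-reference support

class Node:
    """A child holds a *weak* reference to its parent, so the pair
    forms no reference cycle and the child can never follow a
    dangling pointer to a destroyed parent."""
    def __init__(self, name):
        self.name = name
        self.parent = None

root = Node("root")
child = Node("child")
child.parent = weakref.ref(root)   # weak back-reference

print(child.parent().name)         # parent still alive -> prints "root"
del root                           # drop the last strong reference
print(child.parent())              # weak ref now safely reports None
```

The key property is the last line: once the referent is destroyed, dereferencing the weak reference yields a well-defined "gone" value instead of undefined behavior.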
| aminer68@gmail.com: Nov 16 05:21PM -0800 Hello,

More about compile time and build time..

Look here at what is said about Java:

"Java Build Time Benchmarks

I'm trying to get some benchmarks for builds and I'm coming up short via Google. Of course, build times will be super dependent on a million different things, but I'm having trouble finding anything comparable. Right now: We've got ~2 million lines of code and it takes about 2 hours for this portion to build (this excludes unit tests). What do your build times look like for similar sized projects and what did you do to make it that fast?"

Read here to notice it: https://www.reddit.com/r/java/comments/4jxs17/java_build_time_benchmarks/

So 2 million lines of Java code take about 2 hours to build. And how long do you think 2 million lines of code take Delphi? Answer: just about 20 seconds. Here is the proof from Embarcadero; read and watch the video to be convinced about Delphi:

https://community.idera.com/developer-tools/b/blog/posts/compiling-a-million-lines-of-code-with-delphi

C++ also takes "much" more time to compile than Delphi. This is why I said previously the following: I think Delphi is a single-pass compiler, and it is very fast at compile time, while C++, Java and C# are multi-pass compilers that are much slower than Delphi at compile time; but I think that the executable code generated by Delphi is still fast, and faster than C#'s.

And what about the advantages and disadvantages of single-pass and multi-pass compilers? From automata theory we get that any Turing machine that makes 2 (or more) passes over the tape can be replaced with an equivalent one that makes only 1 pass, with a more complicated state machine. At the theoretical level, they are the same. At a practical level, all modern compilers make only one pass over the source code. The source is typically translated into an internal representation that the different phases analyze and update. During flow analysis, basic blocks are identified. Common subexpressions are found, precomputed, and their results reused. During loop analysis, invariant code is moved out of the loop. During code emission, registers are assigned, and peephole analysis and code reduction are applied.

Thank you, Amine Moulay Ramdane. |
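To make the single-pass idea concrete, here is a toy one-pass translator (Python, purely illustrative; the postfix token format and opcode names are invented for this sketch). It reads the tokens exactly once, folding constant subexpressions as it goes, in the spirit of the precomputation described above:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def compile_single_pass(tokens):
    """One pass over postfix tokens: literal subexpressions are folded
    eagerly; anything involving a variable is emitted as stack code."""
    code = []
    for tok in tokens:
        if tok.isdigit():
            code.append(("PUSH", int(tok)))
        elif tok in OPS:
            # constant folding: if both operands are literals, precompute
            if len(code) >= 2 and code[-1][0] == "PUSH" and code[-2][0] == "PUSH":
                b = code.pop()[1]
                a = code.pop()[1]
                code.append(("PUSH", OPS[tok](a, b)))
            else:
                code.append(("OP", tok))
        else:
            code.append(("LOAD", tok))   # a variable reference
    return code

# "2 3 * x +" folds 2*3 at compile time, leaving the variable for runtime:
print(compile_single_pass("2 3 * x +".split()))
# -> [('PUSH', 6), ('LOAD', 'x'), ('OP', '+')]
```

A real single-pass compiler like Delphi's is of course vastly more sophisticated, but the shape is the same: each token is consumed once and work is done immediately rather than deferred to a later pass.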
| aminer68@gmail.com: Nov 16 04:33PM -0800 Hello,

My invention, my efficient threadpool engine with priorities that scales very well, has been updated to version 3.85; it also comes with a ParallelFor() that scales very well.

You can download the new version from:

https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

Thank you, Amine Moulay Ramdane. |
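The author's Delphi engine is only available from the link above; as a rough, language-neutral sketch of the same two ideas (priority-ordered dispatch plus a parallel for), here is a minimal Python version built on the standard library. All class and method names are invented for this sketch, and it makes no attempt at the scalability properties claimed for the real engine:

```python
import threading
import queue

class PriorityPool:
    """Toy priority thread pool: lower priority number runs first."""
    def __init__(self, workers=4):
        self.q = queue.PriorityQueue()
        self.seq = 0   # tie-breaker so equal-priority tasks never compare functions
        for _ in range(workers):
            threading.Thread(target=self._run, daemon=True).start()

    def submit(self, priority, fn, *args):
        # note: seq is only safe because a single thread submits in this sketch
        self.seq += 1
        self.q.put((priority, self.seq, fn, args))

    def _run(self):
        while True:
            _, _, fn, args = self.q.get()
            fn(*args)
            self.q.task_done()

    def parallel_for(self, lo, hi, body, priority=0):
        """Run body(i) for i in [lo, hi) on the pool, then wait for all."""
        for i in range(lo, hi):
            self.submit(priority, body, i)
        self.q.join()   # block until every index has been processed

pool = PriorityPool()
out = [0] * 8
pool.parallel_for(0, 8, lambda i: out.__setitem__(i, i * i))
print(out)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Each worker writes to a distinct index, so the parallel loop needs no locking of its own — the same discipline a real ParallelFor() encourages.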
| aminer68@gmail.com: Nov 16 11:53AM -0800 Hello, Five reasons to learn Delphi https://jonlennartaasenden.wordpress.com/2019/02/08/5reasons/ Thank you, Amine Moulay Ramdane. |
| Wisdom90 <d@d.d>: Nov 16 11:23AM -0500 Hello, My Delphi projects work with C++Builder.. Here is C++Builder: https://www.embarcadero.com/products/cbuilder Here is how to use my Delphi projects with C++Builder: Mixing Delphi and C++ https://community.idera.com/developer-tools/b/blog/posts/mixing-delphi-and-c Thank you, Amine Moulay Ramdane. |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 16 05:31PM +0100 On 16.11.2019 at 17:23, Wisdom90 wrote:

> Here is how to use my Delphi projects with C++Builder:
> Mixing Delphi and C++
> https://community.idera.com/developer-tools/b/blog/posts/mixing-delphi-and-c

No one has the weird idea of mixing Delphi and C++. And often there is code with generic classes which can't be mixed. Accept it or not: Delphi is not and will never be relevant in the world of software development. By the time Delphi hit the market, the train for Pascal derivatives had already left. |
| Wisdom90 <d@d.d>: Nov 16 11:34AM -0500 On 11/16/2019 11:31 AM, Bonita Montero wrote:

> and will never be relevant in the world of sw-development. At the time
> when Delphi hit the market, the train for Pascal-derivatives had already
> come to an end.

You are not understanding; C++ is easy for me, I can learn it much more easily and quickly.

Thank you, Amine Moulay Ramdane. |
| Bonita Montero <Bonita.Montero@gmail.com>: Nov 16 05:42PM +0100 >> come to an end. > You are not understanding, C++ is easy for me, i can learn it much more > easily and quickly. C++ isn't an easy language and even if you have been programming for a long time you will hit subtleties again and again. |
| aminer68@gmail.com: Nov 16 09:28AM -0800 On Saturday, November 16, 2019 at 11:42:34 AM UTC-5, Bonita Montero wrote:

> > easily and quickly.
> C++ isn't an easy language and even if you have been programming for a
> long time you will hit subtleties again and again.

I don't like C++, because it has inherited from C and is still a lower-level language: it is not even strongly typed, and it favors speed, which is why it is not as high-level as other languages. I understand you, but I think you are not being realistic. I think that Delphi is still powerful, since I have worked with it; what I also like in Delphi is that it is a strongly typed language, whereas C++ has inherited from C and is not as strongly typed as Delphi, and Delphi is also good at readability and more.

Read the rest:

Look at my "sportive" spirit..

About my scalable algorithm inventions..

I am a white arab, and I am a gentleman type of person, and I think that you know me too by the poetry that I wrote in front of you and posted here, but I am also a more serious computer developer, and I am also an inventor who has invented many scalable algorithms; read about them in my writing below.

Here is my latest scalable algorithm invention; read what I have just responded in comp.programming.threads:

About my LRU scalable algorithm..

On 10/16/2019 7:48 AM, Bonita Montero wrote on comp.programming.threads:

> in locked mode in very rare cases. And as I said inserting and
> flushing is conventional locked access.
> So the quest is for you: Can you guess what I did?

And here is what I have just responded: I think I am also smart, so I have quickly found a solution that is scalable and that is not your solution; it needs my hashtable that is scalable and my fully scalable FIFO queue that I have invented. And I think I will not patent it.
But my solution is not lock-free; it uses locks in a lock-striping manner, and it is scalable. And read about my other scalable algorithm inventions in my writing below:

About the buffer overflow problem..

I wrote yesterday about buffer overflow in Delphi and FreePascal. I think there is a "higher" abstraction in Delphi and FreePascal that does the job of avoiding buffer overflow very well, and it is the TMemoryStream class, since it also behaves like a pointer and it supports ReallocMem() and FreeMem() on the pointer, but with a higher-level abstraction. Look for example at my following example in Delphi and FreePascal: you will notice that, contrary to raw pointers, the memory stream adapts with WriteBuffer() without the need to reserve the memory first, and this is why it avoids the buffer overflow problem. Read the following example to notice how I am using it with a PAnsiChar type:

========================================

Program test;

uses
  System.Classes, System.SysUtils;

var
  P: PAnsiChar;
  mem: TMemoryStream;

begin
  P := 'Amine';
  mem := TMemoryStream.Create;
  mem.Position := 0;
  // WriteBuffer() grows the stream as needed, so nothing is overrun;
  // 6 bytes copies 'Amine' plus its terminating #0
  mem.WriteBuffer(Pointer(P)^, 6);
  mem.Position := 0;
  Writeln(PAnsiChar(mem.Memory));
  mem.Free;
end.

========================================

So since Delphi and FreePascal also detect buffer overflow on dynamic arrays, I think that Delphi and FreePascal are powerful tools. Read my previous thoughts below to understand more:

I have just read the following webpage about "Fearless Security: Memory safety":

https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/

Here are the memory safety problems:

1- Misusing Free (use-after-free, double free): I have solved this in Delphi and FreePascal by inventing a "scalable" reference counting with efficient support for weak references. Read below about it.

2- Uninitialized variables: this can be detected by the compilers of Delphi and FreePascal.
3- Dereferencing null pointers: I have solved this in Delphi and FreePascal by inventing a "scalable" reference counting with efficient support for weak references. Read below about it.

4- Buffer overflow and underflow: this has been solved in Delphi by using madExcept; read here about it: http://help.madshi.net/DebugMm.htm and you can buy it from here: http://www.madshi.net/

And about race conditions and deadlock problems and more, read my following thoughts to understand:

I will reformulate more clearly what race condition detection in Rust is about, so read it carefully: you can think of the borrow checker of Rust as a validator for a locking system: immutable references are shared read locks and mutable references are exclusive write locks. Under this mental model, accessing data via two independent write locks is not a safe thing to do, and modifying data via a write lock while there are readers alive is not safe either. So as you notice, the "mutable" references in Rust follow the read-write lock pattern, and this is not good, because it is not like more fine-grained parallelism that permits us to run the writes in "parallel" and gain more performance from parallelizing the writes.

Read more about Rust and Delphi and my inventions..
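The locking discipline described in the borrow-checker analogy above can be sketched directly. The toy reader-writer lock below (Python, illustrative only; names invented for the sketch) enforces exactly that rule: any number of shared readers, or one exclusive writer, never both at once:

```python
import threading

class RWLock:
    """Many shared readers OR one exclusive writer -- the same discipline
    Rust's borrow checker enforces at compile time (&T vs &mut T)."""
    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0
        self.writer = False

    def acquire_read(self):
        with self.cond:
            while self.writer:            # readers wait out any writer
                self.cond.wait()
            self.readers += 1

    def release_read(self):
        with self.cond:
            self.readers -= 1
            if self.readers == 0:
                self.cond.notify_all()

    def acquire_write(self):
        with self.cond:
            while self.writer or self.readers:   # writer needs exclusivity
                self.cond.wait()
            self.writer = True

    def release_write(self):
        with self.cond:
            self.writer = False
            self.cond.notify_all()
```

Note, as the post argues, that this discipline serializes all writers against each other; running independent writes in parallel requires splitting the data into finer-grained pieces, each guarded separately.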
I think the spirit of Rust is like the spirit of Ada: they are especially designed for very high standards of safety, like those of Ada. "But" I don't think we have to fear the race conditions that Rust solves, because I think race conditions are not so difficult to avoid when you are a decently knowledgeable programmer in parallel programming, so you have to understand what I mean. Now we have to talk about the rest of the safety guarantees of Rust. There remains the problem of deadlocks, and I think that Rust is not solving this problem, but I have provided you with an enhanced DelphiConcurrent library for Delphi and FreePascal that detects deadlocks. And there are also the memory safety guarantees of Rust; here they are:

1- No null pointer dereferences
2- No dangling pointers
3- No buffer overruns

But notice that I have solved number 1 and number 2 by inventing my scalable reference counting with efficient support for weak references for Delphi and FreePascal (read below to notice it), and for number 3 read my following thoughts to understand:

More about research and software development..

I have just looked at the following new video:

Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I am also inventing effective abstractions. I will give you an example: read the following by the senior research scientist called Dave Dice:

Preemption tolerant MCS locks
https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you notice, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new fast mutex that is adaptive and that is much, much better, and I think mine is the "best", and I think you will not find it anywhere. My new fast mutex has the following characteristics:

1- Starvation-free
2- Good fairness
3- It keeps the cache-coherence traffic efficiently very low
4- Very good fast-path performance (it has the same performance as the scalable MCS lock when there is contention)
5- And it has decent preemption tolerance

This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, a fully scalable threadpool, a fully scalable FIFO queue, and other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or to Google or Embarcadero or other such software companies. Read my following writing to know me more:

More about computing and parallel computing..
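The author's fast mutex is not published in this post, so no attempt is made to reproduce it here. As background for the characteristics listed above, the generic "spin briefly, then block" pattern below (Python, illustrative; all names invented for the sketch) shows what an adaptive fast path means in practice — cheap when the lock is free or contention is short, and no wasted CPU when contention is long:

```python
import threading

class AdaptiveMutex:
    """Generic adaptive lock sketch (not the author's algorithm):
    spin a few times on the fast path, then fall back to a blocking
    acquire so waiters sleep instead of burning CPU."""
    SPIN_LIMIT = 100

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        for _ in range(self.SPIN_LIMIT):     # optimistic fast path
            if self._lock.acquire(blocking=False):
                return
        self._lock.acquire()                 # slow path: block until free

    def release(self):
        self._lock.release()

# Sanity check: 4 threads each add 10000 to a shared counter.
counter = 0
mutex = AdaptiveMutex()

def work():
    global counter
    for _ in range(10000):
        mutex.acquire()
        counter += 1
        mutex.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # -> 40000
```

Properties like starvation-freedom, fairness, and low cache-coherence traffic are exactly what this naive sketch does not guarantee — they are what distinguish a carefully designed mutex from the pattern shown here.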
The important memory safety guarantees in Rust are:

1- No null pointer dereferences
2- No dangling pointers
3- No buffer overruns

I think I have solved null pointer dereferences and dangling pointers, and also memory leaks, for Delphi and FreePascal by inventing my "scalable" reference counting with efficient support for weak references, and I have implemented it in Delphi and FreePascal (read about it below); and reference counting in Rust and C++ is "not" scalable. About (3) above, buffer overruns, read my thoughts above about what a buffer overflow is and how to avoid it in Delphi and FreePascal.

About deadlocks and race conditions in Delphi and FreePascal: I have ported DelphiConcurrent to FreePascal, and I have also extended both libraries with support for my scalable RWLocks for Windows and Linux, with support for my scalable lock called MLock for Windows and Linux, and with support for a mutex for Windows and Linux. Please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files inside the zip file to understand more, and please read the html file inside to learn how to use them.

You can download DelphiConcurrent and FreepascalConcurrent for Delphi and FreePascal from:

https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

DelphiConcurrent and FreepascalConcurrent by Moualek Adlene are a new way to build Delphi applications which involve parallel code based on threads, like application servers. DelphiConcurrent provides programmers with the internal mechanisms to write safer multi-threaded code while taking special care of performance and genericity.

In concurrent applications, a deadlock may occur when two or more threads try to lock two or more consecutive shared resources in a different order. With DelphiConcurrent and FreepascalConcurrent, a deadlock is detected and automatically skipped - before it occurs - and the programmer gets an explicit exception describing the multi-threading problem, instead of a blocking deadlock which freezes the application with no output log (and perhaps also the linked client sessions, if we are talking about an application server).

About race conditions now: my scalable adder is here.. As you have noticed, I have just posted my modified versions of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in parallel programs. But I have also just read the following about how to avoid race conditions in parallel programming in most cases:

https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/

This is why I have invented my following powerful scalable adder to help you do the same as the above; please take a look at its source code to understand more, here it is:

https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal

Other than that, about the composability of lock-based systems now:

Design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."[9] The claim, of course, is incorrect.
For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state - subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems - the only constraint is that manipulation of a single AVL tree instance must be serialized.
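The second pattern above (no global state, locking left to the client) is easy to demonstrate. In the sketch below (Python, illustrative only; a plain set stands in for the Solaris AVL tree), each subsystem serializes access to its own instance, so the component composes without knowing anything about locking:

```python
import threading

class AvlLikeTree:
    """Sketch of the 'no global state' pattern: all state is per
    instance and the structure takes no locks itself; each *caller*
    decides how to serialize access to its own instance."""
    def __init__(self):
        self.items = set()        # per-instance state only

    def insert(self, key):
        self.items.add(key)

# Two disjoint subsystems use the component concurrently -- composable,
# because each guards only its own instance with its own lock.
tree_a, lock_a = AvlLikeTree(), threading.Lock()
tree_b, lock_b = AvlLikeTree(), threading.Lock()

def worker(tree, lock, keys):
    for k in keys:
        with lock:                # locking is the client's job
            tree.insert(k)

threads = [threading.Thread(target=worker, args=(tree_a, lock_a, range(100))),
           threading.Thread(target=worker, args=(tree_b, lock_b, range(100)))]
for t in threads: t.start()
for t in threads: t.join()
print(len(tree_a.items), len(tree_b.items))  # -> 100 100
```

Because the two instances share nothing, the two threads never contend with each other — the constraint is only that each single instance is manipulated serially.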
Read more from the above article here: https://queue.acm.org/detail.cfm?id=1454462

And about the message passing process communication model and the shared memory process communication model: an advantage of the shared memory model is that memory communication is faster compared to the message passing model on the same machine. Read the following to notice it:

Why did Windows NT move away from the microkernel?

"The main reason that Windows NT became a hybrid kernel is speed. A microkernel-based system puts only the bare minimum system components in the kernel and runs the rest of them as user mode processes, known as servers. A form of inter-process communication (IPC), usually message passing, is used for communication between servers and the kernel. Microkernel-based systems are more stable than others; if a server crashes, it can be restarted without affecting the entire system, which couldn't be done if every system component was part of the kernel. However, because of the overhead incurred by IPC and context-switching, microkernels are slower than traditional kernels. Due to the performance costs of a microkernel, Microsoft decided to keep the structure of a microkernel, but run the system components in kernel space. Starting in Windows Vista, some drivers are also run in user mode."

More about message passing..

Again, an advantage of the shared memory model is that memory communication is faster compared to the message passing model on the same machine. Read the following to notice it:

"One problem that plagues microkernel implementations is relatively poor performance. The message-passing layer that connects different operating system components introduces an extra layer of machine instructions. The machine instruction overhead introduced by the message-passing subsystem manifests itself as additional execution time. In a monolithic system, if a kernel component needs to talk to another component, it can make direct function calls instead of going through a third party."

However, the shared memory model may create problems such as synchronization and memory protection that need to be addressed. Message passing's major flaw is the inversion of control - it is a moral equivalent of gotos in un-structured programming (it's about time somebody said that message |
| aminer68@gmail.com: Nov 16 08:52AM -0800 Hello,

You have to understand my spirit..

I want to make Delphi much more powerful by inventing scalable algorithms and implementing them in Delphi. Here is one of my scalable algorithms and its implementation; read about it here:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

Thank you, Amine Moulay Ramdane. |
| aminer68@gmail.com: Nov 16 08:27AM -0800 Hello,

NASA and the European Space Agency are also using Delphi..

I am not stupid to be using the Delphi and FreePascal compilers, and of course I am also using their modern Object Pascal (and the Delphi mode of FreePascal). Here is the proof that Delphi is a serious compiler:

NASA is also using Delphi; read about it here:

https://community.embarcadero.com/blogs/entry/want-moreexploration-40857

The European Space Agency is also using Delphi; read about it here:

https://community.embarcadero.com/index.php/blogs/entry/delphi-s-involvement-with-the-esa-rosetta-comet-spacecraft-project-1

Read more here:

https://glooscapsoftware.blogspot.com/2017/05/software-made-with-delphi-how-do-you.html

Thank you, Amine Moulay Ramdane. |
| You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.programming.threads+unsubscribe@googlegroups.com. |