- Read again, i correct about Fearless Security: Memory Safety.. - 1 Update
- Is it possible to use std::cout for binary output? - 9 Updates
- "GotW-ish: The 'clonable' pattern" by Herb Sutter - 2 Updates
- Best book to learn C++11 and C++14? - 1 Update
- Putting two colons before std -- is it ever needed? - 4 Updates
- Lock-free LRU-cache-algorithm - 2 Updates
aminer68@gmail.com: Sep 25 03:39PM -0700 Hello, Read again, I have corrected my post about Fearless Security: Memory Safety..

I have just read the following webpage about "Fearless Security: Memory Safety": https://hacks.mozilla.org/2019/01/fearless-security-memory-safety/

Here are the memory safety problems:

1- Misusing Free (use-after-free, double free): I have solved this in Delphi and Freepascal by inventing a "scalable" reference counting with efficient support for weak references. Read below about it.

2- Uninitialized variables: this can be detected by the Delphi and Freepascal compilers.

3- Dereferencing null pointers: I have solved this in Delphi and Freepascal with the same "scalable" reference counting with efficient support for weak references. Read below about it.

4- Buffer overflow and underflow: this has been addressed in Delphi by using madExcept, read about it here: http://help.madshi.net/DebugMm.htm You can buy it from here: http://www.madshi.net/

And about race conditions, deadlocks and more, read my following thoughts to understand. I will restate more clearly what race condition detection in Rust amounts to, so read it carefully: you can think of the Rust borrow checker as a validator for a locking system, where immutable references are shared read locks and mutable references are exclusive write locks. Under this mental model, accessing data through two independent write locks is not safe, and modifying data through a write lock while readers are alive is not safe either. So notice that "mutable" references in Rust follow the read-write lock pattern, and this is not good, because it is unlike more fine-grained parallelism that permits us to run the writes in "parallel" and gain more performance from parallelizing the writes.

Read more about Rust and Delphi and my inventions.. I think the spirit of Rust is like the spirit of Ada: both are designed for very high standards of safety. But I don't think we have to fear the race conditions that Rust solves, because race conditions are not so difficult to avoid when you are a decent, knowledgeable parallel programmer. Now for the rest of Rust's safety guarantees: there remains the problem of deadlocks, and I think Rust does not solve it, but I have provided you with an enhanced DelphiConcurrent library for Delphi and Freepascal that detects deadlocks. There are also the memory safety guarantees of Rust, which are: 1- No null pointer dereferences 2- No dangling pointers 3- No buffer overruns. Notice that I have solved numbers 1 and 2 by inventing my scalable reference counting with efficient support for weak references for Delphi and Freepascal (read below to see it), and for number 3 read my following thoughts to understand.

More about research and software development.. I have just watched the following new video: Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I understand this video, but I have to explain my work: I am not like the techlead in the video above, because I am also an "inventor" who has invented many scalable algorithms and their implementations, and I also invent effective abstractions. I will give you an example: read the following from the senior research scientist called Dave Dice: Preemption tolerant MCS locks https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks As you can see, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics. This is why I have just invented a new Fast Mutex that is adaptive and much better, and I think mine is the "best" and that you will not find it anywhere else. My new Fast Mutex has the following characteristics: 1- Starvation-free 2- Good fairness 3- It keeps the cache coherence traffic efficiently very low 4- Very good fast-path performance (it has the same performance as the scalable MCS lock when there is contention) 5- And it has decent preemption tolerance. This is how I am an "inventor", and I have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, a fully scalable threadpool, a fully scalable FIFO queue, and other scalable algorithms and their implementations, and I think I will sell some of them to Microsoft or Google or Embarcadero or similar software companies. Read my following writing to know me more:

More about computing and parallel computing.. The important memory safety guarantees in Rust are: 1- No null pointer dereferences 2- No dangling pointers 3- No buffer overruns. I think I have solved null pointer dereferences, dangling pointers and memory leaks for Delphi and Freepascal by inventing my "scalable" reference counting with efficient support for weak references, and I have implemented it in Delphi and Freepascal (read about it below), whereas reference counting in Rust and C++ is "not" scalable. About number 3 above, buffer overruns, read here about Delphi and Freepascal: What's a buffer overflow and how to avoid it in Delphi? Read my thoughts above about it.

About deadlocks and race conditions in Delphi and Freepascal: I have ported DelphiConcurrent to Freepascal, and I have extended both with the support of my scalable RWLocks for Windows and Linux and of my scalable lock called MLock for Windows and Linux, and I have also added the support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files inside the zip file to understand more. You can download DelphiConcurrent and FreepascalConcurrent for Delphi and Freepascal from: https://sites.google.com/site/scalable68/delphiconcurrent-and-freepascalconcurrent

DelphiConcurrent and FreepascalConcurrent by Moualek Adlene are a new way to build Delphi applications which involve parallel code based on threads, like application servers. DelphiConcurrent provides programmers with the internal mechanisms to write safer multi-threaded code while taking special care of performance and genericity. In concurrent applications a DEADLOCK may occur when two or more threads try to lock two or more consecutive shared resources but in a different order.
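To make the lock-ordering problem concrete, here is a minimal sketch in standard C++ (a generic illustration only, not the DelphiConcurrent API; the mutex and function names are invented):

    #include <mutex>

    std::mutex a, b;

    void worker1() {
        std::lock_guard<std::mutex> la(a);   // locks a, then b
        std::lock_guard<std::mutex> lb(b);
        // ... use both shared resources ...
    }

    void worker2() {
        std::lock_guard<std::mutex> lb(b);   // locks b, then a: the opposite order,
        std::lock_guard<std::mutex> la(a);   // so worker1 and worker2 can deadlock
        // ... use both shared resources ...
    }

    void worker_fixed() {
        std::scoped_lock both(a, b);         // C++17: acquires both without deadlock
        // ... use both shared resources ...  (pre-C++17: std::lock(a, b))
    }

Acquiring every pair of locks in one agreed order (or through std::scoped_lock) makes the cycle impossible; a detecting wrapper can instead raise an error when it observes the inconsistent order.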
With DelphiConcurrent and FreepascalConcurrent, a DEADLOCK is detected and automatically skipped - before it occurs - and the programmer gets an explicit exception describing the multi-thread problem instead of a blocking DEADLOCK which freezes the application with no output log (and perhaps also freezes the linked client sessions if we are talking about an application server). Amine Moulay Ramdane has extended them with the support of his scalable RWLocks for Windows and Linux and of his scalable lock called MLock for Windows and Linux, and he has also added the support for a Mutex for Windows and Linux; please look inside the DelphiConcurrent.pas and FreepascalConcurrent.pas files to understand more. And please read the html file inside to learn how to use it.

About race conditions now: my scalable Adder is here.. As you have noticed, I have just posted my modified versions of DelphiConcurrent and FreepascalConcurrent to deal with deadlocks in parallel programs, but I have also just read the following about how to avoid race conditions in parallel programming in most cases. Here it is: https://vitaliburkov.wordpress.com/2011/10/28/parallel-programming-with-delphi-part-ii-resolving-race-conditions/ This is why I have invented my following powerful scalable Adder to help you do the same as the above; please take a look at its source code to understand more, here it is: https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal

Other than that, about the composability of lock-based systems now: design your systems to be composable. Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible." The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking.

There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable.

Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state - subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel.
As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems - the only constraint is that manipulation of a single AVL tree instance must be serialized. Read more here: https://queue.acm.org/detail.cfm?id=1454462

And about the Message Passing and Shared Memory process communication models: an advantage of the shared memory model is that memory communication is faster than the message passing model on the same machine. Read the following to see it:

Why did Windows NT move away from the microkernel? "The main reason that Windows NT became a hybrid kernel is speed. A microkernel-based system puts only the bare minimum system components in the kernel and runs the rest of them as user mode processes, known as servers. A form of inter-process communication (IPC), usually message passing, is used for communication between servers and the kernel. Microkernel-based systems are more stable than others; if a server crashes, it can be restarted without affecting the entire system, which couldn't be done if every system component was part of the kernel. However, because of the overhead incurred by IPC and context-switching, microkernels are slower than traditional kernels. Due to the performance costs of a microkernel, Microsoft decided to keep the structure of a microkernel, but run the system components in kernel space. Starting in Windows Vista, some drivers are also run in user mode."

More about message passing.. Again, an advantage of the shared memory model is that memory communication is faster than the message passing model on the same machine. Read the following to see it: "One problem that plagues microkernel implementations is relatively poor performance. The message-passing layer that connects different operating system components introduces an extra layer of machine instructions. The machine instruction overhead introduced by the message-passing subsystem manifests itself as additional execution time. In a monolithic system, if a kernel component needs to talk to another component, it can make direct function calls instead of going through a third party."

However, the shared memory model may create problems such as synchronization and memory protection that need to be addressed. Message passing's major flaw is the inversion of control - it is a moral equivalent of gotos in unstructured programming (it's about time somebody said that message passing is considered harmful). Also, some research shows that the total effort to write an MPI application is significantly higher than that required to write a shared-memory version of it.

And more about my scalable reference counting with efficient support for weak references: my invention, my scalable reference counting with efficient support for weak references version 1.37, is here.. Here I am again: I have just updated my scalable reference counting with efficient support for weak references to version 1.37, I have just added a TAMInterfacedPersistent that is a scalable reference counted version, and now I think I have made it complete and powerful.
Because I have just read the following web page: https://www.codeproject.com/Articles/1252175/Fixing-Delphis-Interface-Limitations But I don't agree with the writing of the author of the above web page, because I think you have to understand the "spirit" of Delphi, and here is why: a component is supposed to be owned and destroyed by something else, "typically" a form (and "typically" means: in "most" cases, and this is the most important thing to understand). In that scenario, the reference count is not used. If you pass a component as an interface reference, it would be very unfortunate if it were destroyed when the method returns. Therefore, reference counting in TComponent has been removed. Also, I have just added TAMInterfacedPersistent to my invention.

To use scalable reference counting with Delphi and FreePascal, just replace TInterfacedObject with my TAMInterfacedObject, the scalable reference counted version, and replace TInterfacedPersistent with my TAMInterfacedPersistent, the scalable reference counted version; you will find both my TAMInterfacedObject and my TAMInterfacedPersistent inside the AMInterfacedObject.pas file. To learn how to use weak references, please take a look at the demo that I have included called example.dpr and look inside my zip file at the tutorial about weak references, and to learn how to use delegation, take a look at the demo that I have included called test_delegation.pas and at the tutorial about delegation inside my zip file that teaches you how to use delegation.

I think my scalable reference counting with efficient support for weak references is stable and fast, it works on both Windows and Linux, and it scales on multicore and NUMA systems; you will not find it in C++ or Rust, and I don't think you will find it anywhere else. You have to know that this invention of mine solves the problem of dangling pointers and the problem of memory leaks, and my scalable reference counting is "scalable". And please read the readme file inside the zip file, which I have just extended to make you understand more. You can download my new scalable reference counting with efficient support for weak references version 1.37 from: https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

And now I will talk about data dependency and parallel loops.. For a loop to be parallelized, every iteration must be independent of the others, and one way to be sure of it is to execute the loop in the direction of the |
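As a small illustration of that independence requirement, here is a hedged sketch in standard C++ (not the author's Delphi code; the function name is invented): the first loop has a loop-carried dependency and must stay sequential, while the second has independent iterations and may be run in parallel, for example with the C++17 parallel algorithms.

    #include <algorithm>
    #include <cstddef>
    #include <execution>
    #include <vector>

    // Assumes a and b have the same size.
    void example(std::vector<double>& a, const std::vector<double>& b) {
        // Loop-carried dependency: iteration i reads the result of iteration i-1,
        // so the iterations are not independent and the loop must stay sequential.
        for (std::size_t i = 1; i < a.size(); ++i)
            a[i] = a[i - 1] + b[i];

        // Independent iterations: each output depends only on its own input,
        // so the work may be spread across threads (C++17 parallel algorithms).
        std::transform(std::execution::par, b.begin(), b.end(), a.begin(),
                       [](double x) { return 2.0 * x; });
    }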
Melzzzzz <Melzzzzz@zzzzz.com>: Sep 25 05:38PM > text mode and I get \n -> \r\n in the output. > Is there a way to switch std::cout to behave as if I did > std::ofstream(fileName, std::ios_base::binary)? Google the `write` member function of ostream... -- press any key to continue or any other to quit... There is nothing I enjoy as much as my status as an INVALID -- Zli Zec In the Wild West there actually wasn't that much violence, precisely because everyone was armed. -- Mladen Gogala |
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 25 10:46AM -0700 On Wednesday, September 25, 2019 at 7:16:30 AM UTC+1, Paavo Helde wrote: > std::cout << "\n"; > } > This solution is Windows/MSVC-specific, but so is the problem. Also check out this: https://stackoverflow.com/questions/2273330/restore-the-state-of-stdcout-after-manipulating-it And look at the second answer that uses boost. |
legalize+jeeves@mail.xmission.com (Richard): Sep 25 07:02PM [Please do not mail me a copy of your followup] Melzzzzz <Melzzzzz@zzzzz.com> spake the secret code >> Is there a way to switch std::cout to behave as if I did >> std::ofstream(fileName, std::ios_base::binary)? >Google `write` member function of ostream... Unfortunately that doesn't work. Neither does the put method. You still get EOL translation. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
legalize+jeeves@mail.xmission.com (Richard): Sep 25 07:03PM [Please do not mail me a copy of your followup] Frederick Gotham <cauldwell.thomas@gmail.com> spake the secret code >Also check out this: >https://stackoverflow.com/questions/2273330/restore-the-state-of-stdcout-after-manipulating-it Yep, already looked there. The binary/text state of the stream isn't something you can change with an iostream manipulator, and isn't something you can change with these flags. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Sep 25 07:13PM On Wed, 2019-09-25, Paavo Helde wrote: > std::cout << "\n"; > } > This solution is Windows/MSVC-specific, but so is the problem. Is it, really? The language is aware of the text/binary distinction, so I bet it specifies which one std::cout is, and whether you can portably do anything about it. (Personally I'm happy to be Unix-specific in my code, so I haven't had reason to look into this in detail. In my context, text and binary are the same thing.) /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 25 04:09PM -0400 On 9/25/19 3:13 PM, Jorgen Grahn wrote: > Is it, really? The language is aware of the text/binary distinction, > so I bet it specifies which one std::cout is, and whether you can > portably do anything about it. "The object cout controls output to a stream buffer associated with the object stdout, declared in <cstdio> (30.11.1)." (30.4.3p3) "1 The contents and meaning of the header <cstdio> are the same as the C standard library header <stdio.h>." (30.11.1p1) " stderr stdin stdout which are expressions of type ''pointer to FILE'' that point to the FILE objects associated, respectively, with the standard error, input, and output streams." (C2011 7.21.1p3). The C standard defines two functions that can set the mode of a stream: fopen() and freopen() (Annex K adds fopen_s() and freopen_s()). Since stdout starts out already open, freopen() is your only option. For freopen(filename, mode, stream), section 7.21.5.4 of the C standard says: "2 The freopen function opens the file whose name is the string pointed to by filename and associates the stream pointed to by stream with it. The mode argument is used just as in the fopen function. 272) 3 If filename is a null pointer, the freopen function attempts to change the mode of the stream to that specified by mode, as if the name of the file currently associated with the stream had been used. It is implementation-defined which changes of mode are permitted (if any), and under what circumstances." |
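In other words, the only standard-blessed attempt looks roughly like the sketch below; whether the text-to-binary mode change is actually honoured is implementation-defined, and as the follow-ups in this thread note, MSVC refuses it.

    #include <cstdio>
    #include <iostream>

    int main() {
        std::cout.flush();                 // don't lose buffered text-mode output
        std::fflush(stdout);
        // C99/C11 7.21.5.4p3: a null filename asks freopen to change the mode of
        // the existing stream; which mode changes are permitted (if any) is
        // implementation-defined.
        if (!std::freopen(nullptr, "wb", stdout))
            return 1;                      // the mode change was refused
        std::cout.write("\x01\n\x02", 3);  // no \n -> \r\n translation if it worked
    }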
Paavo Helde <myfirstname@osa.pri.ee>: Sep 26 12:15AM +0300 On 25.09.2019 22:13, Jorgen Grahn wrote: > Is it, really? The language is aware of the text/binary distinction, > so I bet it specifies which one std::cout is, and whether you can > portably do anything about it. By default all streams are text; you need to add the special ios_base::binary flag when opening them to get binary. The question is whether there is a standard way to reopen std::cout in binary mode, and my googling attempts seem to say "no". I suspect this might not even be technically possible on some older OS-es with rigid file formats. > (Personally I'm happy to be Unix-specific in my code, so I haven't had > reason to look into this in detail. In my context, text and binary > are the same thing.) This sounds like a confession that you have written a fair amount of non-portable code yourself over the years. |
Paavo Helde <myfirstname@osa.pri.ee>: Sep 26 12:26AM +0300 On 25.09.2019 23:09, James Kuyper wrote: > file currently associated with the stream had been used. It is > implementation-defined which changes of mode are permitted (if any), and > under what circumstances." Alas, this does not work under MSVC. If passed path=nullptr, freopen() fails with error "Bad file descriptor". |
Keith Thompson <kst-u@mib.org>: Sep 25 03:32PM -0700 >> under what circumstances." > Alas, this does not work under MSVC. If passed path=nullptr, freopen() > fails with error "Bad file descriptor". That's not surprising. That paragraph wasn't added until C99. In C90, calling freopen() with a null pointer for the filename had undefined behavior. On the other hand, C++11 refers to the C99 standard, so freopen() should behave as specified by C99. But that still doesn't require all mode changes to be supported. -- Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst> Will write code for food. void Void(void) { Void(); } /* The recursive call of the void */ |
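For completeness: the Windows-specific route that does work under MSVC is the Microsoft CRT's _setmode on the underlying file descriptor. This is a non-standard extension and only a sketch of the usual approach, not necessarily the exact snippet quoted earlier in the thread.

    #include <fcntl.h>   // _O_BINARY
    #include <io.h>      // _setmode, _fileno
    #include <cstdio>
    #include <iostream>

    int main() {
        std::cout.flush();
        // Microsoft CRT extension: switch stdout (and hence std::cout) to binary
        // mode so that '\n' is no longer translated to "\r\n".
        _setmode(_fileno(stdout), _O_BINARY);
        std::cout << "\n";   // now writes a single LF byte
    }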
Lynn McGuire <lynnmcguire5@gmail.com>: Sep 25 12:41PM -0500 "GotW-ish: The 'clonable' pattern" by Herb Sutter https://herbsutter.com/2019/09/24/gotw-ish-the-clonable-pattern/ "Yesterday, I received this question from a distinguished C++ expert who served on the ISO C++ committee for many years. The email poses a decades-old question that still has the classic roll-your-own answer in C++ Core Guidelines #C.130, and basically asks whether we've made significant progress toward automating this pattern in modern C++ compared to what we had back in the 1990s and 2000s." "Before I present my own answer, I thought I would share just the question and give all of you readers the opportunity to propose your own candidate answers first — kind of retro GotW-style, except that I didn't write the question myself. Here is the email, unedited except to fix one typo…" Lynn |
red floyd <no.spam@its.invalid>: Sep 25 03:17PM -0700 On 9/25/19 10:41 AM, Lynn McGuire wrote: > write the question myself. Here is the email, unedited except to fix one > typo…" > Lynn Would CRTP work here? I seem to recall it being used to inject member functions... |
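As a sketch of that idea (not Herb's own answer; the class names are invented), a CRTP mixin can inject clone() once:

    #include <memory>

    class Shape {
    public:
        virtual ~Shape() = default;
        virtual std::unique_ptr<Shape> clone() const = 0;
    };

    // CRTP mixin: every class that plugs itself in gets clone() written once, here.
    template <class Derived, class Base = Shape>
    class Clonable : public Base {
    public:
        std::unique_ptr<Base> clone() const override {
            return std::make_unique<Derived>(static_cast<const Derived&>(*this));
        }
    };

    class Circle : public Clonable<Circle> {
    public:
        explicit Circle(double r) : radius_(r) {}
    private:
        double radius_;
    };

    // Usage: std::unique_ptr<Shape> copy = someCircle.clone();

One known limitation is that it only goes one level deep: a class later derived from Circle would clone as a Circle unless it repeats the mixin, which is part of why the question is still interesting.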
Keith Thompson <kst-u@mib.org>: Sep 25 01:05PM -0700 >> Thanks! I found it online. > It's funny how casually people just admit to their illegal piracy of > intellectual property. That book is available from online retailers, in paper and ebook forms. I *hope* that's what Queequeg was referring to. -- Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst> Will write code for food. void Void(void) { Void(); } /* The recursive call of the void */ |
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 25 10:11AM -0400 On 9/25/19 10:02 AM, James Kuyper wrote: ... > kuyper::std::func(); > return 0; > } I made a mistake while creating that code, and after correcting that mistake I forgot that it invalidated some of my earlier test results. func() doesn't need to be defined inside ::kuyper::std; it's sufficient if it's declared inside ::kuyper. |
Frederick Gotham <cauldwell.thomas@gmail.com>: Sep 25 07:17AM -0700 On Wednesday, September 25, 2019 at 3:03:00 PM UTC+1, James Kuyper wrote: > kuyper::std::func(); > return 0; > } It just occurred to me that it might make sense to deliberately leave out the two colons before 'std' so that you can at a later date pull off a little hack like this. |
Stuart Redmann <DerTopper@web.de>: Sep 25 07:51PM +0200 > ::std::cout << sizeof(a) << " " << sizeof(b) << ::std::endl; > SomeFunc(); > } The majority of the readers of this group consider the additional :: to be unnecessary noise, since every coding guideline should forbid defining your own namespace std (inside one of your own namespaces; adding stuff to the global namespace std is AFAIK forbidden by the standard). However, a nested namespace std could be the result of auto-generated code, for example from some ORM framework. In that case I'd assume that the auto-generated code always uses fully qualified identifiers if there could be the slightest chance of clashes. For example, if there is a persistence module "std" containing an entity called "sin", it could clash with ::std::sin. So if the auto-generated code needed to call ::std::sin, it should do so by using a full qualification. Hand-written code should never need the additional qualification. Regards, Stuart |
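A tiny sketch of the clash Stuart describes (the namespace and member names here are invented for illustration):

    #include <cmath>

    namespace orm {                    // imagine this is auto-generated persistence code
        namespace std {                // a generated "std" persistence module
            double sin = 0.0;          // an entity that happens to be named "sin"
        }

        double wave(double x) {
            // Here the unqualified name std finds orm::std, so std::sin would be
            // the double above (not callable); the generated code must write:
            return ::std::sin(x);
        }
    }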
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 25 03:37PM -0400 On 9/25/19 1:51 PM, Stuart Redmann wrote: ... > ... adding stuff to the > global namespace std is AFAIK forbidden by the standard). The relevant rule is a little more complicated than a simple prohibition: "The behavior of a C ++ program is undefined if it adds declarations or definitions to namespace std or to a namespace within namespace std unless otherwise specified. A program may add a template specialization for any standard library template to namespace std only if the declaration depends on a user-defined type and the specialization meets the standard library requirements for the original template and is not explicitly prohibited." (20.5.4.2.1p1). |
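The "unless otherwise specified" escape hatch in that paragraph is what makes the familiar std::hash specialization legal; a minimal sketch, assuming an invented user-defined type Employee:

    #include <cstddef>
    #include <functional>
    #include <string>

    struct Employee {
        std::string name;
        int id;
    };

    namespace std {
        // Allowed: a specialization of a standard library template that depends
        // on a user-defined type and meets the primary template's requirements.
        template <>
        struct hash<Employee> {
            size_t operator()(const Employee& e) const noexcept {
                return hash<string>{}(e.name) ^ (hash<int>{}(e.id) << 1);
            }
        };
    }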
Juha Nieminen <nospam@thanks.invalid>: Sep 25 05:23PM >> before the court case Juha has referenced. > Yes, but concrete implementations of algorithms are still > applicable for patents in the US. So anybody can make their own implementation of the algorithm and it will be just fine? Well, good luck on your attempt at becoming a patent troll. Maybe you'll get a couple of bucks from some random company. |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 25 08:19PM +0200 > So anybody can make their own implementation of the > algorithm and it will be just fine? "Implementation" doesn't include concrete languages. Read the article you quoted. |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |