- Can I pass a shared_ptr to a thread as a temporary? - 11 Updates
- basic oop question - 3 Updates
- About the full paper on the Swarm chip.. - 1 Update
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 12 12:17AM On Tue, 11 Feb 2020 14:23:54 -0800 > This has a patent: > https://patents.google.com/patent/US5295262 > expired but fee related. Humm... It is simpler than that. The reason your code is defective is that the variable g_bar in shared namespace scope is modified by single_thread() without synchronization when it may also be read concurrently by multiple_threads(). std::shared_ptr is designed to be as thread safe as a raw pointer (or any other scalar): in other words its reference count is thread safe but if you modify a shared_ptr instance contemporaneously with another thread accessing or modifying that same instance then you have undefined behaviour, unless you happen to use the atomic access provided for in §20.8.2.6 of C++14. That is normal and as it should be. It would be wrong, and horribly inefficient in most uses, to take any other approach. That is not to say that intrusive pointers are not a good idea. They can make controlling object lifetime in multi-threaded code more obvious and thus less error prone. In particular, objects held by intrusive pointer can more easily hold a reference to themselves when they need to guarantee their own existence when accessing their own members in a logically atomic section of code, and so save additional locking. |
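A minimal sketch of the guarantee described above: each thread receives its own shared_ptr instance, so only the shared control block (the reference count) is touched concurrently, which shared_ptr does make thread safe. The struct and function names here are invented for illustration:

```cpp
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

struct foo { int value = 42; };

// Each thread works on its own copy of the shared_ptr; only the
// control block (the reference count) is shared, and its increments
// and decrements are atomic.
int hammer_ref_count() {
    auto p = std::make_shared<foo>();
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back([q = p] {           // q is this thread's own instance
            for (int j = 0; j < 10000; ++j) {
                std::shared_ptr<foo> local = q;  // copy: atomic count increment
                assert(local->value == 42);
            }                                    // destroy: atomic decrement
        });
    }
    for (auto& t : threads) t.join();
    return static_cast<int>(p.use_count());      // all copies gone again
}
```

Modifying the *same* shared_ptr instance from one thread while another reads it, by contrast, is the undefined behaviour the post describes.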
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 04:45PM -0800 On 2/11/2020 4:17 PM, Chris Vine wrote: > contemporaneously with another thread accessing or modifying that > same instance then you have undefined behaviour, unless you happen to > use the atomic access provided for in §20.8.2.6 of C++14. Does atomic<shared_ptr<foo>> work for this wrt strong thread safety? There are different means to gain a true atomic smart pointer. Accomplishing this in a lock-free manner can be tricky. DWCAS makes it much easier. :^) |
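For what "strong thread safety" means in practice, here is a hedged sketch: many threads may read *and* write the same pointer variable concurrently with well-defined results. A mutex-based wrapper gives those semantics portably; C++20's std::atomic<std::shared_ptr<T>> offers the same interface directly, possibly (but not necessarily) lock-free. AtomicSharedPtr, foo, and the loop bounds are invented for the example:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <thread>

struct foo { int value = 7; };

// "Strong" thread safety: concurrent loads and stores of the SAME
// pointer variable are well-defined. This wrapper buys that with a
// mutex; C++20 std::atomic<std::shared_ptr<T>> standardizes it.
template <class T>
class AtomicSharedPtr {
    mutable std::mutex m;
    std::shared_ptr<T> p;
public:
    std::shared_ptr<T> load() const {
        std::lock_guard<std::mutex> g(m);
        return p;                        // copy taken under the lock
    }
    void store(std::shared_ptr<T> q) {
        std::lock_guard<std::mutex> g(m);
        p = std::move(q);
    }
};

AtomicSharedPtr<foo> g_foo;

int churn() {
    std::thread writer([] {
        for (int i = 0; i < 1000; ++i)
            g_foo.store(std::make_shared<foo>());  // concurrent writes: fine
    });
    std::thread reader([] {
        for (int i = 0; i < 1000; ++i) {
            std::shared_ptr<foo> local = g_foo.load();  // concurrent reads: fine
            if (local) assert(local->value == 7);
        }
    });
    writer.join();
    reader.join();
    return g_foo.load()->value;
}
```

"Basic" thread safety (plain shared_ptr) would make this reader/writer pair undefined behaviour.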
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 12 01:10AM On Tue, 11 Feb 2020 16:45:13 -0800 > There are different means to gain a true atomic smart pointer. > Accomplishing this in a lock-free manner can be tricky. DWCAS makes it > much easier. :^) I don't know what you mean by "strong thread safety" that goes beyond "thread safe" for this case, but yes you could do it with the standard's provided atomic operations for shared_ptr without incurring undefined behaviour ("Concurrent access to a shared_ptr object from multiple threads does not introduce a data race if the access is done exclusively via the [atomic] functions in this section ..."). All you have got in your case is atomic loads and an atomic store with at least acquire/release memory ordering. This may or may not be lock free, however. |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 05:29PM -0800 On 2/11/2020 5:10 PM, Chris Vine wrote: > All you have got in your case is atomic loads and an atomic store > with at least acquire/release memory ordering. This may or may not be > lock free, however. Well, the multiple_threads function needs to be able to atomically increment a reference _and_ atomically load the pointer at once. Basic thread safety is the same as a raw pointer, or int. Strong is full blown concurrent read write access no matter what. Check this out: http://www.1024cores.net/home/lock-free-algorithms/object-life-time-management/differential-reference-counting This requires strong thread safety because these threads do not own prior references to the global g_bar.

static shared_ptr<bar> g_bar; // nullptr

void multiple_threads()
{
    for (;;)
    {
        // this does not work wrt the
        // last time I looked at shared_ptr
        shared_ptr<bar> local = g_bar;

        if (! local) continue;

        local->foobar();
    }
} |
Cholo Lennon <chololennon@hotmail.com>: Feb 11 11:49PM -0300 On 2/11/20 4:07 AM, Bonita Montero wrote: >> No need to avoid std::shared_ptr for this reason, std::make_shared() >> and std::allocate_shared() effectively do exactly this. > That's a matter of taste. Wow, you can't avoid trolling in every single answer that you receive. For sure you need a mental health doctor. You used to be a civilized participant, but nowadays you have a serious problem. This newsgroup is a disaster: religious nuts, trolls like you, and only a few people interested in sharing their knowledge with respect. And when someone new appears, he or she is attacked with things like "this is off-topic" (because he/she dared to comment on something related to Windows/QT or whatever... a dead group with more or less 10 regular participants has the luxury of expelling newcomers, totally ridiculous) -- Cholo Lennon Bs.As. ARG |
Ian Collins <ian-news@hotmail.com>: Feb 12 05:12PM +1300 On 12/02/2020 15:49, Cholo Lennon wrote: > Wow, you can't avoid trolling in every single answer that you receive. > For sure you need a mental health doctor. You used to be a civilized > participant, but nowadays you have serious problem. It has always been rude. -- Ian. |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Feb 11 11:21PM -0800 On 2/11/2020 5:29 PM, Chris M. Thomasson wrote: > thread safety is the same as a raw pointer, or int. Strong is full blown > concurrent read write access no matter what. Check this out: > http://www.1024cores.net/home/lock-free-algorithms/object-life-time-management/differential-reference-counting Working with him way back on comp.programming.threads. |
Bonita Montero <Bonita.Montero@gmail.com>: Feb 12 08:28AM +0100 >> For sure you need a mental health doctor. You used to be a civilized >> participant, but nowadays you have serious problem. > It has always been rude. What's rude about writing that using make_shared, make_unique etc. is a matter of taste? |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 12 12:32PM On Wed, 2020-02-12, Bonita Montero wrote: >> It has always been rude. > What's rude about writing, that using make_shared, make_unique etc. > is a matter of taste ? That's not what you wrote. And I note that you now snipped the context. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
Bonita Montero <Bonita.Montero@gmail.com>: Feb 12 02:06PM +0100 |
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Feb 12 03:01PM On Tue, 11 Feb 2020 17:29:54 -0800 > local->foobar(); > } > } The shared_ptr atomic functions provided by C++11 onwards must support both atomic increment of the reference (which is anyway provided by std::shared_ptr irrespective of the atomic functions) and atomic load and store of the pointer. Otherwise they could not honour the requirement not to create a data race. Whether they manage to do it without a mutex (ie lock-free) is another matter. You keep on writing example code (as above, again) which does not use the shared_ptr atomic functions in a case where they are necessary: of course in that case your code does not work. You need to use the right tool for the job - try reading §20.8.2.6 of C++14. |
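A sketch of the earlier multiple_threads() example rewritten with the C++11/14 shared_ptr atomic free functions that §20.8.2.6 describes (they live in <memory> and were deprecated in C++20 in favour of std::atomic<std::shared_ptr>). bar's body and the loop bounds are invented for the example:

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <thread>

struct bar {
    std::atomic<int> hits{0};
    void foobar() { hits.fetch_add(1, std::memory_order_relaxed); }  // placeholder work
};

std::shared_ptr<bar> g_bar;  // touched only through the atomic free functions

void single_thread() {
    // Atomic store of the pointer (sequentially consistent by default),
    // publishing the new object to the readers.
    std::atomic_store(&g_bar, std::make_shared<bar>());
}

void multiple_threads() {
    for (int i = 0; i < 1000; ++i) {
        // Atomic load plus reference-count increment happen as one
        // logically atomic step, so the object cannot die under our feet
        // even though this thread owned no prior reference to it.
        std::shared_ptr<bar> local = std::atomic_load(&g_bar);
        if (!local) continue;
        local->foobar();
    }
}

int run() {
    std::thread w(single_thread);
    std::thread r1(multiple_threads), r2(multiple_threads);
    w.join(); r1.join(); r2.join();
    return std::atomic_load(&g_bar) ? 1 : 0;
}
```

Whether an implementation does this lock-free is, as the post says, another matter; std::atomic_is_lock_free(&g_bar) will tell you.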
Richard Damon <Richard@Damon-Family.org>: Feb 11 09:53PM -0500 On 2/4/20 11:47 AM, fir wrote: > and i want to use onlyt references inside to hold those conections, not pointers, > how ro do that? > tnx Simple:

extern Window window;

Food food(window);
Snake snake(window, food);
Window window(snake, food);

Food and Snake must not actually use the window object in their constructors, but only use its address, so they can initialize a reference member but not much more. |
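A hedged sketch of how those declarations can fit together in one translation unit (the class bodies are invented for illustration; the point is that each constructor only binds a reference, which is legal even before the referred-to object is constructed, as long as the object is not actually used):

```cpp
#include <cassert>

class Window;   // forward declaration: enough to form a Window&
class Food;
class Snake;

class Food {
public:
    explicit Food(Window& w) : win(w) {}   // binds the reference only;
    Window& win;                           // must not call into w here
};

class Snake {
public:
    Snake(Window& w, Food& f) : win(w), food(f) {}
    Window& win;
    Food&   food;
};

class Window {
public:
    Window(Snake& s, Food& f) : snake(s), food(f) {}
    Snake& snake;
    Food&  food;
};

extern Window window;        // declared first so a reference can bind to it
Food   food(window);         // window is not constructed yet: address only
Snake  snake(window, food);
Window window(snake, food);  // the definition; now the cycle is complete
```

After construction, all three objects refer to each other through references, with no pointers in the interfaces.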
fir <profesor.fir@gmail.com>: Feb 12 01:38AM -0800 On Wednesday, February 12, 2020 at 03:53:44 UTC+1, Richard Damon wrote: > Food and Snake must not actually use the window object in their > constructors, but only use its address, so they can initialize a > reference member but not much more hmm, interesting.. i will try it next time, as this came too late and now i'm doing something quite different, but good to know if it really works (if so, lots of answers on stack overflow (a trashbin itself) seem to be trash), tnx |
fir <profesor.fir@gmail.com>: Feb 12 04:29AM -0800 On Wednesday, February 12, 2020 at 10:39:06 UTC+1, fir wrote: > > constructors, but only use its address, so they can initialize a > > reference member but not much more > hmm, interesting.. i will try it next time, as this came too late and now i'm doing something quite different, but good to know if it really works, tnx i must say again stack overflow is a place for extreme idiots - and probably anybody should know that. this site is as unfair as the mingw download page - it is like it was made to aggravate people. close to this is also comp.lang.asm.x86, where there must be some hidden idiot, as i wrote an x86 assembler myself and can't post on normal assembly topics because some hidden moron deletes them for no reason. free usenet is still so much better, if only the extreme trolls would fortunately move out |
aminer68@gmail.com: Feb 11 04:47PM -0800 Hello.. About the full paper on the Swarm chip.. I have just read the following full PhD paper from MIT about the new Swarm chip: https://people.csail.mit.edu/sanchez/papers/2015.swarm.micro.pdf

I think there are disadvantages with this chip: first, it is using the same mechanisms as Transactional memory, but those mechanisms of Transactional memory are not so efficient (read below to know more), and we already have Intel hardware Transactional memory; also, i don't think it is globally faster on parallelism than current hardware and software, because look at the benchmarks in the paper and you will understand more.

And about Transactional memory and more, read my following thoughts:

About Hardware Transactional Memory and my invention that is my powerful Fast Mutex:

"As someone who has used TSX to optimize synchronization primitives, you can expect to see a ~15-20% performance increase, if (big if) your program is heavy on disjoint data access, i.e. a lock is needed for correctness, but conflicts are rare in practice. If you have a lot of threads frequently writing the same cache lines, you are probably going to see worse performance with TSX as opposed to traditional locking. It helps to think about TSX as transparently performing optimistic concurrency control, which is actually pretty much how it is implemented under the hood." Read more here: https://news.ycombinator.com/item?id=8169697

So as you are noticing, HTM (hardware transactional memory) and TM can not replace locks when doing IO and for highly contended critical sections; this is why i have invented my following powerful Fast Mutex:

More about research and software development.. I have just looked at the following new video: Why is coding so hard...
https://www.youtube.com/watch?v=TAAXwrgd1U8

I am understanding this video, but i have to explain my work: I am not like this techlead in the video above, because i am also an "inventor" that has invented many scalable algorithms and their implementations; i am also inventing effective abstractions. i give you an example: Read the following from the senior research scientist called Dave Dice:

Preemption tolerant MCS locks https://blogs.oracle.com/dave/preemption-tolerant-mcs-locks

As you are noticing, he is trying to invent a new lock that is preemption tolerant, but his lock lacks some important characteristics; this is why i have just invented a new Fast Mutex that is adaptive and that is much much better, and i think mine is the "best", and i think you will not find it anywhere. my new Fast Mutex has the following characteristics:

1- Starvation-free
2- Good fairness
3- It efficiently keeps the cache coherence traffic very low
4- Very good fast path performance (it has the same performance as the scalable MCS lock when there is contention.)
5- And it has a decent preemption tolerance.

this is how i am an "inventor", and i have also invented other scalable algorithms such as a scalable reference counting with efficient support for weak references, and i have invented a fully scalable Threadpool, and i have also invented a fully scalable FIFO queue, and i have also invented other scalable algorithms and their implementations, and i think i will sell some of them to Microsoft or to Google or Embarcadero or such software companies.

And about composability of lock-based systems now: Design your systems to be composable.
Among the more galling claims of the detractors of lock-based systems is the notion that they are somehow uncomposable: "Locks and condition variables do not support modular programming," reads one typically brazen claim, "building large programs by gluing together smaller programs[:] locks make this impossible."9 The claim, of course, is incorrect. For evidence one need only point at the composition of lock-based systems such as databases and operating systems into larger systems that remain entirely unaware of lower-level locking. There are two ways to make lock-based systems completely composable, and each has its own place. First (and most obviously), one can make locking entirely internal to the subsystem. For example, in concurrent operating systems, control never returns to user level with in-kernel locks held; the locks used to implement the system itself are entirely behind the system call interface that constitutes the interface to the system. More generally, this model can work whenever a crisp interface exists between software components: as long as control flow is never returned to the caller with locks held, the subsystem will remain composable. Second (and perhaps counterintuitively), one can achieve concurrency and composability by having no locks whatsoever. In this case, there must be no global subsystem state—subsystem state must be captured in per-instance state, and it must be up to consumers of the subsystem to assure that they do not access their instance in parallel. By leaving locking up to the client of the subsystem, the subsystem itself can be used concurrently by different subsystems and in different contexts. A concrete example of this is the AVL tree implementation used extensively in the Solaris kernel. 
As with any balanced binary tree, the implementation is sufficiently complex to merit componentization, but by not having any global state, the implementation may be used concurrently by disjoint subsystems—the only constraint is that manipulation of a single AVL tree instance must be serialized. Read more here: https://queue.acm.org/detail.cfm?id=1454462

About deadlocks and race conditions in parallel programming..

I have just read the following paper: Deadlock Avoidance in Parallel Programs with Futures https://cogumbreiro.github.io/assets/cogumbreiro-gorn.pdf

So as you are noticing, you can have deadlocks in parallel programming by introducing circular dependencies among tasks waiting on future values, or by introducing circular dependencies among tasks waiting on windows event objects or other such synchronisation objects, so you have to have a general tool that detects deadlocks. But notice that the tool called Valgrind for C++ can detect only deadlocks arising from Pthread locks; read the following to notice it: http://valgrind.org/docs/manual/hg-manual.html#hg-manual.lock-orders

So this is not good, so you have to have a general way that permits detecting deadlocks on locks and mutexes, deadlocks from circular dependencies among tasks waiting on future values, deadlocks from circular dependencies among tasks waiting on windows event objects or other such synchronisation objects, etc. this is why i have talked before about this general way that detects deadlocks, and here it is; read my following thoughts:

Yet more precision about the invariants of a system..
I was just thinking about Petri nets, and i have studied more Petri nets. they are useful for parallel programming, and what i have noticed by studying them is that there are two methods to prove that there is no deadlock in the system: there is the structural analysis with place invariants that you have to mathematically find, or you can use the reachability tree. but we have to notice that the structural analysis of Petri nets teaches you more, because it permits you to prove that there is no deadlock in the system, and the place invariants are mathematically calculated from the following system of the given Petri net:

Transpose(vector) * Incidence matrix = 0

So you apply Gaussian Elimination or the Farkas algorithm to the incidence matrix to find the place invariants, and as you will notice, those place invariant calculations of Petri nets look like Markov chains in mathematics, with their vector of probabilities and their transition matrix of probabilities, and using Markov chains you can mathematically calculate where the vector of probabilities will "stabilize", and that gives you a very important piece of information, and you can do it by solving the following mathematical system:

Unknown vector1 of probabilities * transition matrix of probabilities = Unknown vector1 of probabilities.

Solving this system of equations is very important in economics and other fields, and you can notice that it is like calculating the invariants, because the invariant in the system above is the vector1 of probabilities that is obtained, and this invariant, like the invariants of the structural analysis of Petri nets, gives you a very important piece of information about the system, like where market shares will stabilize, which is calculated this way in economics.

About reachability analysis of a Petri net..
As you have noticed in my Petri nets tutorial example (read below), i am analysing the liveness of the Petri net, because there is a rule that says: If a Petri net is live, that means that it is deadlock-free. Reachability analysis of a Petri net with Tina gives you the necessary information about boundedness and liveness of the Petri net. So if it tells you that the Petri net is "live", then there is no deadlock in it.

Tina and Partial order reduction techniques..

With the advancement of computer technology, highly concurrent systems are being developed. The verification of such systems is a challenging task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering. Tina is using "partial-order" reduction techniques aimed at preventing combinatorial explosion; read more here to notice it: http://projects.laas.fr/tina/papers/qest06.pdf

About modelizations and detection of race conditions and deadlocks in parallel programming..
I have just taken a further look at the following project in Delphi called DelphiConcurrent by an engineer called Moualek Adlene from France: https://github.com/moualek-adlene/DelphiConcurrent/blob/master/DelphiConcurrent.pas

And i have just taken a look at the following webpage of Dr Dobb's journal: Detecting Deadlocks in C++ Using a Locks Monitor https://www.drdobbs.com/detecting-deadlocks-in-c-using-a-locks-m/184416644

And i think that both of them are using techniques that are not as good as analysing deadlocks with Petri Nets in parallel applications; for example, the above two methods are only addressing locks or mutexes or reader-writer locks, but they are not addressing semaphores or event objects and other such synchronization objects, so they are not good. this is why i have written a tutorial that shows my methodology of analysing and detecting deadlocks in parallel applications with Petri Nets. my methodology is more sophisticated because it is a generalization and it models with Petri Nets the broader range of synchronization objects, and in my tutorial i will soon add other synchronization objects. you have to look at it, here it is: https://sites.google.com/site/scalable68/how-to-analyse-parallel-applications-with-petri-nets

You have to get the powerful Tina software to run my Petri Net examples inside my tutorial; here is the powerful Tina software: http://projects.laas.fr/tina/

Also, to detect race conditions in parallel programming you have to take a look at the following new tutorial that uses the powerful Spin tool: https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html

This is how you will get much more professional at detecting deadlocks and race conditions in parallel programming.

About Java and Delphi and Freepascal..
I have just read the following webpage: Java is not a safe language https://lemire.me/blog/2019/03/28/java-is-not-a-safe-language/

But as you have noticed, the webpage says:

- Java does not trap overflows

But Delphi and Freepascal do trap overflows.

And the webpage says:

- Java lacks null safety

But Delphi has null safety, since i have just posted about it by saying the following:

Here is MyNullable library for Delphi and FreePascal that brings null safety..

Java lacks null safety. When a function receives an object, this object might be null. That is, if you see 'String s' in your code, you often have no way of knowing whether 's' contains an actual String unless you check at runtime. Can you guess whether programmers always check? They do not, of course. In practice, mission-critical software does crash without warning due to null values. We have two decades of examples. In Swift or Kotlin, you have safe calls or optionals as part of the language.

Here is MyNullable library for Delphi and FreePascal that brings null safety; you can read the html file inside the zip to know how it works, and you can download it from my website here: https://sites.google.com/site/scalable68/null-safety-library-for-delphi-and-freepascal

And the webpage says:

- Java allows data races

But for Delphi and Freepascal i have just written about how to prevent data races, in my thoughts above about the invariants of a system and Petri nets.
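The place-invariant condition described above, Transpose(vector) * Incidence matrix = 0, can be checked mechanically once a candidate vector has been found by Gaussian Elimination or the Farkas algorithm. A minimal sketch, where the 3-place, 2-transition net and its incidence matrix are hypothetical illustrations, not taken from the tutorial:

```cpp
#include <array>
#include <cassert>

// Hypothetical Petri net with 3 places and 2 transitions.
// Rows of the incidence matrix C are places, columns are transitions.
constexpr int PLACES = 3, TRANSITIONS = 2;
using Incidence = std::array<std::array<int, TRANSITIONS>, PLACES>;

// y is a place invariant iff Transpose(y) * C = 0: the weighted token
// count (y dot marking) is then preserved by every transition firing.
bool is_place_invariant(const std::array<int, PLACES>& y, const Incidence& C) {
    for (int t = 0; t < TRANSITIONS; ++t) {
        int sum = 0;
        for (int p = 0; p < PLACES; ++p)
            sum += y[p] * C[p][t];
        if (sum != 0) return false;    // firing t would change y . marking
    }
    return true;
}

// A mutex-like net: t0 moves a token from "idle" (consuming the lock
// token) into "critical"; t1 moves it back and releases the lock.
const Incidence C = {{{-1, 1},    // place 0: idle
                      { 1,-1},    // place 1: critical
                      {-1, 1}}};  // place 2: lock free
```

Here the vector (1, 1, 0) is an invariant (a process is always either idle or critical), while (1, 0, 0) is not.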
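The Dr Dobb's locks-monitor idea mentioned above essentially enforces a lock acquisition order at runtime. A minimal hedged sketch of that technique (HierarchicalMutex and the level scheme are invented for illustration, not the article's code): each mutex carries a level, a thread may only acquire strictly decreasing levels, and any acquisition that could participate in a deadlock cycle is refused immediately.

```cpp
#include <cassert>
#include <mutex>

// Highest level a new lock may have on this thread; starts effectively
// unbounded and shrinks as locks are taken.
thread_local int g_current_level = 1 << 30;

class HierarchicalMutex {
    std::mutex m;
    const int level;
    int saved_level = 0;
public:
    explicit HierarchicalMutex(int lvl) : level(lvl) {}
    // Returns false instead of locking when the ordering is violated.
    bool try_ordered_lock() {
        if (level >= g_current_level) return false;  // potential deadlock cycle
        m.lock();
        saved_level = g_current_level;
        g_current_level = level;
        return true;
    }
    void unlock() {
        g_current_level = saved_level;
        m.unlock();
    }
};

bool demo() {
    HierarchicalMutex high(100), low(10);
    if (!high.try_ordered_lock()) return false;        // 100 < 2^30: ok
    if (!low.try_ordered_lock())  return false;        // 10 < 100: ok
    bool violation_caught = !low.try_ordered_lock();   // 10 >= 10: refused
    low.unlock();
    high.unlock();
    return violation_caught;
}
```

As the post notes, this only covers mutexes; cycles through semaphores, events, or futures need a more general model such as the Petri net analysis described above.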
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |