- Provably unprovable eliminates incompleteness--- [ Cantor was wrong about cardinality ] - 5 Updates
- Available C++ Libraries FAQ - 1 Update
- Why allocator<T>::rebind<U> was removed with C++20` - 3 Updates
- [Nix] another doubt for the possible memory leak problem - 6 Updates
- Is this valid C++-code? - 2 Updates
- Project review - 2 Updates
- std::thread - DESPERATE NEED HELP!!! - 4 Updates
peteolcott <Here@Home>: Sep 14 01:35PM -0500 On 9/13/2019 9:55 AM, Mr Flibble wrote: >> than [0.0, 1.0] proves that infinitesimal numbers do exist. > Nonsense, there is always a number smaller than an "infinitesimal number" ergo there is no "smallest" number ergo "infinitesimal number" is a nonsense concept. > /Flibble This is merely naysaying without any actual rebuttal. If there is always a number smaller than an "infinitesimal number" then specify a number that is half the size of the difference in length of the above two sequences of points. -- Copyright 2019 Pete Olcott All rights reserved "Great spirits have always encountered violent opposition from mediocre minds." Albert Einstein |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Sep 14 10:52PM +0100 On 14/09/2019 19:35, peteolcott wrote: > If there is always a number smaller than an "infinitesimal number" > then specify a number that is half the size of the difference in length > of the above two sequences of points. There is always a number which is half the size of the previous number so my assertion stands: "infinitesimal number" is a nonsense concept. /Flibble -- "Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin "You won't burn in hell. But be nice anyway." – Ricky Gervais "I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais "Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Bryne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?" "I'd say, bone cancer in children? What's that about?" Fry replied. "How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil." "Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say." |
peteolcott <Here@Home>: Sep 14 05:18PM -0500 On 9/14/2019 4:52 PM, Mr Flibble wrote: >> of the above two sequences of points. > There is always a number which is half the size of the previous number so my assertion stands: "infinitesimal number" is a nonsense concept. > /Flibble Your reasoning goes like this: There has never been an X, therefore there never will be an X. -- Copyright 2019 Pete Olcott All rights reserved "Great spirits have always encountered violent opposition from mediocre minds." Albert Einstein |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Sep 14 11:30PM +0100 On 14/09/2019 23:18, peteolcott wrote: >> /Flibble > Your reasoning goes like this: > There has never been an X, therefore there never will be an X. Eh? I think you need to take your meds m8. /Flibble -- "Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin "You won't burn in hell. But be nice anyway." – Ricky Gervais "I see Atheists are fighting and killing each other again, over who doesn't believe in any God the most. Oh, no..wait.. that never happens." – Ricky Gervais "Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Bryne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?" "I'd say, bone cancer in children? What's that about?" Fry replied. "How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil." "Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say." |
peteolcott <Here@Home>: Sep 14 05:42PM -0500 On 9/14/2019 5:30 PM, Mr Flibble wrote: >> There has never been an X, therefore there never will be an X. > Eh? I think you need to take your meds m8. > /Flibble Resorting to ad hominem as you did is evidence that I am correct because people resort to ad hominem when they run out of reasoning. -- Copyright 2019 Pete Olcott All rights reserved "Great spirits have always encountered violent opposition from mediocre minds." Albert Einstein |
Nikki Locke <nikki@trumphurst.com>: Sep 14 10:23PM Available C++ Libraries FAQ URL: http://www.trumphurst.com/cpplibs/ This is a searchable list of libraries and utilities (both free and commercial) available to C++ programmers. If you know of a library which is not in the list, why not fill in the form at http://www.trumphurst.com/cpplibs/cppsub.php Maintainer: Nikki Locke - if you wish to contact me, please use the form on the website. |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 14 07:54PM +0200 Does anyone know why allocator<T>::rebind<U> was removed with C++20? I think it's rather convenient for initializing an allocator from a "base"-allocator, i.e. you could for example do this:

#include <memory>

using my_allocator = std::allocator<char>;
using derived_allocator = my_allocator::rebind<int>::other;

my_allocator ma;
derived_allocator da( ma );

This might not make much sense for std::allocator, but imagine you have a class which needs allocators for different types and you don't want to declare each allocator-type as a template-parameter. I need something like this for a special kind of NUMA-allocator where there's only a single allocator-type given through a template-parameter and all other allocators are derived from that in the above manner. But how can I do this with C++20? |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 14 08:49PM +0200 > using derived_allocator = my_allocator::rebind<int>::other; > my_allocator ma; > derived_allocator da( ma ); I've got it; now it works like this:

#include <memory>

using my_allocator = std::allocator<char>;
using derived_allocator = std::allocator_traits<my_allocator>::rebind_alloc<int>;

my_allocator ma;
derived_allocator da( ma );

This unfortunately means that I've to specialize std::allocator_traits for my NUMA-allocator. That's more work than simply implementing rebind for my NUMA-allocator-class. |
Bo Persson <bo@bo-persson.se>: Sep 14 10:49PM +0200 On 2019-09-14 at 20:49, Bonita Montero wrote: > This unfortunately means that I've to specialize std::allocator_traits > for my NUMA-allocator. That's more work than simply implementing rebind > for my NUMA-allocator-class. No, you don't really have to do all the extra work. std::allocator_traits is supposed to be smart enough to use the allocator's rebind member, if there is one. Or otherwise provide a default. http://eel.is/c++draft/allocator.traits.types#11 Bo Persson |
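For reference, a minimal sketch of what Bo describes: a user-defined allocator that keeps its own rebind member, which std::allocator_traits picks up automatically, so no specialization of std::allocator_traits is needed. The class name my_numa_allocator and its malloc-based storage below are made up for illustration and are not Bonita's actual NUMA allocator.

#include <cstdlib>
#include <memory>

// Hypothetical stand-in for the NUMA allocator discussed above.
template <class T>
struct my_numa_allocator
{
    using value_type = T;

    // std::allocator_traits<my_numa_allocator<T>>::rebind_alloc<U> uses this
    // member if it exists; otherwise it falls back to the default rule.
    template <class U>
    struct rebind { using other = my_numa_allocator<U>; };

    my_numa_allocator() = default;

    // Converting constructor used when one allocator is derived from another.
    template <class U>
    my_numa_allocator(const my_numa_allocator<U>&) noexcept { }

    T* allocate(std::size_t n)
    { return static_cast<T*>(std::malloc(n * sizeof(T))); }

    void deallocate(T* p, std::size_t) noexcept
    { std::free(p); }
};

using char_allocator = my_numa_allocator<char>;
using int_allocator = std::allocator_traits<char_allocator>::rebind_alloc<int>; // my_numa_allocator<int>

int main()
{
    char_allocator ca;
    int_allocator ia( ca ); // derive the int-allocator from the char-allocator, as in the original post
    int* p = ia.allocate(4);
    ia.deallocate(p, 4);
}

So the only change from the pre-C++20 usage is that the rebind is spelled through std::allocator_traits at the point of use; the allocator class itself needs no extra work.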
Keith Thompson <kst-u@mib.org>: Sep 13 05:17PM -0700 > Il 13/09/19 20:29, Keith Thompson ha scritto: [...] > (and auto variables), isnt' it ? Many years ago I used to > read about the "heap" for async. allocated memory, like the > one got by malloc ... That was an example. If you call a library function, any local variables in that function are allocated on the stack associated with the calling process. The heap, the region of memory managed by malloc and free or by new and delete, is handled the same way. It's all associated with the process, and a new process is created every time you run your program. (The standard doesn't necessarily refer to "stack" or "heap", but this is typical for most operating systems.) As far as memory allocation is concerned (not including space for executable code), any library functions act just like functions in your program. If you use static libraries, they're incorporated directly into your program, as if you had defined them yourself. Dynamic libraries by design work the same way, except that space for the code is allocated only once. > was sure it had necessarily to do with the memory manager of > the OS. In the end a memory leak in the process should > bubble up until it. Ah, I wasn't sure what "[Nix]" meant. > if using plain malloc and no particular privilege or other > tecniques (i.g. to become a daemon and remain loaded), to > produce a memory leak. A process can leak memory internally, keeping it allocated (and not allowing other processes to use it) as long as the process is running.

while (true) {
    malloc(1000); // don't do this
}

Operating systems typically place some configurable limit on how much memory a process can allocate -- but you can bypass that by running multiple processes. One unprivileged user typically *can* bog down a system, if not crash it. But when a process terminates, either because it finishes, or it crashes, or something kills it, all allocated memory (stack and heap) is released. That's the operating system's job. [...] -- Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst> Will write code for food. void Void(void) { Void(); } /* The recursive call of the void */ |
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 14 01:10PM +0200 Il 14/09/19 02:17, Keith Thompson ha scritto: > That was an example. If you call a library function, any local > variables in that function are allocated on the stack associated with > the calling process. okay, I just wanted to check the "detail" :) > and a new process is created every time you run your program. > (The standard doesn't necessarily refer to "stack" or "heap", but this > is typical for most operating systems.) reasonable: a user program cannot force the host OS to behave as it says; the contrary holds > As far as memory allocation is concerned (not including space for > executable code), any library functions act just like functions in your > program. If you use static libraries, they're incorporated directly > into your program, as if you had defined them yourself. interesting detail, I had totally missed this aspect. Yes, I would statically link, actually, so the question I posed initially vanishes per se >> the OS. In the end a memory leak in the process should >> bubble up until it. > Ah, I wasn't sure what "[Nix]" meant. sorry ... I seldom use that nickname, I've learnt it here on usenet > But when a process terminates, either because it finishes, or it > crashes, or something kills it, all allocated memory (stack and heap) > is released. That's the operating system's job. perfect TY -- 1) Resistere, resistere, resistere. 2) Se tutti pagano le tasse, le tasse le pagano tutti Soviet_Mario - (aka Gatto_Vizzato) |
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 14 02:04PM +0200 Il 13/09/19 19:55, Soviet_Mario ha scritto: >>> the same size that, always in principle, can be malloc-ed in >>> a contiguous block and not necessarily as a sparse jagged >>> array). about the former suppositions ... from the man pages I got the following usage example:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct dirent **namelist;
    int n;

    n = scandir(".", &namelist, NULL, alphasort);
    if (n == -1) {
        perror("scandir");
        exit(EXIT_FAILURE);
    }

    while (n--) {
        printf("%s\n", namelist[n]->d_name);
        free(namelist[n]);
    }
    free(namelist);

    exit(EXIT_SUCCESS);
}

now looking at the FREEing pattern: free is called for each dirent entry and later free is called again on the namelist pointer itself. I'm becoming convinced that internally scandir does not pre-allocate a monolithic array but actually a jagged array and a further array of pointers (to each dirent structure). apart from the example, no documentation was reported about HOW to release resources allocated by SCANDIR. Initially I had thought to perform just the last outermost free (namelist) on the whole array of pointers. But the question to avoid a possibly corrupted "half allocated" state arises again if SCANDIR operates allocating bit by bit :\ -- 1) Resistere, resistere, resistere. 2) Se tutti pagano le tasse, le tasse le pagano tutti Soviet_Mario - (aka Gatto_Vizzato) |
Paavo Helde <myfirstname@osa.pri.ee>: Sep 14 05:34PM +0300 On 14.09.2019 15:04, Soviet_Mario wrote: > pointers (to each dirent structure). > apart from the example, no documentation was reported about HOW to > release resources allocated by SCANDIR. The man page for scandir (not SCANDIR!) clearly says: "Entries [..] are stored in strings allocated via malloc(3) [...] and collected in array namelist which is allocated via malloc(3)." That's all the documentation you need about how to release the results: for each malloc there has to be a corresponding free(). As scandir() cannot release the results by itself before returning them to the caller, the caller will need to do this itself. > Initially I had thought to perform just the last outermost free > (namelist) on the whole array of pointers. Releasing an array of raw pointers does not release the memory they are pointing to, neither in C nor in C++. For starters, the free() function just takes a 'void*' argument to some memory block, so it would not have any idea that it is an array of pointers it is releasing, not to speak about if and when to do something special about these pointers. > But the question to avoid a possibly corrupted "half allocated" state > arises again if SCANDIR operates allocating bit by bit :\ That's none of your concern. Presumably, if the memory gets exhausted in the middle of operation, scandir() will release everything it has allocated so far, and return -1 to indicate a failure. Thankfully, in C++ one does not have to worry about such low-level issues. One just pushes strings one-by-one to a local std::vector object and if an exception like std::bad_alloc is thrown in the middle of the operation, the vector and its contents get released automatically. |
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 14 08:05PM +0200 Il 14/09/19 16:34, Paavo Helde ha scritto: >> about HOW to >> release resources allocated by SCANDIR. > The man page for scandir (not SCANDIR!) I used bold :) > "Entries [..] are stored in strings allocated via malloc(3) > [...] and collected in array namelist which is allocated via > malloc(3)." true, I missed that point :\ >> (namelist) on the whole array of pointers. > Releasing an array of raw pointers does not release the > memory they are pointing to, neither in C nor in C++. For I know that, but I wrongly thought it would have been unnecessary (I thought that a single big chunk of same-sized entries had been allocated, instead) > to some memory block, so it would not have any idea that it > is an array of pointers it is releasing, not to speak about > if and when to do something special about these pointers. yes I know that, I did not expect some smart recursion. I just thought of a monolithic array instead of a sparse one. > exhausted in the middle of operation, scandir() will release > everything it has allocated so far, and return -1 to > indicate a failure. presumably means surely ? > local std::vector object and if an exception like > std::bad_alloc is thrown in the middle of the operation, the > vector and its contents get released automatically. yes, for what I code manually I'm doing .push_back on std::vector<std::string> and catching exceptions. But I was not sure about the management. I'll rely on the fact that SCANDIR does complete cleanup, should it run short of RAM halfway. -- 1) Resistere, resistere, resistere. 2) Se tutti pagano le tasse, le tasse le pagano tutti Soviet_Mario - (aka Gatto_Vizzato) |
Paavo Helde <myfirstname@osa.pri.ee>: Sep 14 11:15PM +0300 On 14.09.2019 21:05, Soviet_Mario wrote: > I know that, but I wrongly thought it would have been unnecessary (I > thought that a single big chunk of same-sized entries had been > allocated, instead) This would mean copying over and then releasing all the individual strings allocated by malloc(), in which case the fact that they were initially allocated by malloc() would not be worth mentioning in the documentation; secondly, it would reduce performance, which would be a no-no in such a low-level function. >> in the middle of operation, scandir() will release everything it >> has allocated so far, and return -1 to indicate a failure. > presumably means surely ? Surely, modulo the unlikely bugs in the implementation. |
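For completeness, a minimal sketch of the std::vector<std::string> approach Paavo describes, written against the POSIX scandir()/alphasort interface discussed above. The helper name list_dir is made up, and error handling is kept deliberately simple.

#include <dirent.h>
#include <cstdlib>
#include <stdexcept>
#include <string>
#include <vector>

// Copy the entry names into a vector of std::string, then free every
// malloc()'ed block returned by scandir() exactly once.
std::vector<std::string> list_dir(const char* path)
{
    dirent** namelist = nullptr;
    int n = scandir(path, &namelist, nullptr, alphasort);
    if (n == -1)
        throw std::runtime_error("scandir failed");

    std::vector<std::string> names;
    try {
        names.reserve(static_cast<std::size_t>(n));
        for (int i = 0; i < n; ++i)
            names.emplace_back(namelist[i]->d_name);
    } catch (...) {
        // If std::bad_alloc is thrown halfway through, the vector releases
        // its own strings automatically; only the scandir() allocations must
        // be freed here before rethrowing.
        for (int i = 0; i < n; ++i)
            std::free(namelist[i]);
        std::free(namelist);
        throw;
    }

    for (int i = 0; i < n; ++i)
        std::free(namelist[i]);
    std::free(namelist);
    return names;
}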
David Brown <david.brown@hesbynett.no>: Sep 14 01:36PM +0200 On 13/09/2019 15:47, Bonita Montero wrote: >> functionality to "(PI) -1". >> And like the C-style cast, it has implementation-dependent behaviour. > Sorry, I had tomatoes on my eyes when I posted thi. That is a nice way to put it. Is that a direct translation of an idiom in your native tongue? > The above came into my mind shortly after I posted this. We all do that sometimes. |
Bonita Montero <Bonita.Montero@gmail.com>: Sep 14 07:05PM +0200 >> Sorry, I had tomatoes on my eyes when I posted thi. > That is a nice way to put it. > Is that a direct translation of an idiom in your native tongue? Yes, that's a saying in my language for something you haven't seen but which should obviously have been seen. I already suspected that it isn't directly translatable, but I also thought the meaning could be guessed easily. |
woodbrian77@gmail.com: Sep 13 09:21PM -0700 On Friday, September 13, 2019 at 1:29:07 AM UTC-5, Öö Tiib wrote: > we "use". It automates tedious tasks in development process. > On ideal case fully. Set up and forget. So there are no much > opportunities even to show ads to users of good CI. They can have ads on their documentation. Also if your build fails and you click on their page to find out more info.... Ebenezer ads are better than other ads. I put ads at the bottom of the page rather than the top or side. That's less intrusive. https://www.reddit.com/r/cpp_questions/comments/d3bcgl/what_continuous_integration_to_use/ |
David Brown <david.brown@hesbynett.no>: Sep 14 01:34PM +0200 >> complexity at all. > Yeah. If a refactoring leads to a smaller binary size and > fewer lines of code, I'll probably use it. The size of the final binary is only loosely correlated with the size of the source code. And the size of the source code is only loosely correlated with the complexity of the source code. Executable size may be a useful metric in some circumstances, but it is not in itself a useful indicator of code complexity. (Nor is code complexity a particularly useful metric either.) |
Pavel <pauldontspamtolk@removeyourself.dontspam.yahoo>: Sep 13 11:38PM -0400 Szyk Cech wrote: > bool lResult(true); > for(int i(0); i < 5; ++i) > lArray[i].join(); Code might hang here because these 5 threads are waiting for a signal but the signal is only sent by up(), which is called below in the same thread. On a side note, down() should not really decrease the semaphore until it is > 0, and it should wait for that in a loop in this non-exact-waiting test. Sub-Note: sometimes (but I think not in your case) it is correct to wait for a signal not in a loop -- but this is only when the protocol is "exact", i.e. you know exactly which signals (notifications) every thread is supposed to get. I think I posted an example here a while ago of exact waiting for the dining philosophers problem -- see class EventQueue in https://groups.google.com/forum/#!original/comp.lang.c++/sV4WC_cBb9Q/PAIxbypUCAAJ . HTH -Pavel |
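As a point of reference, here is a minimal counting semaphore of the kind Pavel describes, with down() waiting in a loop and only decrementing once the count is positive. This is a generic sketch, not the original poster's class.

#include <condition_variable>
#include <mutex>

// Minimal counting semaphore: down() blocks in a loop until the count is
// positive, so spurious wakeups and down() calls that arrive before the
// matching up() are both handled correctly.
class semaphore
{
    std::mutex m;
    std::condition_variable cv;
    unsigned count;
public:
    explicit semaphore(unsigned initial = 0) : count(initial) { }

    void down()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return count > 0; }); // wait in a loop until count > 0
        --count;
    }

    void up()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            ++count;
        }
        cv.notify_one();
    }
};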
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 13 10:07PM -0700 On 9/13/2019 8:38 PM, Pavel wrote: > is > 0 and it shall wait for it in a loop in this non-exact-waiting test. > Sub-Note: sometimes (but I think, not in your case) it is correct to > wait for signal not in a loop -- Using condition variables directly? If the code thinks that the return from a condvar wait means a signal was actually sent, well, this is mistaken. However, there is something called the eventcount, that can abuse a condvar wait. The predicate is atomic. [...] |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 13 10:17PM -0700 On 9/13/2019 10:07 PM, Chris M. Thomasson wrote: > On 9/13/2019 8:38 PM, Pavel wrote: [...] > from a condvar wait means a signal was actually sent, well, this is > mistaken. However, there is something called the eventcount, that can > abuse a condvar wait. The predicate is atomic. The loop is different in an eventcount, iirc its something like: imagine trying to pop from any lock-free collection, and you want to make it wait on an empty condition, well: // quick pseudo-code ____________________________________________ eventcount ec; node* n = null; // FAST PATH while ((n = try_pop()) == null) { // SLOW PATH eventcount_key eckey = ec.begin(); if ((n = try_pop()) != null) { ec.leave(eckey); break; } // wait for it! ec.wait(eckey); } // n is an actual node we just popped! nice. :^) ____________________________________________ The double check for the predicate is required here. An eventcount is like a condition variable for lock-free algorithms. |
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: Sep 13 11:01PM -0700 On 9/13/2019 10:17 PM, Chris M. Thomasson wrote: > ____________________________________________ > The double check for the predicate is required here. An eventcount is > like a condition variable for lock-free algorithms. Wrt the use case above, here is a crude eventcount (called waitset) implementation that uses Relacy Race Detector, but its easy to turn into pure C++11. I put the standalone membars in some macros, sorry for that! #define mb_relaxed std::memory_order_relaxed #define mb_consume std::memory_order_consume #define mb_acquire std::memory_order_acquire #define mb_release std::memory_order_release #define mb_acq_rel std::memory_order_acq_rel #define mb_seq_cst std::memory_order_seq_cst #define mb_relaxed_fence() std::atomic_thread_fence(mb_relaxed) #define mb_consume_fence() std::atomic_thread_fence(mb_consume) #define mb_acquire_fence() std::atomic_thread_fence(mb_acquire) #define mb_release_fence() std::atomic_thread_fence(mb_release) #define mb_acq_rel_fence() std::atomic_thread_fence(mb_acq_rel) #define mb_seq_cst_fence() std::atomic_thread_fence(mb_seq_cst) class waitset { std::mutex m_mutex; std::condition_variable m_cond; std::atomic<bool> m_waitbit; VAR_T(unsigned) m_waiters; public: waitset() : m_waitbit(false), m_waiters(0) { } ~waitset() { bool waitbit = m_waitbit.load(mb_relaxed); unsigned waiters = VAR(m_waiters); RL_ASSERT(! waitbit && ! waiters); } private: void prv_signal(bool waitbit, bool broadcast) { if (! waitbit) return; m_mutex.lock($); unsigned waiters = VAR(m_waiters); if (waiters < 2 || broadcast) { m_waitbit.store(false, mb_relaxed); } m_mutex.unlock($); if (waiters) { if (! broadcast) { m_cond.notify_one($); } else { m_cond.notify_all($); } } } public: unsigned wait_begin() { m_mutex.lock($); m_waitbit.store(true, mb_relaxed); mb_seq_cst_fence(); return 0; } bool wait_try_begin(unsigned& key) { if (! m_mutex.try_lock($)) return false; m_waitbit.store(true, mb_relaxed); mb_seq_cst_fence(); return true; } void wait_cancel(unsigned key) { unsigned waiters = VAR(m_waiters); if (! waiters) { m_waitbit.store(false, mb_relaxed); } m_mutex.unlock($); } void wait_commit(unsigned key) { ++VAR(m_waiters); m_cond.wait(m_mutex, $); if (! --VAR(m_waiters)) { m_waitbit.store(false, mb_relaxed); } m_mutex.unlock($); } public: void signal() { mb_seq_cst_fence(); bool waitbit = m_waitbit.load(std::memory_order_relaxed); prv_signal(waitbit, false); } void broadcast() { mb_seq_cst_fence(); bool waitbit = m_waitbit.load(std::memory_order_relaxed); prv_signal(waitbit, true); } }; |