| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:08AM -0700 On 5/25/2021 12:07 AM, Chris M. Thomasson wrote: > The predicate is polled in a condvar as well, yawn. Getting tired. Just > try to think about it some more. I will get back to you tomorrow. I have > some other work to do. Will but out Relacy. Its fun to use. Bust out Relacy! Damn Typos! |
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 25 12:09AM -0700 On 5/25/2021 12:04 AM, Bonita Montero wrote: > The discussion was about whether mutexes could be realized only > with as counting mutexes and not whether they could be realized > with compare-and-swap only. You have a weak memory ! You said they must use a semaphore, or CAS or something. Nope, you are wrong. And we, Kaz and I had to correct you. I remember it. Anyway, I have to take a look at some other code right now. Will get back to you in a new thread sometime tomorrow, deal? Okay. |
MrSpook_43zpj5@zrdy.biz: May 25 09:02AM

On Mon, 24 May 2021 18:15:31 +0200
>> English comprehension problem?
> I've shown that fork()ing is much slower than creating a thread.
> And a lot slower still when pages are CoW'd.

For the use cases of fork the time of creation is irrelevant.
MrSpook_wBhaki@w1dhf2q0pnagn57.eu: May 24 08:26AM

On Fri, 21 May 2021 16:25:34 GMT
>> Your knowledge is very superficial when it comes to parallelization!
> You are both arguing past each other.
> Both forking and multithreading are useful in the right situation.

That's all I was saying. But for people who've only ever developed on
Windows, all they know is multithreading and they see it as the answer
to everything.
Bonita Montero <Bonita.Montero@gmail.com>: May 24 03:37PM +0200

> Copy-on-write takes cycles, of course - but you only need to do the copy
> on pages that are written. ...

Unfortunately you inherit the original fragmented heap of the parent
process, so there might be a lot of CoWs.

> in threading), and I expect forking to be more efficient if you need
> separation for reliability or security (since separation is the default
> for forking).

Why should forking ever be faster? There's no reason for this. If you
have decoupled threads which don't synchronize or simply read-share
memory, they could perform slightly faster, since a context switch
doesn't always include a TLB flush.

> You come from a Windows world, where forking is not supported.
> In the *nix world, it's a different matter.

My statement was a general statement - threaded applications are easier
to write, and fork()ing is at best equally performant, but usually less.

> efficient than multi-processing, or that it is more common - I am merely
> disagreeing with your blanket generalisations about multi-threading
> /always/ being better and /always/ being used.

It's better almost every time.
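The creation-cost claims traded back and forth here are easy to measure. Below
is a rough, POSIX-only micro-benchmark sketch - not code from this thread, and
the numbers depend heavily on page count, allocator state and kernel - that
times process creation via fork() against std::thread creation:

----------------------------------------------------------------------------
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    using steady = std::chrono::steady_clock;
    constexpr int N = 200;

    const auto t0 = steady::now();
    for( int i = 0; i < N; ++i )
    {
        const pid_t pid = fork();
        if( pid == 0 ) _exit( 0 );              // child: exit immediately
        if( pid < 0 ) { std::perror( "fork" ); return EXIT_FAILURE; }
        waitpid( pid, nullptr, 0 );             // parent: reap the child
    }
    const auto t1 = steady::now();

    for( int i = 0; i < N; ++i )
        std::thread( []{} ).join();             // create and join an empty thread
    const auto t2 = steady::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf( "fork + waitpid : %8.2f ms total\n", ms( t1 - t0 ).count() );
    std::printf( "thread + join  : %8.2f ms total\n", ms( t2 - t1 ).count() );
}
----------------------------------------------------------------------------

Note that this only measures raw creation/teardown; it says nothing about the
CoW and TLB effects discussed above.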
Bonita Montero <Bonita.Montero@gmail.com>: May 25 06:40AM +0200

>> Why ?
> About the "monitor-objects are the fastest" comment... ;^)

I'm just ignoring lock-free queues because they're impracticable,
because you have to poll.
| "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com>: May 24 10:07PM -0700 On 5/24/2021 9:40 PM, Bonita Montero wrote: >> About the "monitor-objects are the fastest" comment... ;^) > I'm just ignoring lock-free queues because they're > impracicable because you've to poll. Why do have to poll? |
Bonita Montero <Bonita.Montero@gmail.com>: May 25 08:19AM +0200

>> queue is full (but boost shows that the producer-side is also
>> possible with blocking).
> Why?

Because lock-free means no waiting in the kernel.
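For reference, a minimal sketch of the "monitor object" style queue being
contrasted with lock-free designs here - not code from this thread, names
invented for illustration - in which consumers sleep on a condition variable
instead of polling:

----------------------------------------------------------------------------
#include <condition_variable>
#include <deque>
#include <mutex>

template<class T>
class blocking_queue
{
    std::mutex              m_;
    std::condition_variable cv_;
    std::deque<T>           items_;
public:
    void push( T value )
    {
        {
            std::lock_guard<std::mutex> lock( m_ );
            items_.push_back( std::move( value ) );
        }
        cv_.notify_one();                   // wake one sleeping consumer
    }

    T pop()                                 // blocks; no busy-waiting
    {
        std::unique_lock<std::mutex> lock( m_ );
        cv_.wait( lock, [this]{ return !items_.empty(); } );
        T value = std::move( items_.front() );
        items_.pop_front();
        return value;
    }
};
----------------------------------------------------------------------------

A lock-free queue's pop() would instead return "empty" immediately, leaving
the caller to spin, back off, or combine it with some separate signalling
mechanism - which is the polling objection raised above.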
MrSpook_ry@939_6htz773e0qeya.eu: May 27 08:18AM

On Wed, 26 May 2021 19:04:56 +0200
> We don't discuss physical threading but how the language presents
> threading; and C++11-threading is by far more convenient than pure
> pthreads.

A Big Mac is convenient, doesn't make it the best meal. The pthreads
library is extremely powerful; perhaps the boilerplate setup code can be
a bit long-winded, but it's not hard to use.
MrSpook_b28s@jxgz6zklebr1.tv: May 27 10:24AM

On Thu, 27 May 2021 11:02:11 +0200
>> A Big Mac is convenient, doesn't make it the best meal. ...
> Programming pthreads directly has no advantages

Presumably you've never had to use 3-level locking or fine-grained
threading control. Also, the lack of proper interoperability with
signals makes C++ threading on Unix a bit of a toy, frankly.

> and makes a lot more work.

A bit, not a lot.
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:33PM +0200

> Presumably you've never had to use 3-level locking or fine-grained
> threading control. Also, the lack of proper interoperability with
> signals makes C++ threading on Unix a bit of a toy, frankly.

Signals are a plague. You can't write a library whose signal handling is
coordinated independently of the code it is later embedded into; both
have to be made from a single piece. Signals are even worse since there
can't be different handlers for synchronous signals in different
threads. And signal code always has a reentrancy problem, which makes
them an even bigger plague. And the ABI has to be designed around them
(red zone). That's not clean coding. Therefore: outsource asynchronous
signals to different threads.

Windows has more powerful handling for something like synchronous
signals, Structured Exception Handling. And for the few asynchronous
signals Windows knows, Windows spawns a different thread if a signal
happens.

And which threading control is needed beyond what C++ provides?
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:40PM +0200

>> control. Also, the lack of proper interoperability with signals makes C++
>> threading on Unix a bit of a toy, frankly.
> Signals are a plague....

And even more: if I use C++ threading, the only places where signals
could occur and where I can't see the EINTR are where I have locking
and/or waiting on a condition_variable. But at the places where I lock a
mutex or wait on a CV with pthreads, POSIX mandates that you re-lock the
mutex or re-wait on the CV - and that's exactly what C++11
synchronization does as well - so there's no difference here. So what
are you complaining about here?
Bonita Montero <Bonita.Montero@gmail.com>: May 27 12:55PM +0200

> mutex or re-wait on the CV - and that's exactly what C++11
> synchronization does as well - so there's no difference here. So what
> are you complaining about here?

Oh, I'm partially wrong here: pthread_mutex_lock behaves as described,
_but_ pthread_cond_wait handles the signal handler internally and
continues waiting afterwards. So there's still nothing different from
C++11 threads!
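The point about signals and waits fits in a few lines. A minimal sketch
(assuming a plain bool predicate; not code from the thread): whether the wait
is interrupted by a signal handler or wakes spuriously, the caller re-checks
the predicate and goes back to sleep, so pthread_cond_wait in a loop and a
C++11 condition_variable behave alike here:

----------------------------------------------------------------------------
#include <condition_variable>
#include <mutex>

std::mutex              m;
std::condition_variable cv;
bool                    ready = false;

void wait_for_ready()
{
    std::unique_lock<std::mutex> lock( m );
    while( !ready )         // loop swallows spurious/interrupted wakeups
        cv.wait( lock );    // same pattern as pthread_cond_wait in a while loop
}

void signal_ready()
{
    {
        std::lock_guard<std::mutex> lock( m );
        ready = true;
    }
    cv.notify_all();        // wake all waiters; they re-check the predicate
}
----------------------------------------------------------------------------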
MrSpook_zdNaq@bltmyc.gov.uk: May 27 03:39PM

On Thu, 27 May 2021 12:24:36 +0000 (UTC)
>>> dicussion. You are just being an asshole.
>> Good morning Mr Happy, things going well in Finland today?
> Fuck off, asshole.

Oh dear, another bad day? Have a lie down and cuddle the therapy teddy.
MrSpook_b_x@ukpge.org: May 27 03:54PM

On Thu, 27 May 2021 13:14:36 +0200
>>> Signals are a plague. You can't write a library which does have a
>> Says the windows programmer.
> Signals are simply a bad concept. No one would invent them today.

Behold! Our mighty sage has spoken - let it be known that interrupts are
a bad idea! Whatever you say, sweetie.
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 23 06:11PM -0700

Google Groups is broken. If you post to comp.lang.c++, it quietly drops
the "++" and posts to comp.lang.c. (You *might* be able to write the
newsgroup name as "comp.lang.c%2b%2b"; can someone confirm that?)

I've cross-posted this reply to both newsgroups, with followups to
comp.lang.c++. Any followups to this should go to comp.lang.c++.

(Consider using a Usenet server such as news.eternal-september.org with
a newsreader such as Thunderbird or Gnus (the latter runs under Emacs).)

(I've posted new text at the top of the quoted text to be sure it isn't
missed. The usual convention here is for new text to go *below* quoted
text.)

-- 
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 27 03:33PM -0700

> Why? size_t is guaranteed to hold the size of any object, which implies
> that it must be large enough to accommodate an object the size of the
> virtual address space. Generally its minimum size in bits is the same as
> long.

That's likely to be true, but it's not absolutely guaranteed.

size_t is intended to hold the size of any single object, but it may not
be able to hold the sum of sizes of all objects or the size of the
virtual address space. An implementation might restrict the size of any
single object to something smaller than the size of the entire virtual
address space. (Think segments.)

Also, I haven't found anything in the standard that says you can't at
least try to create an object bigger than SIZE_MAX bytes.
calloc(SIZE_MAX, 2) attempts to allocate such an object, and I don't see
a requirement that it must fail. If an implementation lets you define a
named object bigger than SIZE_MAX bytes, then presumably applying sizeof
to it would result in an overflow, and therefore undefined behavior.

Any reasonable implementation will simply make size_t big enough to hold
the size of any object it can create, but I don't see a requirement for
it.

-- 
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
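To make the calloc(SIZE_MAX, 2) point concrete, a small sketch - the null
return shown in the comment is what common implementations do in practice,
not something the standard is claimed here to require:

----------------------------------------------------------------------------
#include <cstdint>      // SIZE_MAX
#include <cstdio>
#include <cstdlib>

int main()
{
    // Request an object of more than SIZE_MAX bytes. Typical allocators
    // detect the nmemb*size overflow and return a null pointer, but the
    // argument above is that the standard doesn't obviously force failure.
    void* p = std::calloc( SIZE_MAX, 2 );
    std::printf( "calloc(SIZE_MAX, 2) returned %p\n", p );
    std::free( p );     // free(nullptr) is a no-op
}
----------------------------------------------------------------------------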
Bonita Montero <Bonita.Montero@gmail.com>: May 25 12:56PM +0200

You've got a 2GB address space and not a contiguous piece of memory
that fits your 900MB.
MrSpook_ddZgr4t4@okc9_pd48oig5.info: May 26 07:15AM

On Tue, 25 May 2021 19:57:47 +0300
>> real memory pages storing the string could be all over the place.
> Exactly. And if the memory allocator cannot find a free range of
> contiguous 900M addresses, guess what happens.

Things would have to be pretty badly FUBARed for the VM to run out of
virtual memory address space on a 64-bit system, given the 16 exabyte
max size!
MrSpook_dw5lvA4g@kak0_42x.edu: May 26 08:29AM

On Wed, 26 May 2021 10:54:27 +0300
>> memory address space on a 64-bit system, given the 16 exabyte max size!
> It looks like you have overlooked the small fact that the OP has a
> 32-bit program and does not want to upgrade to 64-bit at this moment.

Fair enough. In that case running out of address space would be pretty
easy given modern application sizes.
| "Alf P. Steinbach" <alf.p.steinbach@gmail.com>: May 26 06:33PM +0200 On 2021-05-25 03:46, Lynn McGuire wrote: > some slop > fseek (pOutputFile, 0, SEEK_SET); > outputFileBuffer.reserve (outputFileLength); [snip] In the above code `ftell` will fail in Windows if the file is 2GB or more, because in Windows, even in 64-bit Windows, the `ftell` return type `long` is just 32 bits. However, the C++ level iostreams can report the file size correctly: ---------------------------------------------------------------------------- #include <stdio.h> // fopen, fseek, ftell, fclose #include <stdlib.h> // EXIT_... #include <iostream> #include <fstream> #include <stdexcept> // runtime_error using namespace std; auto hopefully( const bool e ) -> bool { return e; } auto fail( const char* s ) -> bool { throw runtime_error( s ); } struct Is_zero {}; auto operator>>( int x, Is_zero ) -> bool { return x == 0; } const auto& filename = "large_file"; void c_level_check() { struct C_file { FILE* handle; ~C_file() { if( handle != 0 ) { fclose( handle ); } } }; auto const f = C_file{ fopen( ::filename, "rb" ) }; hopefully( !!f.handle ) or fail( "fopen failed" ); fseek( f.handle, 0, SEEK_END ) >> Is_zero() or fail( "fseek failed, probably rather biggus filus" ); const long pos = ftell( f.handle ); hopefully( pos >= 0 ) or fail( "ftell failed" ); cout << "`ftell` says the file is " << pos << " byte(s)." << endl; } void cpp_level_check() { auto f = ifstream( ::filename, ios::in | ios::binary ); f.seekg( 0, ios::end ); const ifstream::pos_type pos = f.tellg(); hopefully( pos != -1 ) or fail( "ifstream::tellg failed" ); cout << "`ifstream::tellg` says the file is " << pos << " bytes." << endl; } void cpp_main() { try { c_level_check(); } catch( const exception& x ) { cerr << "!" << x.what() << endl; cpp_level_check(); } } auto main() -> int { try { cpp_main(); return EXIT_SUCCESS; } catch( const exception& x ) { cerr << "!" << x.what() << endl; } return EXIT_FAILURE; } ------------------------------------------------------------------------------- When I tested this with `large_file` as a copy of the roughly 4GB "Bad.Boys.for.Life.2020.1080p.WEB-DL.DD5.1.H264-FGT.mkv", I got [c:\root\dev\explore\filesize] > b !ftell failed `ifstream::tellg` says the file is 4542682554 bytes. - Alf |
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 31 02:43PM -0700

> "int64_t", but someone would first have to check if it affected any real
> implementations before making such a change. But yes, that might be a
> way out and a way forward.
[...]

That would allow intmax_t to be 128 bits on implementations with 128-bit
long long (are there any?), which seems like a good idea.

I think the point of both these proposals is purely for backward
compatibility, avoiding breaking code that already uses [u]intmax_t.

Both of them destroy the point of intmax_t, providing a type that's
guaranteed to be the longest integer type. Should intmax_t be
deprecated?

Perhaps some future version of C might have enough capabilities to allow
defining a longest integer type without causing ABI issues the way
intmax_t did.

And since, as far as I've been able to tell, no implementation supports
extended integer types, I wonder if they should be reconsidered.

-- 
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
Keith Thompson <Keith.S.Thompson+u@gmail.com>: May 31 02:44PM -0700

Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
[...]
> Perhaps some future version of C might have enough capabilities to
> allow defining a longest integer type without causing ABI issues
> the way intmax_t did.

And I did it again. s/C/C++/, or s/comp.lang.c++/comp.lang.c/.

[...]

-- 
Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */
| "daniel...@gmail.com" <danielaparker@gmail.com>: May 31 03:20PM -0700 On Monday, May 31, 2021 at 5:43:23 PM UTC-4, Keith Thompson wrote: > Both of them destroy the point of intmax_t, providing a type that's > guaranteed to be the longest integer type. Should intmax_t be > deprecated? Yes. "Give me the biggest integer type there is" is not a reasonable thing to ask for, in any code that is intended to be portable across platforms or over time on the same platform. You may as well have intwhatever_t. Daniel |
Lynn McGuire <lynnmcguire5@gmail.com>: May 24 08:46PM -0500

I am getting std::bad_alloc from the following code when I try to
reserve a std::string of size 937,180,144:

   std::string filename = getFormsMainOwner () -> getOutputFileName ();
   FILE * pOutputFile = nullptr;
   errno_t err = fopen_s_UTF8 ( & pOutputFile, filename.c_str (), "rt");
   if (err == 0)
   {
      std::string outputFileBuffer;
      // need to preallocate the space in case the output file is a gigabyte or more, PMR 6408
      fseek (pOutputFile, 0, SEEK_END);
      size_t outputFileLength = ftell (pOutputFile) + 42; // give it some slop
      fseek (pOutputFile, 0, SEEK_SET);
      outputFileBuffer.reserve (outputFileLength);

Any thoughts here on how to handle the std::bad_alloc in std::string
reserve?

Thanks,
Lynn
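One hedged way to approach the question, sketched below (try_reserve is an
invented helper, not part of the original code): treat the big reserve() as
an optimisation that is allowed to fail, catch std::bad_alloc, and report or
fall back. In a 32-bit process the more robust fix is usually to avoid
needing ~900MB of contiguous address space at all, e.g. by processing the
file in chunks.

----------------------------------------------------------------------------
#include <cstddef>
#include <cstdio>
#include <new>          // std::bad_alloc
#include <string>

// Attempt the preallocation; report failure instead of propagating the throw.
bool try_reserve( std::string& buffer, std::size_t n )
{
    try {
        buffer.reserve( n );
        return true;
    }
    catch( const std::bad_alloc& ) {
        return false;   // no contiguous block of n bytes was available
    }
}

int main()
{
    std::string outputFileBuffer;
    const std::size_t outputFileLength = 937180144 + 42;   // size from the post

    if( !try_reserve( outputFileBuffer, outputFileLength ) )
    {
        std::fprintf( stderr, "could not preallocate %zu bytes\n",
                      outputFileLength );
        // Fall back here: let the string grow as data arrives, or process
        // the output file chunk by chunk instead of holding it all at once.
    }
}
----------------------------------------------------------------------------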