- Another test of zapcc - 2 Updates
- Custom Memory Manager & placement new - 1 Update
- Beginner-friendly GUI - 2 Updates
- So sorry about that. - 1 Update
- && - 2 Updates
- Again about programming... - 1 Update
- What is programming? - 1 Update
- Read again... - 3 Updates
David CARLIER <devnexen@gmail.com>: Nov 27 12:29PM -0800

http://devnexen.blogspot.com/2016/11/better-stronger-faster-there-is-zapcc.html
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Nov 27 10:36PM +0100 On 27.11.2016 21:29, David CARLIER wrote: > http://devnexen.blogspot.com/2016/11/better-stronger-faster-there-is-zapcc.html That looks like spam. Even your last name looks like spam. How about a few words about what zapcc is and what the test showed, and why you think it's relevant to discuss here or relevant for us to know about? Cheers, - Alf (and no, I didn't follow the link) |
scott@slp53.sl.home (Scott Lurndal): Nov 27 06:06PM

>Don't forget if any process writes to the shared memory, all access to
>the shared memory must be protected by mutex semaphores. That slows
>things down a bit - but it's still a lot faster than any other IPC.

There are many ways to ensure correctness of shared data (between threads or between processes) without using traditional mutexes or semaphores. Atomic accesses, for example, can easily be used to maintain shared counters (e.g. gcc __sync_fetch_and_add, __sync_and_and_fetch, et alia), while compare-and-swap can be used to maintain linked lists without locks. Spinlocks, read-copy-update and lock-free algorithms are all common in such uses.

As a note, if using mmap(2) with MAP_SHARED or shmat(2) between processes with disparate address spaces, you must use a special flavor of the pthread mutex with the PTHREAD_PROCESS_SHARED attribute (see pthread_mutexattr_setpshared) or use the traditional unix semop(2) functions. Regular pthread mutexes are limited to use within a single process.
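To make the process-shared case concrete, here is a minimal sketch of a PTHREAD_PROCESS_SHARED mutex living in a MAP_SHARED mapping and used across fork(). It assumes Linux or BSD (for MAP_ANONYMOUS), compiles with -pthread, omits error handling, and the Shared struct and iteration counts are invented for the example:

#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cassert>

struct Shared {
    pthread_mutex_t mutex;  // initialized below as PTHREAD_PROCESS_SHARED
    long counter;
};

int main() {
    // Anonymous MAP_SHARED memory is inherited across fork() and is
    // visible to both parent and child.
    void* p = mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);
    Shared* sh = static_cast<Shared*>(p);

    // A plain pthread_mutex_init would give a process-private mutex;
    // the pshared attribute is what makes it legal across processes.
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->mutex, &attr);
    sh->counter = 0;

    if (fork() == 0) {  // child process
        for (int i = 0; i < 100000; ++i) {
            pthread_mutex_lock(&sh->mutex);
            ++sh->counter;
            pthread_mutex_unlock(&sh->mutex);
        }
        _exit(0);
    }
    for (int i = 0; i < 100000; ++i) {  // parent process
        pthread_mutex_lock(&sh->mutex);
        ++sh->counter;
        pthread_mutex_unlock(&sh->mutex);
    }
    wait(nullptr);
    assert(sh->counter == 200000);  // no lost updates
    return 0;
}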
Christian Steins <cs01@quantentunnel.de>: Nov 27 03:37PM +0100

On 26.11.2016 at 17:33, Christian Steins wrote:
>> neoGFX is coming soon!
> Cool, thanks, will check it out.
> http://neogfx.org/

The TestApp (test.exe) on that site is not working (blank window).

Chris
"Öö Tiib" <ootiib@hot.ee>: Nov 27 07:02AM -0800 On Sunday, 27 November 2016 16:37:18 UTC+2, Christian Steins wrote: > > Cool, thanks, will check it out. > > http://neogfx.org/ > The TestApp (test.exe) on that site is not working (blank window). Most likely it is because MS has screwed up OpenGL support of Windows and so people have to install and configure that support manually. |
Ramine <ramine@1.1>: Nov 26 10:17PM -0500

Hello,

Sorry, I have posted about programming in general. I think I should not post such posts here; comp.programming is better suited for that.

So, sorry about that.

Thank you,
Amine Moulay Ramdane.
Popping mad <rainbow@colition.gov>: Nov 27 03:06AM

What exactly is this

T& operator=(T&& other) // move assignment
{
    assert(this != &other); // self-assignment check not required
    delete[] mArray; // delete this storage
    mArray = std::exchange(other.mArray, nullptr); // leave moved-from in valid state
    return *this;
}

specifically

T&& other

http://en.cppreference.com/w/cpp/language/operators
Ian Collins <ian-news@hotmail.com>: Nov 27 04:43PM +1300

On 11/27/16 04:06 PM, Popping mad wrote:
> }
> specifically T&& other
> http://en.cppreference.com/w/cpp/language/operators

http://en.cppreference.com/w/cpp/language/reference

and

http://en.cppreference.com/w/cpp/language/move_assignment

--
Ian
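To make that concrete, here is a minimal self-contained sketch of a class with a move constructor and move assignment (the class name Buffer is invented for the example; mArray matches the quoted snippet):

#include <cassert>
#include <cstddef>
#include <utility>

class Buffer {
    int* mArray = nullptr;
public:
    explicit Buffer(std::size_t n) : mArray(new int[n]{}) {}
    ~Buffer() { delete[] mArray; }

    Buffer(const Buffer&) = delete;             // keep the example focused on moves
    Buffer& operator=(const Buffer&) = delete;

    Buffer(Buffer&& other) noexcept             // move constructor
        : mArray(std::exchange(other.mArray, nullptr)) {}

    Buffer& operator=(Buffer&& other) noexcept  // move assignment: Buffer&& binds to rvalues
    {
        assert(this != &other);                         // as in the quoted snippet
        delete[] mArray;                                // release current storage
        mArray = std::exchange(other.mArray, nullptr);  // steal, leave source valid but empty
        return *this;
    }

    bool empty() const { return mArray == nullptr; }
};

int main() {
    Buffer a(10), b(20);
    a = std::move(b);   // std::move casts b to Buffer&&, selecting operator=(Buffer&&)
    assert(b.empty());  // b was left in a valid but empty state
    return 0;
}

The key point: "T&& other" declares an rvalue reference parameter, so this overload is chosen when the right-hand side is a temporary or has been cast with std::move, letting the object steal the resource instead of copying it.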
Ramine <ramine@1.1>: Nov 26 09:40PM -0500

Hello,

Again about programming...

If you have read my previous post, you know that in programming we are always learning how to scale to much bigger and more complex problems, and by doing so we are structuring our thinking and reasoning, as in mathematical logic, and also using programming techniques such as calculating time complexity and space complexity.

Take reusability, for example: intelligence and scalability are inherent in reusability in programming. If you reuse smartly, you will be able to scale fast and to become smarter; this is one way to support my assertion above.

Thank you,
Amine Moulay Ramdane.
Ramine <ramine@1.1>: Nov 26 08:56PM -0500

Hello....

What is programming?

I think the most important goal of programming is to make us scale.

Take mathematical logic, for example. You will learn in mathematical logic that:

p -> q

is equivalent to:

not(q) -> not(p)

because it can be inferred mathematically from:

p -> q is equivalent to: not(p) or q

Indeed, rewriting not(q) -> not(p) the same way gives not(not(q)) or not(p), that is, q or not(p), which is the same as not(p) or q.

So what exactly are we doing with this mathematical logic? It permits us to scale; that is, it permits us to make our reasoning more complex by formalizing it.

Programming is the same: we are always learning in programming how to scale to much bigger and more complex problems, and by doing so we are structuring our thinking and reasoning, as in mathematical logic, and also using programming techniques such as calculating time complexity and space complexity.

Thank you,
Amine Moulay Ramdane.
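As a quick sanity check of that equivalence, here is a small, purely illustrative C++ program that verifies it over all four truth assignments:

#include <cassert>

int main() {
    for (int pi = 0; pi < 2; ++pi) {
        for (int qi = 0; qi < 2; ++qi) {
            bool p = (pi != 0), q = (qi != 0);
            bool implication    = !p || q;  // p -> q, written as not(p) or q
            bool contrapositive = q || !p;  // not(q) -> not(p), i.e. not(not(q)) or not(p)
            assert(implication == contrapositive);
        }
    }
    return 0;
}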
Ramine <ramine@1.1>: Nov 26 08:29PM -0500

Hello....

What have I learned in parallel programming during all my years of learning?

There is an important factor in programming called scalability, and that is where I have specialized in parallel programming. For example, you have the criterion called maintainability, and maintainability is related to scalability: if you want to make your code more maintainable, you have to learn the guidelines and the programming patterns to know how to scale better on the criterion of the time it costs to maintain your program, and that is what you are learning on this C++ group too, with templates etc. Reusability is another criterion related to scalability, because what you are trying to do is make your classes, functions, and programs reusable so that you can scale on productivity, for example. Portability is also a criterion related to scalability, because when your code is portable you will be able to run it on more operating systems and more hardware, and that is scalability too. In parallel programming it is the same: you want your code and your program to be scalable.

So here is what I have learned, and how I have made my parallel programming more scalable. Take for example my efficient threadpool engine with priorities that scales well, here it is:

https://sites.google.com/site/aminer68/an-efficient-threadpool-engine-with-priorities-that-scales-well

If you take a look at it, here is what I have done to make it scale well:

1- It minimizes cache-line transfers as much as possible, by using multiple queues or multiple stacks, by using lock striping, and by minimizing the cache-line transfers inside the queues or stacks with a better algorithm (see the lock-striping sketch after this post).

2- It uses work-stealing to be more efficient.

3- It can use processor groups on Windows, so that it can use more than 64 logical processors, so that it scales well.

4- It's NUMA-aware and NUMA-efficient.

That's how I have learned to make my programs more scalable.

Thank you,
Amine Moulay Ramdane.
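For illustration only: the following is not Amine's engine, just a generic sketch of the lock-striping idea from point 1. Tasks are spread across several small queues, each with its own mutex and padded to its own cache line, and a consumer that finds its stripe empty scans the others, which doubles as a naive form of work-stealing (all names are invented):

#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class StripedQueue {
    static constexpr std::size_t kStripes = 8;
    struct alignas(64) Stripe {  // 64-byte alignment: each stripe gets its own cache line
        std::mutex m;
        std::deque<std::function<void()>> tasks;
    };
    Stripe stripes_[kStripes];
public:
    void push(std::function<void()> task) {
        // Hash the producing thread onto a stripe so that different
        // producers usually contend on different mutexes.
        std::size_t i = std::hash<std::thread::id>{}(std::this_thread::get_id()) % kStripes;
        std::lock_guard<std::mutex> lock(stripes_[i].m);
        stripes_[i].tasks.push_back(std::move(task));
    }
    bool try_pop(std::function<void()>& task, std::size_t start) {
        // Scan stripes beginning at a per-consumer offset; taking work
        // from someone else's stripe is the (naive) stealing step.
        for (std::size_t k = 0; k < kStripes; ++k) {
            Stripe& s = stripes_[(start + k) % kStripes];
            std::lock_guard<std::mutex> lock(s.m);
            if (!s.tasks.empty()) {
                task = std::move(s.tasks.front());
                s.tasks.pop_front();
                return true;
            }
        }
        return false;
    }
};

int main() {
    StripedQueue q;
    q.push([] { /* some work */ });
    std::function<void()> task;
    if (q.try_pop(task, 0)) task();
    return 0;
}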
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Nov 27 02:34AM

On 27/11/2016 01:29, Ramine wrote:
> more than 64 logical processors , so that it scales well.
> 4- It's NUMA-aware and NUMA efficient.
> That's how i have learned more to make my programs more scalable.

You posted the same spam post twice so you could correct a single spelling mistake of "were" instead of "where"? Are you fucking mental? Mate, fuck off.

/Flibble
Ian Collins <ian-news@hotmail.com>: Nov 27 03:41PM +1300

On 11/27/16 03:34 PM, Mr Flibble wrote:
> You posted the same spam post twice so you could correct a single
> spelling mistake of "were" instead of "where"? Are you fucking mental?
> Mate, fuck off.

Abusing spam bots now?

--
Ian