- Scalable Parallel C++ Conjugate Gradient Linear System Solver Library for Windows and Linux was updated to version 1.66 - 1 Update
- Let's call the <=> operator "trike" because of the math term - 19 Updates
- Scalable parallel implementation of Conjugate Gradient Linear Sparse System Solver library version 1.68 - 1 Update
- Std::any vs std::optional - 2 Updates
- Perl to C++ - 1 Update
- initial value of pointer value in std map ? - 1 Update
Sky89 <Sky89@sky68.com>: May 15 11:12PM -0400 Hello.. Scalable Parallel C++ Conjugate Gradient Linear System Solver Library for Windows and Linux was updated to version 1.66 Author: Amine Moulay Ramdane Description: This library contains a Scalable Parallel implementation of a Conjugate Gradient Dense Linear System Solver that is NUMA-aware and cache-aware, and it also contains a Scalable Parallel implementation of a Conjugate Gradient Sparse Linear System Solver that is cache-aware. Conjugate Gradient is known to converge to the exact solution in n steps for a matrix of size n, and was historically first seen as a direct method because of this. However, people later realized that it works really well if you stop the iteration much earlier: you will often get a very good approximation after far fewer than n steps. In fact, we can analyze how fast Conjugate Gradient converges. The end result is that Conjugate Gradient is used as an iterative method for large linear systems today. You can download it from: https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library Please download the zip file and read the readme file inside the zip to learn how to use it. Language: GNU C++, Visual C++ and C++Builder Operating Systems: Windows, Linux, Unix and Mac OS X on (x86) Thank you, Amine Moulay Ramdane. |
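For readers who have not met the method, below is a minimal single-threaded textbook sketch of Conjugate Gradient for a dense symmetric positive-definite system. It is not taken from the library above and has none of its NUMA-aware or cache-aware optimizations; the tol and max_iter values are arbitrary illustration choices. It merely shows why stopping after far fewer than n iterations usually gives a good approximation.

#include <cmath>
#include <cstddef>
#include <vector>

// Solve A*x = b for a dense symmetric positive-definite matrix A
// (row-major, n*n), stopping once the residual norm falls below tol
// or after max_iter iterations, whichever comes first.
std::vector<double> conjugate_gradient(const std::vector<double>& A,
                                       const std::vector<double>& b,
                                       std::size_t n,
                                       double tol = 1e-10,
                                       std::size_t max_iter = 1000)
{
    std::vector<double> x(n, 0.0), r(b), p(b), Ap(n);
    double rs_old = 0.0;
    for (std::size_t i = 0; i < n; ++i) rs_old += r[i] * r[i];

    for (std::size_t k = 0; k < max_iter && std::sqrt(rs_old) > tol; ++k) {
        // Ap = A * p  (the only O(n^2) step per iteration)
        for (std::size_t i = 0; i < n; ++i) {
            double s = 0.0;
            for (std::size_t j = 0; j < n; ++j) s += A[i * n + j] * p[j];
            Ap[i] = s;
        }
        double pAp = 0.0;
        for (std::size_t i = 0; i < n; ++i) pAp += p[i] * Ap[i];
        const double alpha = rs_old / pAp;      // optimal step length

        double rs_new = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];               // move along the search direction
            r[i] -= alpha * Ap[i];              // update the residual
            rs_new += r[i] * r[i];
        }
        for (std::size_t i = 0; i < n; ++i)
            p[i] = r[i] + (rs_new / rs_old) * p[i];   // next conjugate direction
        rs_old = rs_new;
    }
    return x;
}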
legalize+jeeves@mail.xmission.com (Richard): May 14 10:33PM [Please do not mail me a copy of your followup] boltar@cylonHQ.com spake the secret code >The posix API libraries come with every C/C++ compiler installed on every >unix system since god knows when. IMO that makes them standard. Spoken like a true unix bigot. -- "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline> The Terminals Wiki <http://terminals-wiki.org> The Computer Graphics Museum <http://computergraphicsmuseum.org> Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com> |
boltar@cylonHQ.com: May 15 08:23AM On Mon, 14 May 2018 16:59:33 -0400 >needing to worry a lot about the difficulties of porting data from one >platform to another - but I doubt that dealing with such difficulties >was the main point of your comment. And the HDF libraries are part of the official standard are they? Remove all the parts of your program that use anything except the official standard libraries then see how well they work. |
boltar@cylonHQ.com: May 15 08:26AM On Mon, 14 May 2018 22:33:14 +0000 (UTC) >>The posix API libraries come with every C/C++ compiler installed on every >>unix system since god knows when. IMO that makes them standard. >Spoken like a true unix bigot. You can also use posix on Windows, though since the Windows kernel is so limited in certain areas (process spawning & control, IPC, full signal handling) you'll only ever be using a subset. |
Ian Collins <ian-news@hotmail.com>: May 15 08:41PM +1200 > You can also use posix on Windows, though since the windows kernel is so > limited in certain areas (process spawing & control, IPC, full signal > handling) you'll only ever be using a subset. You don't have to now that we have threading as part of the standard library.... -- Ian. |
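For reference, a minimal portable example of the standard-library threading Ian is referring to: no POSIX or Win32 calls, just C++11. The thread count of 4 is an arbitrary choice for illustration.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    std::mutex io_mutex;                  // serialises access to std::cout
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([i, &io_mutex] {
            std::lock_guard<std::mutex> lock(io_mutex);
            std::cout << "hello from thread " << i << '\n';
        });
    }
    for (auto& t : workers) t.join();     // wait for all workers to finish
}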
boltar@cylonHQ.com: May 15 10:17AM On Tue, 15 May 2018 20:41:44 +1200 >> limited in certain areas (process spawing & control, IPC, full signal >> handling) you'll only ever be using a subset. >You don't have to now we have threading as part of the standard library.... No sane programmer uses threading if multi-process can get the job done just as well. The only reason it is so popular is that it's the default parallel programming model on Windows, which is historical baggage down to the fact that Windows had to run with the primitive MMUs on the 8086 and 286, where it was easier to implement threading and CMT than proper pre-emptive multitasking. |
Ian Collins <ian-news@hotmail.com>: May 15 10:52PM +1200 >> You don't have to now we have threading as part of the standard library.... > No sane programmer uses threading if multi process can get the job done just > as well. Yeah right... > The only reason is so popular is because its the default parallel > programming mode in Windows Nope. Threads were popular before windows was inflicted on an unsuspecting world. -- Ian. |
boltar@cylonHQ.com: May 15 11:32AM On Tue, 15 May 2018 22:52:05 +1200 >> No sane programmer uses threading if multi process can get the job done just >> as well. >Yeah right... Well I guess if you want to have to worry about race conditions, deadlocking and thread-safe APIs when you don't have to then go for it. >> programming mode in Windows >Nope. Threads were popular before windows was inflicted on an >unsuspecting world. Obviously the concept had been around long before Windows, but Windows made it popular in general programming because the sort of people who started to code on Windows had until then usually only encountered single-threaded programs in DOS or on 8-bit machines, with any "multitasking" being done by interrupts. Few had used unix and even fewer had used mainframes (where threads started out). |
David Brown <david.brown@hesbynett.no>: May 15 01:39PM +0200 >> Yeah right... > Well I guess if you want to have to worry about race conditions, deadlocking > and threadsafe APIs when you don't have to then go for it. If you think that multi-processing means you don't have to worry about how threads of execution interact - and use locks, queues, messages, etc., appropriately - then you are not ready to write multi-process code. Multi-processing and multi-threading have their advantages and disadvantages. In both cases, you need to understand what you are doing in order to avoid subtle problems. And some tasks are best split as threads, some tasks as processes. > code on windows had before then usually only encountered single thread programs > in DOS or on 8 bit machines with any "multitasking" being done by interrupts. > Few had used unix and even less had used mainframes (where threads started out). Threads have existed in all sorts of systems - and they are used on all sorts of systems. The world is not restricted to *nix and Windows - most operating systems in current use have threads but not processes. It is certainly the case that Windows came quite late to the multi-processing game, and even now multiple processes are much more expensive on Windows than they would be on *nix on the same hardware. This means that where you have a reasonable choice of strategies, it is more common to choose a multi-threading architecture in Windows where you might have picked a multi-processing architecture in *nix. But that is only a bias, not an absolute divide - and *nix software is full of threading. |
boltar@cylonHQ.com: May 15 01:23PM On Tue, 15 May 2018 13:39:08 +0200 >If you think that multi-processing means you don't have to worry about >how threads of execution interact - and use locks, queues, messages, >etc., appropriately - then you are not ready to write multi-process code. None of the above lead to race conditions or deadlocks unless you deliberately code them in, which is the complete opposite of threading. Perhaps you're the one who needs to read up a bit more on the topic, though given that your coding is limited to writing data processing functions I'm not entirely surprised you're not too up on it. >disadvantages. In both cases, you need to understand what you are doing >in order to avoid subtle problems. And some tasks are best split as >threads, some tasks as processes. Unfortunately some people (including the creators of Java) think threads are the answer to everything. They're a solution to a small subset of problems. >Threads have existed in all sorts of systems - and they are used on all >sorts of systems. The world is not restricted to *nix and Windows - >most operating systems in current use have threads but not processes. What is your definition of an operating system, since almost all embedded systems these days either use a Linux, QNX or embedded Windows variant, and the ones that don't are usually running a single program with no underlying OS to speak of, just ISRs. >you might have picked a multi-processing architecture in *nix. But that >is only a bias, not an absolute divide - and *nix software is full of >threading. It is, and a lot of it is completely unnecessary and complicates the code for little benefit. Sometimes I've wondered whether the people who wrote it not only didn't have a clue about multi-process designs, but had even heard of multiplexing. When a program fires off a thread just to service a packet you know the coder doesn't have much of a clue. |
David Brown <david.brown@hesbynett.no>: May 15 04:09PM +0200 >> etc., appropriately - then you are not ready to write multi-process code. > None of the above lead to race conditions or deadlocks unless you deliberately > code them in. Which is the complete opposite of threading. No, it is /exactly/ the same in multi-processing and multi-threading. A common deadlock situation is when you have two locks, A and B, and one line of execution acquires A then B, the other tries to acquire B then A. If each gets halfway, then the combined system deadlocks. This is precisely the same if the locks are external file locks shared by processes, mutexes shared by threads within a process, or any other type of synchronisation mechanism. There is /no/ difference. > one who needs to read up a bit more on the topic though given your coding is > limited to writing data processing functions I'm not entirely surprised you're > not too up on the topic. How interesting to hear - my coding is limited to writing data processing functions, is it? Could it perhaps be that you are mixing up posters in your eagerness to try to insult people? >> threads, some tasks as processes. > Unfortunately some people (including the creators of java) think threads are > the answer to everything. They're a solution to a small subset of problems. Threads are not the answer to everything - but they /are/ useful, and a good solution to many things. The same applies to multiple processes. > these days either use a linux, qnx or embedded windows variant and the ones > that don't are usually running a single program with no underlying OS to speak > of, just ISRs. There is a saying - it is better to remain silent and be thought a fool, than to open your mouth and prove it. Over a certain size, Linux is dominant in complex embedded systems. Embedded Windows variants are almost negligible in the market these days - as is QNX, outside a few niche areas. And while most embedded systems are "bare metal" with no OS as such, there is a very large group in the middle with real-time operating systems like FreeRTOS, mbed, Contiki, embOS, eCos, Integrity, MicroC/OS-II, MQX, Neutrino, DSP/BIOS, Nut/OS, OpenRTOS, SafeRTOS, RTEMS, ThreadX, VxWorks, etc. I think without exception these support multiple threads (with cooperative scheduling, or pre-emptive scheduling, or both) but not multiple processes. And though I don't have numbers - I doubt if anyone does - I expect these far outweigh all *nix and Windows systems together, including Android. > have a clue about multi process, but whether they've even heard of multiplexing. > When a program fires off a thread just to service a packet you know the coder > doesn't have much of a clue. Firing off a thread to service a packet is a lot more efficient than firing off a process to service it. Whether it is more or less efficient than handling it in the current thread will depend entirely on the application. Prejudice against threads certainly shows that the coder does not understand them - just as much as an obsession with always using threads. |
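A bare-bones sketch of the lock-ordering deadlock described above, written with std::mutex; the same hazard exists with any other synchronisation mechanism, including file locks shared between processes. std::scoped_lock (C++17) or std::lock acquires both locks as a single operation and so avoids it. The function names are made up for illustration.

#include <mutex>
#include <thread>

std::mutex A, B;

void worker1()
{
    std::lock_guard<std::mutex> la(A);   // takes A first...
    std::lock_guard<std::mutex> lb(B);   // ...then waits for B
}

void worker2()
{
    std::lock_guard<std::mutex> lb(B);   // takes B first...
    std::lock_guard<std::mutex> la(A);   // ...then waits for A
}

// If worker1 and worker2 run concurrently and each grabs its first lock
// before the other grabs its second, both wait forever: a deadlock.

void worker_safe()
{
    // Acquires A and B together with a deadlock-avoidance algorithm
    // (use std::lock plus adopt_lock before C++17).
    std::scoped_lock both(A, B);
}

int main()
{
    std::thread t1(worker_safe), t2(worker_safe);   // never deadlocks
    t1.join();
    t2.join();
}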
James Kuyper <jameskuyper@alumni.caltech.edu>: May 15 10:25AM -0400 > And the HDF libraries are part of the official standard are they? Remove all > the parts of your program that use anything except the official standard > libraries then see how well they work. I'd merely have to include the functional equivalent of the HDF library routines in my own code - which would not require anything non-standard, but would involve a lot of platform-specific configuration, to reflect the different representations used on different platforms for the C standard types. The actual HDF library does use some non-standard platform-specific facilities - but use of those facilities is optional, they are not used when installing HDF on a platform that doesn't support them, so it would be trivial to create a less efficient version of the library that didn't use them at all. Platform-specific configuration is performed by the HDF library installation script, which is fairly complicated, because the HDF library can be installed and work correctly on a wide variety of platforms (but not all). My code is only required to be portable to platforms where it can be installed. It wouldn't be particularly difficult to copy that code, either, since HDF is open-source. I've often had to go deep inside HDF library code to resolve bugs, so I'm fairly familiar with it. They often were bugs in my code causing the HDF code to fail, but on a few occasions I uncovered bugs in the HDF code, which I reported to the HDF Group. Directly copying their code might raise intellectual property rights issues, but since I have no need to actually bother performing such a transformation, I haven't bothered to look into those issues. |
boltar@cylonHQ.com: May 15 02:39PM On Tue, 15 May 2018 16:09:55 +0200 >A common deadlock situation is when you have two locks, A and B, and one >line of execution acquires A then B, the other tries to acquire B then >A. If each gets halfway, then the combined system deadlocks. Yes, except that in multi-process designs you almost never need explicit locking whereas in threaded systems you almost always do. That was my point about coding them in, though apparently it whooshed right over your head. >How interesting to hear - my coding is limited to writing data >processing functions, is it? Could it perhaps be that you are mixing up >posters in your eagerness to try to insult people? Well quite possibly, it seems to be interchangeable whack-a-mole on here. >> the answer to everything. They're a solution to a small subset of problems. >Threads are not the answer to everything - but they /are/ useful, and a >good solution to many things. The same applies to multiple processes. They're a good solution to some very specific things. >> of, just ISRs. >There is a saying - it is better to remain silent and be thought a fool, >than to open your mouth and prove it. Take your own advice then. >or pre-emptive scheduling, or both) but not multiple processes. And >though I don't have numbers - I doubt if anyone does - I expect these >far outweigh all *nix and Windows systems together, including Android. Well I don't have the numbers either, but I suspect it's highly unlikely since these systems would be limited to very niche areas such as aerospace, whereas embedded Linux is in virtually every embedded area you can think of these days, from industrial control systems to routers to, as you mentioned, smartphones. >firing off a process to service it. Whether it is more or less >efficient than handling it in the current thread will depend entirely on >the application. If the packet processing is complex and can't realistically be done in a single thread in sufficient time using multiplexing then you'd have a thread or process pool; you certainly wouldn't spawn a new one for each packet. |
David Brown <david.brown@hesbynett.no>: May 15 05:39PM +0200 > Yes, except that in multi process you almost never need explicit locking > whereas in threaded systems you almost always do. That was my point about > coding them in though apparently it whooshed right over your head. If the processes or threads need to communicate, they need to use methods to communicate. These are fundamentally the same for threads and processes. You can do it with locks, mutexes, semaphores, shared memory, message passing, file IO, pipes, and many other ways. Any of these can give you deadlocks, livelocks, races, and other problems if you handle them incorrectly. All of them can work fine if you handle them correctly. The underlying principles are not different. There is, I suppose, a tendency with threading to use lower-level primitives that are a good deal more efficient, but might need more care - like locks or shared memory. Similarly, there may be a tendency in multi-processing setups to use less efficient but easier to understand solutions like pipes. So if you don't really understand how to deal well with communicating tasks in parallel, multi-processing and pipes might be less error-prone than using more efficient methods. Correctness trumps efficiency every time, of course, but the best thing is to learn how to manage parallel tasks correctly, then choose the architecture and communication methods based on clarity, efficiency, scalability, flexibility, etc., rather than ignorance or prejudice. >> processing functions, is it? Could it perhaps be that you are mixing up >> posters in your eagerness to try to insult people? > Well quite possibly, it seems to be interchangable whack-a-mole on here. If you make wild unsubstantiated claims, you're likely to be challenged by more than one person in a technical newsgroup. This is your clue that you are on thin ice. >> Threads are not the answer to everything - but they /are/ useful, and a >> good solution to many things. The same applies to multiple processes. > They're a good solution to some very specific things. They are a good solution to many things, as long as you understand how to use them. If you have a tendency to panic and worry about deadlock when you see them, you will have trouble understanding the point of them. >> There is a saying - it is better to remain silent and be thought a fool, >> than to open your mouth and prove it. > Take your own advice then. I have worked on small embedded systems for 25 years - I know a fair bit about them. (That also includes an understanding of what a small part of a big field any one developer will ever meet.) I don't know /your/ experience, but I'm guessing it is pretty low in this area. > embedded linux is in virtually every embedded area you can think of these > days from industrial control systems to routers to as you mentioned, > smartphones. Your wild guesses don't count for much. Just for one example, a modern car will have maybe 100-300 microcontrollers. About a quarter of these will be running an operating system rather than bare metal code, with maybe two or three running Linux (or, conceivably, embedded Windows) - mainly for navigation and entertainment systems. So your car alone has more non-Linux, non-Windows operating systems than your entire collection of computers, smartphones, etc. Even your smartphone will have several subsystems with their own microcontroller running their own multi-threading OS (the cellular modem and the Wifi module come to mind). 
It is certainly true that many people would be surprised to see how many places they have embedded Linux systems running. But non-Linux embedded OS's outnumber them /massively/. (Bare metal systems outnumber RTOS systems, but the difference is decreasing.) > If the packet processing is complex and can't realistically be done in a > single thread in sufficient time using multiplexing then you'd have a thread > or process pool, you certainly wouldn't spawn a new one for each packet. Thread and process pools are certainly a common solution, and are usually more efficient than spawning new processes or threads. But efficiency is not always the main concern in finding the best solution. A good developer does not think one architecture is always "right", or another architecture is always "wrong" - you look at what is the best choice for the task in hand. |
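For what it's worth, a rough sketch of the pool idea using only the standard library; the fixed worker count, the plain std::function job queue and the drain-then-stop shutdown are arbitrary illustration choices, not a recommendation for any particular workload. Callers would submit their per-packet or per-request work with pool.submit(...) instead of spawning a new thread or process each time.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n = std::thread::hardware_concurrency())
    {
        if (n == 0) n = 2;                       // hardware_concurrency() may report 0
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~ThreadPool()
    {
        { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();       // finish queued jobs, then exit
    }

    void submit(std::function<void()> job)
    {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }

private:
    void run()
    {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                if (stop_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();                               // run the task outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};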
Ian Collins <ian-news@hotmail.com>: May 16 08:09AM +1200 >> You don't have to now we have threading as part of the standard library.... > No sane programmer uses threading if multi process can get the job done just > as well. I guess I work in an asylum then! "as well" is a big caveat. If your threads of execution share data, threads are the best option. If they don't and need decoupling, use cooperating processes. > programming mode in Windows which is historical baggage down to the fact that > Windows had to run with the primitive MMUs on the 8086 and 286 and it was > easier to implement threading and CMT than proper pre-emptive multi tasking. I never had cause to work with threads (or much else) on windows before C++11. I did write plenty of threaded code on SunOS 4. -- Ian |
Vir Campestris <vir.campestris@invalid.invalid>: May 15 09:50PM +0100 > programming mode in Windows which is historical baggage down to the fact that > Windows had to run with the primitive MMUs on the 8086 and 286 and it was > easier to implement threading and CMT than proper pre-emptive multi tasking. Yeah, right. That's why threads first turned up (says wikipedia) on an IBM mainframe back in the '60s. And weren't supported by Windows until XP (if I have it right) in about 2001. Whereas they seem to have been introduced to Linux about 5 years earlier. MS were in catchup mode here, you can't blame them if the design is crap. Nor can you blame them if you can't program them correctly - which reading your other posts is perhaps your problem. And the only Windows that ran on that damn brain-damaged '286 (1) wasn't even vaguely related to the Windows we have today. Andy -- (1) Well, it damaged my brain. I had to write a memory test for a '286 PC, which meant jumping in and out of protected mode so I could both access the high memory and use the ROM BIOS. I had a headache every day for a week. |
scott@slp53.sl.home (Scott Lurndal): May 15 10:12PM >> No sane programmer uses threading if multi process can get the job done just >> as well. >I guess I work in an asylum then! Likewise. Threads are heavily used for our work. The problems we're solving can't be easily decomposed into the trivially parallel operations suitable for multiprocess solutions. >> easier to implement threading and CMT than proper pre-emptive multi tasking. >I never had cause to work with threads (or much else) on windows before >C++11. I did write plenty of threaded code on SunOS 4. I've been writing threaded code since 1983 (SMP mainframes, massively parallel processor (MPP) unix systems, Unix/linux servers). SVR4.2MP had an interesting M:N threading model (which IIRC, Solaris also had). |
scott@slp53.sl.home (Scott Lurndal): May 15 10:16PM >back in the '60s. And weren't supported by Windows until XP (if I have >it right) in about 2001. Whereas they seem to have been introduced to >Linux about 5 years earlier. And available in SVR4.2MP internally and shipped in 1993. Digital Unix also had threads in the late 80's. We had multithreaded code (SMP operating system) on the Burroughs mainframes by 1984/5 - complete with mutex and condition variables built into the instruction set. http://vseries.lurndal.org/doku.php?id=instructions:lok |
Ian Collins <ian-news@hotmail.com>: May 16 10:22AM +1200 On 16/05/18 10:12, Scott Lurndal wrote: > SVR4.2MP had an interesting M:N threading model (which IIRC, Solaris > also had). It did, there was much celebration and dancing in the streets when it was removed! -- Ian. |
scott@slp53.sl.home (Scott Lurndal): May 15 11:08PM >> also had). >It did, there was much celebration and dancing in the streets when it >was removed! Well, context switches were pretty expensive back then and avoiding them helped performance, and you could always configure it as 1:1. I implemented LWP support on an MPP clone of SVR4.2MP and using LWPs directly from user mode was pretty efficient. |
Sky89 <Sky89@sky68.com>: May 15 07:34PM -0400 Hello... I have just corrected the name of the following library; it is now called: Scalable parallel implementation of Conjugate Gradient Linear Sparse System Solver library version 1.68. I have optimized it further and it is now a powerful library, and I will soon update the scalable C++ version. Here is the scalable library: https://sites.google.com/site/aminer68/parallel-implementation-of-conjugate-gradient-sparse-linear-system-solver Please stay tuned, the C++ version is coming soon! Thank you, Amine Moulay Ramdane |
Jorgen Grahn <grahn+nntp@snipabacken.se>: May 15 02:07PM On Sun, 2018-05-13, James Kuyper wrote: > A compiler is free to remove a struct member if it can do so without > preventing the observable behavior of your program from matching one of > the permitted behaviors allowed by the standard for your program. This sounds right. > to be true in order for that to be feasible, but one of them is, for > example, that the observable behavior of your program shouldn't > depend upon the result of applying sizeof to the struct type. This doesn't sound right, if you're saying sizeof has to give the same result as if the optimization wasn't performed. Can you elaborate? As I understand it, there's not a lot you can tell about sizeof(struct foo). For example, surely there's no guarantee that sizeof(struct foo) and sizeof(struct bar) are equal for two identical-looking struct types, or that struct foo is larger than any of its members? Not that I expect to get stellar results from such an optimization; I don't use LTO and as soon as a struct type crosses a translation unit border (like they often do) the optimizer has to obey the ABI. /Jorgen -- // Jorgen Grahn <grahn@ Oo o. . . \X/ snipabacken.se> O o . |
James Kuyper <jameskuyper@alumni.caltech.edu>: May 15 10:43AM -0400 On 05/15/2018 10:07 AM, Jorgen Grahn wrote: >> depend upon the result of applying sizeof to the struct type. > This doesn't sound right, if you're saying sizeof has to give the same > result as if the optimization wasn't performed. Can you elaborate? After I posted that question, I realized that I worded the requirement incorrectly - which is an example of why I didn't want to attempt a complete list of the relevant requirements. Strictly conforming code could use a combination of sizeof() and offsetof() to detect the fact that a member has been removed, or use the same combination to write code which won't work correctly if sizeof() and offsetof() fail to report values correctly reflecting the removal of that member. The presence of such code in a program could prevent it from producing observable behavior consistent with the requirements of the C standard if the member were removed. If so, that would prohibit the implementation from performing such a removal. |
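A trivial sketch of the kind of strictly conforming code being described (struct foo and its members are made-up names). If an implementation removed the unused member b, the values this program prints would change, so the program's observable behaviour constrains that optimization.

#include <cstddef>
#include <cstdio>

struct foo {
    int a;
    int b;     // never read or written anywhere in the program
    int c;
};

int main()
{
    // Both values depend on whether b occupies storage in foo, and both
    // are part of the program's observable output.
    std::printf("sizeof(foo)      = %zu\n", sizeof(foo));
    std::printf("offsetof(foo, c) = %zu\n", offsetof(foo, c));
}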
anastiaarlingtontaylor@gmail.com: May 14 11:18PM -0700 On Monday, May 19, 2008 at 10:13:01 AM UTC+5:30, Ian Collins wrote: > I'm sure they can, for an appropriate fee. This isn't the place to ask. > -- > Ian Collins. https://groups.google.com/d/topic/alt.usage.english/bTDbmHM-4Ck |
dianejbabin@gmail.com: May 14 10:51PM -0700 On Thursday, April 2, 2009 at 4:02:42 PM UTC+5:30, PGK wrote: > always output a zero/NULL/0? > Thanks, > Paul https://groups.google.com/d/topic/soc.culture.austria/KGNVKafevGY |
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page. To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com. |