Wednesday, May 16, 2018

Digest for comp.lang.c++@googlegroups.com - 14 updates in 3 topics

Sky89 <Sky89@sky68.com>: May 16 03:49PM -0400

Hello...
 
 
Here is how to use my Delphi projects with C++Builder:
 
Mixing Delphi and C++ With David Millington:
 
https://www.youtube.com/watch?v=6f5UBL0bQ9U
 
 
You can find all my Delphi projects here:
 
https://sites.google.com/site/aminer68/
 
 
 
Thank you,
Amine Moulay Ramdane.
Sky89 <Sky89@sky68.com>: May 16 07:26PM -0400

Hello..
 
The Scalable Parallel C++ Conjugate Gradient Linear System Solver
Library for Windows and Linux has been updated to version 1.70.
It has been further optimized, and this version is more stable,
more scalable, and very fast:
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a scalable parallel implementation of a
Conjugate Gradient dense linear system solver that is NUMA-aware
and cache-aware, and it also contains a scalable parallel
implementation of a Conjugate Gradient sparse linear system
solver that is cache-aware.
 
Conjugate Gradient is known to converge to the exact solution in n steps
for a matrix of size n, and for this reason it was historically seen as
a direct method. However, it was later found to work very well if you
simply stop the iteration much earlier - often you get a very good
approximation after far fewer than n steps. In fact, we can analyze how
fast Conjugate Gradient converges. The end result is that Conjugate
Gradient is used today as an iterative method for large linear systems.
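
As an illustration of the method (not the library's actual code), here
is a minimal textbook sketch of the unpreconditioned, serial Conjugate
Gradient iteration with early stopping; cg() and its tolerance
parameter are made up for the example:

#include <cmath>
#include <cstddef>
#include <vector>

// Minimal serial Conjugate Gradient sketch: solves A*x = b for a
// symmetric positive-definite matrix A, stopping once the residual
// norm drops below tol (usually long before n iterations).
std::vector<double> cg(const std::vector<std::vector<double>>& A,
                       const std::vector<double>& b, double tol = 1e-10)
{
    const std::size_t n = b.size();
    std::vector<double> x(n, 0.0), r = b, p = b, Ap(n);

    auto dot = [n](const std::vector<double>& u,
                   const std::vector<double>& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += u[i] * v[i];
        return s;
    };

    double rr = dot(r, r);
    for (std::size_t k = 0; k < n && std::sqrt(rr) > tol; ++k) {
        for (std::size_t i = 0; i < n; ++i) {            // Ap = A * p
            Ap[i] = 0.0;
            for (std::size_t j = 0; j < n; ++j) Ap[i] += A[i][j] * p[j];
        }
        const double alpha = rr / dot(p, Ap);            // step length
        for (std::size_t i = 0; i < n; ++i) x[i] += alpha * p[i];
        for (std::size_t i = 0; i < n; ++i) r[i] -= alpha * Ap[i];
        const double rr_new = dot(r, r);
        for (std::size_t i = 0; i < n; ++i)      // new search direction
            p[i] = r[i] + (rr_new / rr) * p[i];
        rr = rr_new;
    }
    return x;
}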
 
 
You can download it from:
 
https://sites.google.com/site/aminer68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
Please download the zip file and read the readme file inside the
zip to know how to use it.
 
Language: GNU C++, Visual C++, and C++Builder
 
Operating Systems: Windows, Linux, Unix, and Mac OS X (x86)
 
 
 
 
Thank you,
Amine Moulay Ramdane.
scott@slp53.sl.home (Scott Lurndal): May 16 02:07PM


>Ah, well, that was before my time. I didn't start programming until
>about 1980, and as an eight-year-old I didn't have much access to Unix
>systems :-(
 
1974 for me (at thirteen): Basic on a B5500, first assembler on a PDP-8
in 1976, first high-level systems language (SPL) on an HP-3000 in
1977, and first C on Unix V6 in 1979.
boltar@cylonHQ.com: May 16 03:03PM

On Wed, 16 May 2018 15:56:14 +0200
 
>But it is easy to deadlock processes using queues - if process P1 waits
>for data coming in on Q1 before sending out on Q2, and process P2 waits
>for data coming in on Q2 before sending out on Q1. Queues (pipes,
 
Except you'd (almost) never have a process sitting in a read() waiting for
input or stuck in a write() waiting to output; the pipe or socket descriptor
would be multiplexed in select() or poll().
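 
A minimal sketch of what such a multiplexed loop looks like with
poll(); the descriptors fd_q1 and fd_q2 and the function name are
hypothetical:

#include <poll.h>
#include <unistd.h>

// Sketch: instead of blocking in read(), multiplex two pipe/socket
// descriptors and only read whichever one actually has data ready.
void event_loop(int fd_q1, int fd_q2)    // fd_q1/fd_q2: hypothetical
{
    struct pollfd fds[2] = { { fd_q1, POLLIN, 0 },
                             { fd_q2, POLLIN, 0 } };
    char buf[4096];

    for (;;) {
        if (poll(fds, 2, -1) < 0)      // block until something is ready
            return;
        for (int i = 0; i < 2; ++i) {
            if (fds[i].revents & POLLIN) {
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) return;    // EOF or error: peer went away
                /* ... handle n bytes from fds[i].fd ... */
            }
        }
    }
}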
 
>> the early 90s I may be prejudiced but I am certainly not ignorant on the
>topic.
 
>Your high prejudice against threads suggests you are ignorant, or at
 
No, I spent enough time bug-fixing huge threaded applications with endless
race conditions that were often unrepeatable, and the occasional deadlock,
to come to the conclusion that threads are more trouble than they're worth
unless each thread generally keeps to itself and/or doesn't make many calls
that could cause issues.
 
>> which ran discrete programs, not OS's.
 
>You make that sound like something from long ago. Small RTOS's were
 
Mid 90s.
 
>> As the rapid rise of inefficient bloatware libraries that cater to people
>> with limited ability demonstrates.
 
>Tell me, do you take your car when you go to the shops? Isn't it a bit
 
No. I live 300m away from them.
 
>It's the same in development. Sometimes the emphasis should be on
>developer time, sometimes it should be on processor run time. Usually,
>it is somewhere in between.
 
True, but the trend seems to be increasingly on speed of development, agile
etc, rather than efficiency and quality of code. The pendulum has swung way
too far in favour of lego brick coding, which is why we have multi-core CPUs
on multi-CPU machines being brought to their knees running applications that
could happily run on the hardware of 20 years ago.
boltar@cylonHQ.com: May 16 03:16PM

On Wed, 16 May 2018 13:57:09 GMT
>of unix within AT&T (PWB/Unix, IIRC).
 
>However, there are now posix versions of both that fit better into
>the unix namespace philosophy.
 
To be fair, I'd forgotten about them. Yes, they're more powerful with a saner
interface, but the message queues require a non-default filesystem mount in
Linux.
 
>>can read or not read at your leisure. Not reading them has no adverse effects
>>upon the application wrt to blocking etc.
 
>Writes to pipes and fifos may block if the reader doesn't read.
 
That's why the select() function has a write mask for you to check if it's
writable first. It's probably not infallible, but since most pipe interactions
will be synchronous it's highly unlikely one program will fill up the pipe
buffer unless the reader has hung, and in that case it won't matter anyway.
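 
For example, a rough sketch of checking the write mask with select()
before writing; try_write() and pipe_fd are made-up names:

#include <sys/select.h>
#include <unistd.h>
#include <cstddef>

// Sketch: use select()'s write mask to ask whether the pipe can
// accept data right now, instead of risking a blocking write().
bool try_write(int pipe_fd, const char* data, std::size_t len)
{
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(pipe_fd, &wfds);

    struct timeval tv = { 0, 0 };        // zero timeout: just poll
    int rc = select(pipe_fd + 1, nullptr, &wfds, nullptr, &tv);
    if (rc > 0 && FD_ISSET(pipe_fd, &wfds))
        return write(pipe_fd, data, len) == (ssize_t)len;
    return false;                  // buffer full (or error): try later
}
 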
boltar@cylonHQ.com: May 16 08:50AM

On Tue, 15 May 2018 21:50:33 +0100
 
>Yeah, right.
 
>That's why threads first turned up (says wikipedia) on an IBM mainframe
>back in the '60s. And weren't supported by Windows until XP (if I have
 
Really? How do you think all the various GUI events were run in the background
then while your application ran its main thread? Or do you remember having
to - for example - explicitly program the cursor to flash in a text box?
Robert Wessel <robertwessel2@yahoo.com>: May 16 04:00AM -0500

On Wed, 16 May 2018 10:27:13 +0200, David Brown
 
>So Linux certainly /was/ around at that time, and already popular in
>some environments (like HPC), but the "Linux thread wars" were a little
>later.
 
 
No, the debate I was referring to preceded that. This was before
widespread implementation of threads in Unix, and mostly (completely?)
before Linux. IIRC, from the mid-eighties to the early nineties, give
or take. A large portion of the Unix community took the position that
"threads are evil, processes are all you need". Suggesting that Unix
*should* have threads often provoked a rude response.
 
There were certainly exceptions, mainly from *nix vendors who made big
(MP) machines.
David Brown <david.brown@hesbynett.no>: May 16 03:56PM +0200


> Well they are, because pipes and sockets are buffered interfaces that you
> can read or not read at your leisure. Not reading them has no adverse effects
> upon the application wrt to blocking etc.
 
Of course it does.
 
It is certainly conceptually easier to work with queues of various sorts
when synchronising data between parallel code streams. One of the
important uses of locks and mutexes is to implement queues, fifos,
message buffers, etc.
 
But it is easy to deadlock processes using queues - if process P1 waits
for data coming in on Q1 before sending out on Q2, and process P2 waits
for data coming in on Q2 before sending out on Q1. Queues (pipes,
sockets, whatever) can be used to simulate locks and mutexes, and
therefore can be used to get the same deadlocking. Pretty much /any/
synchronisation method can be used to implement any other one - they are
therefore equivalent in terms of the things you can do with them,
including the mistakes you can make. But they differ significantly in
terms of efficiency for various tasks, convenience, and the ease of
getting particular tasks right, and the difficulty of getting particular
tasks wrong.
 
So you /can/ deadlock with pipes - you are just a good deal less likely
to do it accidentally. This is also why message passing and queues are
often a good way to handle inter-thread communication, not just
inter-process communication.
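 
A deliberately broken sketch of that P1/P2 scenario with two pipes, to
make the deadlock concrete (every read blocks because the matching
write never happens):

#include <unistd.h>

// Deliberately wrong: each process does a blocking read on its
// incoming pipe before it ever writes to its outgoing pipe, so
// both sit in read() forever - the P1/P2 deadlock described above.
int main()
{
    int q1[2], q2[2];              // [0] = read end, [1] = write end
    if (pipe(q1) < 0 || pipe(q2) < 0) return 1;

    char c;
    if (fork() == 0) {             // P2: wait on Q2, then send on Q1
        read(q2[0], &c, 1);        // blocks: P1 never writes to Q2
        write(q1[1], "x", 1);
        return 0;
    }
    // P1: wait on Q1, then send on Q2
    read(q1[0], &c, 1);            // blocks: P2 never writes to Q1
    write(q2[1], "x", 1);
    return 0;
}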
 
 
>> scalability, flexibility, etc., rather than ignorance or prejudice.
 
> Given that I've been writing system and application software for unix since
> the early 90s I may be prejudiced but I am certainly not ignorant on the topic.
 
Your high prejudice against threads suggests you are ignorant, or at
least inexperienced, in using them. Of course there is sense in using
what you know - if a particular architecture works for you and your
needs, then there may not be any point in looking at other solutions.
Just don't make the mistake of thinking that your way is the /only/ way,
or that everyone else is wrong.
 
>> experience, but I'm guessing it is pretty low in this area.
 
> I worked on some ticketing machines once which had multiple CPUs, all of
> which ran discrete programs, not OS's.
 
You make that sound like something from long ago. Small RTOS's were
less used in older times - they need a bit more processing power and
memory than you used to get with the 8051 and similar devices. But now
that you can get ARM-core microcontrollers for $0.30, developers have
the freedom to use them for flexibility.
 
 
>> Your wild guesses don't count for much.
 
> Well since there's no way of proving it one way or the other there's little
> point arguing the toss about it.
 
I invite you to google a bit. You probably won't get accurate numbers
(I couldn't find any), but you will see enough to get an idea of the trends.
 
>> efficiency is not always the main concern in finding the best solution.
 
> As the rapid rise of inefficient bloatware libraries that cater to people
> with limited ability demonstrates.
 
Tell me, do you take your car when you go to the shops? Isn't it a bit
inefficient to move a couple of tons of metal when all you really want
to do is move a few pints of milk and a loaf of bread? You do it
because it is far more convenient to the user. When you want to move 20
tons of milk to deliver to the shop, on the other hand, you use a
different ratio for better efficiency.
 
It's the same in development. Sometimes the emphasis should be on
developer time, sometimes it should be on processor run time. Usually,
it is somewhere in between.
 
>> another architecture is always "wrong" - you look at what is the best
>> choice for the task in hand.
 
> Can't disagree there.
 
Ah, agreement at last!
scott@slp53.sl.home (Scott Lurndal): May 16 01:57PM


>Shared memory certainly has its uses, but the rest of Sys V IPC is rarely
>used in my experience any more. It was always clunky and didn't play well
>with the rest of the posix subsystems.
 
Consider the history. shmat/semops were introduced from a branch
of unix within AT&T (PWB/Unix, IIRC).
 
However, there are now posix versions of both that fit better into
the unix namespace philosophy.
 
 
>Well they are, because pipes and sockets are buffered interfaces that you
>can read or not read at your leisure. Not reading them has no adverse effects
>upon the application wrt to blocking etc.
 
Writes to pipes and fifos may block if the reader doesn't read.
scott@slp53.sl.home (Scott Lurndal): May 16 01:59PM


>Using non-blocking IO means polling which is either a resource hog or
>causes slower response times. So it is suited only for some special
>situations.
 
One can also use lio_listio, aio_read, aio_write, et alia. I used
lio_listio extensively (with list sizes exceeding 100 buffers at times)
in a port of Oracle to an MPP machine.
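 
For reference, a minimal single-request sketch with POSIX AIO;
lio_listio() submits an array of aiocb control blocks like the one
below in a single call. The file name is hypothetical and error
checking is abbreviated:

#include <aio.h>
#include <fcntl.h>
#include <cstdio>
#include <cstring>

// Sketch: one asynchronous read with POSIX AIO. The read is queued
// and the caller is free to do other work before collecting it.
int main()
{
    int fd = open("data.bin", O_RDONLY);   // "data.bin": hypothetical
    if (fd < 0) return 1;

    char buf[4096];
    struct aiocb cb;
    std::memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) return 1;       // queue the read and continue

    const struct aiocb* list[1] = { &cb };
    aio_suspend(list, 1, nullptr);         // wait until it completes
    std::printf("read %zd bytes\n", aio_return(&cb));
    return 0;
}
 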
David Brown <david.brown@hesbynett.no>: May 16 03:59PM +0200

> other. God knows what OS it ran but it allowed us to program Basic on the
> terminals. Had a habit of crashing quite often IIRC so the teacher used to
> sit next to the machine ready to press the reset button.
 
The first time one of my schools got a computer, I was in charge of it -
aged 12. I knew far more about it than any of the teachers.
scott@slp53.sl.home (Scott Lurndal): May 16 02:03PM

>widespread implementation of threads in Unix, and mostly (completely?)
>before Linux. IIRC, from the mid eighties to the early nineties, give
>or take.
 
Digital Unix and SVR4/MP both had threading models
in the early 80's/90's. About the same time, POSIX 1003.4
was being developed (called realtime extensions, which included pthreads).
 
> A large portion of the Unix community took the position that
>"threads are evil, processes are all you need".
 
I was there, deep in the standards process in the early
90's and what you say does not ring true. There may have
been such voices, but they did not encompass a "large portion of the
unix community".
 
 
>*should* have threads often provoked a rude response.
 
>There were certainly exceptions, mainly from *nix vendors who made big
>(MP) machines.
 
Well, that was me :-).
Paavo Helde <myfirstname@osa.pri.ee>: May 16 09:07PM +0300


> Except you'd (almost) never have a process sitting in a read() waiting for
> input or stuck in a write() waiting to output, the pipe or socket descriptor
> would be multiplexed in select() or poll().
 
So the process would sit in select() indefinitely. This would be the
same deadlock. But to be fair, I personally have not seen a deadlock
with queues, either in-process mutex-protected ones or inter-process
pipes. Probably this is because with queues one typically has pretty
well-defined producers and consumers which helps to avoid circular
dependencies.
 
Deadlocks appear if threads or processes lock multiple data structures
at the same time in a different order. For different processes this
means locking *external* data structures, like flock() on files. Threads
in the same process can also lock internal data structures in memory,
and as this is easier and done with less thought, there are more chances
to get it wrong. So I kind of agree with you that a multi-process
approach would be less prone to deadlocks in general (at the expense of
duplicating all mutable data structures and needing more tedious ways to
keep them synchronized).
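 
The classic in-process case, as a sketch: two threads taking the same
two mutexes in opposite order, and the usual fix of acquiring them
together (std::scoped_lock, C++17) or agreeing on one order:

#include <mutex>
#include <thread>

std::mutex m1, m2;

// Classic ordering deadlock: t1 holds m1 and wants m2 while t2
// holds m2 and wants m1, so both can wait forever.
void t1_bad() {
    std::lock_guard<std::mutex> a(m1);
    std::lock_guard<std::mutex> b(m2);   // t2 takes them in reverse
}
void t2_bad() {
    std::lock_guard<std::mutex> a(m2);
    std::lock_guard<std::mutex> b(m1);
}

// The fix: take both locks in one shot, or agree on one order.
void t1_good() { std::scoped_lock lock(m1, m2); }
void t2_good() { std::scoped_lock lock(m1, m2); }

int main()
{
    std::thread a(t1_good), b(t2_good);  // swap in the *_bad versions
    a.join();                            // to (sometimes) see the hang
    b.join();
}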
 
> come to the conclusion that threads are more trouble that they're worth unless
> each thread generally keeps to itself and/or doesn't make many calls that could
> cause issues.
 
C++ is certainly a language where one can mess up in horrific ways, and
lots of things which can be done should not be done. That's nothing new,
except that multithreading bugs are often harder to catch and fix.
 
> to far in favour of lego brick coding which is why we have multi core CPUs on
> multi CPU machines being brought to their knees running applications that
> could happily run on the hardware of 20 years ago.
 
The hardware today costs less than the hardware of 20 years ago, so this
is pretty much expected. The software expands to fill all the available
memory and cores; that's just economics. And to fill all the cores, the
current solution is to be multi-threaded. Lots of C++ code is in
libraries running in arbitrary processes, and it would not be safe to
fork the process without an immediate exec anyway, as soon as the
original process might ever have more than one thread.
 
So, multithreading is the current way to go, like it or not. To keep it
manageable it is indeed a good idea to have very independent threads and
only use message queues for communicating with them. This basically
reproduces the multi-process pipe solution in a single process, a bit
more efficiently and a bit more prone to errors.
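 
A minimal sketch of such an in-process message queue (one mutex, one
condition variable); MsgQueue is a made-up name, reproducing the pipe
pattern between threads:

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal in-process message queue: one mutex, one condition
// variable, blocking pop - the single-process analogue of a pipe.
template <typename T>
class MsgQueue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {                      // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

int main()
{
    MsgQueue<std::string> q;
    std::thread worker([&] { std::printf("got: %s\n", q.pop().c_str()); });
    q.push("hello");
    worker.join();
}
 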
Ian Collins <ian-news@hotmail.com>: May 16 10:26PM +1200

On 16/05/18 21:00, Robert Wessel wrote:
> or take. A large portion of the Unix community took the position that
> "threads are evil, processes are all you need". Suggesting that Unix
> *should* have threads often provoked a rude response.
 
Indeed! Some of the first C++ I wrote (early 90s) was a "thread"
wrapper library around SunOS LWPs. It provided a better platform for
modeling our embedded RTOS than earlier attempts using processes. That
was probably where I got the thread bug.
 
--
Ian.