Saturday, June 22, 2019

Digest for comp.lang.c++@googlegroups.com - 19 updates in 12 topics

G G <gdotone@gmail.com>: Jun 22 09:13AM -0700

Could you please direct me to a group where I could ask
general questions about operating systems, like:
 
i am currently reading Operating Systems Internals and Design Principles
by William Stallings
 
... question ...
 
thanks.
rick.c.hodgin@gmail.com: Jun 22 04:17PM -0700

On Saturday, June 22, 2019 at 12:13:54 PM UTC-4, G G wrote:
> by William Stallings
 
> ... question ...
 
> thanks.
 
Alt.os.development.
 
--
Rick C. Hodgin
Manfred <noname@add.invalid>: Jun 22 07:54PM +0200

On 6/19/2019 12:37 PM, Juha Nieminen wrote:
> of that function is. (It may well be more or less accurate than
> that.) Internally, whatever the system uses will get scaled
> so that the multiplier will be CLOCKS_PER_SEC.
 
In other words, CLOCKS_PER_SEC is coupled to the implementation of
clock() rather than to properties of the specific CPU, etc.
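 
For illustration (a minimal sketch, not from Juha's post): whatever tick
granularity the implementation uses internally, dividing a clock()
difference by CLOCKS_PER_SEC converts it to seconds of processor time.
 
#include <cstdio>
#include <ctime>
 
int main() {
    std::clock_t start = std::clock();
 
    // Some work to measure (a placeholder busy loop).
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sink += i;
 
    std::clock_t end = std::clock();
 
    // CLOCKS_PER_SEC rescales whatever clock() counts into seconds.
    double seconds = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    std::printf("CPU time: %.3f s\n", seconds);
}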
Horizon68 <horizon@horizon.com>: Jun 22 10:08AM -0700

Hello,
 
 
My Parallel C++ Conjugate Gradient Linear System Solver Library that
scales very well was updated to version 1.76
 
 
You can download it from:
 
https://sites.google.com/site/scalable68/scalable-parallel-c-conjugate-gradient-linear-system-solver-library
 
 
Author: Amine Moulay Ramdane
 
Description:
 
This library contains a parallel implementation of a Conjugate Gradient
dense linear system solver that is NUMA-aware and cache-aware and that
scales very well, and it also contains a parallel implementation of a
Conjugate Gradient sparse linear system solver that is cache-aware and
scales very well.
 
Sparse linear system solvers are ubiquitous in high performance
computing (HPC) and are often the most computationally intensive parts
of scientific computing codes. A few of the many applications relying on
sparse linear solvers include fusion energy simulation, space weather
simulation, climate modeling, environmental modeling, the finite element
method, and large-scale reservoir simulations used by the oil and gas
industry to enhance oil recovery.
 
Conjugate Gradient is known to converge to the exact solution in n steps
for a matrix of size n, and for this reason it was historically first
seen as a direct method. However, people later figured out that it works
really well if you just stop the iteration much earlier: you will often
get a very good approximation after far fewer than n steps. In fact, we
can analyze how fast Conjugate Gradient converges. The end result is
that Conjugate Gradient is used as an iterative method for large linear
systems today.
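 
For illustration only (this is not the library's interface, just the
textbook unpreconditioned algorithm in plain C++), a sketch of Conjugate
Gradient used as an iterative method with an early-stopping tolerance:
 
#include <cmath>
#include <cstdio>
#include <vector>
 
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;
 
static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}
 
static Vec matvec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}
 
// Solve A*x = b for a symmetric positive definite A, stopping when the
// residual is small instead of running all n steps.
Vec conjugate_gradient(const Mat& A, const Vec& b, double tol = 1e-10) {
    Vec x(b.size(), 0.0);      // initial guess x0 = 0
    Vec r = b;                 // residual r = b - A*x0 = b
    Vec p = r;                 // initial search direction
    double rs_old = dot(r, r);
 
    for (std::size_t k = 0; k < b.size(); ++k) {
        Vec Ap = matvec(A, p);
        double alpha = rs_old / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rs_new = dot(r, r);
        if (std::sqrt(rs_new) < tol)   // stop early: good approximation
            break;
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rs_new / rs_old) * p[i];
        rs_old = rs_new;
    }
    return x;
}
 
int main() {
    Mat A = {{4.0, 1.0}, {1.0, 3.0}};
    Vec b = {1.0, 2.0};
    Vec x = conjugate_gradient(A, b);
    std::printf("x = (%g, %g)\n", x[0], x[1]);  // roughly (0.0909, 0.6364)
}
 
Stopping on the residual norm rather than after n steps is exactly what
makes Conjugate Gradient practical as an iterative method for large
systems.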
 
Please download the zip file and read the readme file inside the zip to
know how to use it.
 
Language: GNU C++ and Visual C++ and C++Builder
 
Operating Systems: Windows, Linux, Unix and Mac OS X (on x86)
 
 
 
 
Thank you,
Amine Moulay Ramdane.
Horizon68 <horizon@horizon.com>: Jun 22 09:33AM -0700

Hello...
 
 
About Message Passing Process Communication Model and Shared Memory
Process Communication Model:
 
 
An advantage of the shared memory model is that communication is faster
than with the message passing model on the same machine.
 
However, the shared memory model may create problems, such as
synchronization and memory protection, that need to be addressed.
 
Message passing's major flaw is the inversion of control: it is the
moral equivalent of goto in unstructured programming (it's about time
somebody said that message passing is considered harmful).
 
Also, some research shows that the total effort to write an MPI
application is significantly higher than that required to write a
shared-memory version of it.
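 
For illustration, a hedged sketch contrasting the two models within a
single process using standard C++ threads (not MPI or OS processes): a
shared counter protected by a mutex, versus the same work done by
pushing messages through a queue. The names and counts are made up for
the example.
 
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
 
// Shared-memory style: both threads touch the same variable, so they
// must synchronize access to it themselves.
void shared_memory_demo() {
    long counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);
            ++counter;
        }
    };
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    std::printf("shared-memory counter: %ld\n", counter);
}
 
// Message-passing style: the producer never touches the consumer's
// state; it only sends values through a queue, at the cost of copying
// and queue management overhead.
void message_passing_demo() {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    long sum = 0;
 
    std::thread producer([&] {
        for (int i = 0; i < 100000; ++i) {
            { std::lock_guard<std::mutex> lock(m); q.push(1); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lock(m); done = true; }
        cv.notify_one();
    });
 
    std::thread consumer([&] {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            cv.wait(lock, [&] { return !q.empty() || done; });
            while (!q.empty()) { sum += q.front(); q.pop(); }
            if (done && q.empty()) break;
        }
    });
 
    producer.join(); consumer.join();
    std::printf("message-passing sum: %ld\n", sum);
}
 
int main() {
    shared_memory_demo();
    message_passing_demo();
}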
 
 
 
Thank you,
Amine Moulay Ramdane.
Szyk Cech <szykcech@spoko.pl>: Jun 22 05:28AM +0200

On 21.06.2019 at 23:13, Mr Flibble wrote:
> (humans evolved, fact)
 
Monkeys did not evolve into humans! Humans are not similar to any monkey
(and vice versa). Humans were created by Gods (like any other species).
Humans and monkeys have different races which are mixable, but this is
not evolution; this is variety within a species.
Show me some monkey which appears more like a human, which does things
like a human (especially performing complicated daily tasks), and which
builds a civilization.
gazelle@shell.xmission.com (Kenny McCormack): Jun 22 02:03PM

In article <yfhPE.4284$CV5.558@fx02.fr7>, Szyk Cech <szykcech@spoko.pl> wrote:
...
>Show me some monkey which appears more like a human and which does
>things like a human
 
George W. Bush?
 
--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/Rorschach
Tim Rentsch <tr.17687@z991.linuxsc.com>: Jun 22 04:14AM -0700

> such worries, but for example the current x86-64 actually supports
> only 48-bit virtual address space, which is only 65,000 times more -
> not so infinite as it first feels.
 
Point taken.
 
I still think 1MB of stack space is on the low side, by at least
a factor of 2, even for 32-bit environments. For 64-bit
environments, the default could be 1GB, and even 2500 threads
would use only 1% of the (usable) address space.
David Brown <david.brown@hesbynett.no>: Jun 22 02:27PM +0200

On 22/06/2019 13:14, Tim Rentsch wrote:
> a factor of 2, even for 32-bit environments. For 64-bit
> environments, the default could be 1GB, and even 2500 threads
> would use only 1% of the (usable) address space.
 
There is a cost in allocating virtual space to a stack, even if it is
not used (and therefore never mapped to physical memory). Since very
few programs ever need anything like 1 MB of stack space, it is not a
cost worth paying. It would make more sense to have a smaller stack
space by default (1 MB is probably much bigger than needed, especially
for threads other than the main thread) and automatically grow the stack
when needed. (I don't know if this is done at the moment - for the
small amount of PC programming I have done with C and C++, stack space
is not an issue.)
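 
For what it's worth, whatever the default is, the per-thread stack size
can be requested explicitly. A minimal sketch using the POSIX pthread
API (std::thread itself offers no portable knob for this; error handling
abbreviated, compile with -pthread):
 
#include <cstdio>
#include <pthread.h>
 
void* worker(void*) {
    std::puts("worker running on a thread with an explicit stack size");
    return nullptr;
}
 
int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
 
    // Ask for a 256 KiB stack instead of the platform default (often
    // 1-8 MB). PTHREAD_STACK_MIN is the lower bound on some systems.
    if (pthread_attr_setstacksize(&attr, 256 * 1024) != 0)
        std::puts("requested stack size rejected");
 
    pthread_t tid;
    pthread_create(&tid, &attr, worker, nullptr);
    pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
}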
Tim Rentsch <tr.17687@z991.linuxsc.com>: Jun 22 06:09AM -0700

Tiib writes:
 
 
> Result is that code like "delete new int;" won't be optimized out.
> For same reason usage of std::string won't be optimized out where
> usage of std::string_view will be.
 
This has been an interesting thread. Out of curiosity I tried
some examples. Here are two, with some amusing results:
 
#include <iostream>
#include <string>
 
int
one( int x ){
    { std::string s; }
    return x ? one( x-1 ) : 7;
}
 
int
two( int x ){
    delete new int;
    return x ? two( x-1 ) : 5;
}
 
Compiling with clang (-O3), function two() compiles to just a
straight return of the value 5, and function one() compiles to
several instructions including a conditional branch and a
recursive call (bad! bad compiler!). So the 'delete new int;'
got optimized away, but the unused variable didn't.
 
Compiling with g++ (-O3), function one() compiles to just a
straight return of the value 7, and function two() compiles to a
loop with an allocation and deallocation each time around before
eventually always returning the value 5. So the unused variable
got optimized away, but the 'delete new int;' didn't (which didn't
prevent the tail call being optimized).
Tim Rentsch <tr.17687@z991.linuxsc.com>: Jun 22 05:13AM -0700

> the two representable values immediately preceding and immediately
> following that value, and it does not require that the choice be
> consistent.
 
I don't think this is right. The people who did IEEE floating
point are keen on reproducibility, and it seems unlikely they
would allow this much slop in the results. Furthermore IEEE 754
spells out in detail several different rounding modes, and AFAICT
implementations of IEEE 754 are required to support all of them.
Supporting statements can be found in ISO C N1570, for example
footnote 204 in 7.6 p1
 
This header is designed to support the floating-point
exception status flags and directed-rounding control modes
required by IEC 60559 [...]
 
There would be little point to requiring, for example, both
round-to-nearest and round-towards-zero, if they could produce
identical results.
 
All the evidence I have seen suggests that the IEEE-specified
directed-rounding modes are both exact and required. I don't have
any citations from the requisite ISO document (as I have not got
a copy from which to give one), but if it were to allow the sort
of inexact results described then there should be some indication
of that amongst the sea of publicly available web documents,
and I haven't found any.
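 
As a small illustration of those directed-rounding control modes, here
is a sketch using the standard <cfenv> interface. Strictly speaking the
program should also tell the compiler it accesses the floating-point
environment (FENV_ACCESS), and the observable effect assumes the
division is not constant-folded, hence the volatile operands.
 
#include <cfenv>
#include <cstdio>
 
int main() {
    volatile double one = 1.0, three = 3.0;
 
    std::fesetround(FE_DOWNWARD);
    double down = one / three;
 
    std::fesetround(FE_UPWARD);
    double up = one / three;
 
    std::fesetround(FE_TONEAREST);   // restore the default
 
    // The two directed modes bracket the exact value 1/3, so they must
    // differ in the last place; identical results would defeat the
    // point of having both modes.
    std::printf("down = %.17g\nup   = %.17g\n", down, up);
}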
Tim Rentsch <tr.17687@z991.linuxsc.com>: Jun 22 04:30AM -0700

>> p[2] = mul; /* address of mul() */
>> p[3] = div; /* address of div() */
>> [...]
 
Normally these initializations would be done as part of the
declaration:
 
int (*p[4])( int x, int y ) = { sum, subtract, mul, div };
 
>> To call one of those function pointers:
 
>> result = (*p[op]) (i, j); // op being the index of one of the four functions
 
Or just
 
result = p[op]( i, j );
 
 
> int main() {
> // ...
> }
 
Since none of the lambdas capture any environment, this can be
shortened to:
 
int (*p[])( int, int ) = {
    [](int x, int y) { return x + y; },
    [](int x, int y) { return x - y; },
    [](int x, int y) { return x * y; },
    [](int x, int y) { return x / y; },
};
 
int main() {
    // ...
}
Rosario19 <Ros@invalid.invalid>: Jun 22 09:30AM +0200

On Fri, 21 Jun 2019 09:09:37 +1200, Ian Collins wrote:
 
 
>> I feel this statement you've made needs addressing.
 
>It does, by you and your kind accepting that your beliefs are just that,
>beliefs, not fact.
 
for the one who has faith, these are facts, just as what the Bible says
are facts...
 
if one goes against the commandments of the Bible, or its prescriptions
(even on the subject of homosexuality, so one is not to be homosexual),
one goes to hell
 
we do not know enough about the creation to object to anything, or
worse, to change commandments or laws
 
if someone does not believe, I cannot impose anything, but there is the
chance that the Bible is right
 
>thin veneer of religion. You have no clue about the harm your kind's
>baseless drivel can have on young people trying to come to terms with
>who they are. I do.
 
this is not hate, this is an alarm bell to show the errors;
one has only to say thank you for that
 
David Brown <david.brown@hesbynett.no>: Jun 22 12:56PM +0200

On 21/06/2019 22:31, Keith Thompson wrote:
>> all alternatives.
 
> How is it ineffectual? I don't see Rick's posts, which makes
> comp.lang.c a much more pleasant place for me.
 
I meant ineffectual for the group. Yes, killfiles work for the
individual using them (they work far better when combined with "kill
thread" or "kill sub-thread" features). But they are totally
ineffective at keeping bad posts, or bad posters, out of the group.
 
First off, you need an overwhelming majority of other posters to ignore
the unwanted poster or posts. As long as there is /one/ muppet in the
group who says "I am looking forward to CAlive", the CAlive posts will
continue. Even when all the replies to the bizarre pseudo-science posts
are against the poster, it does not stop him.
 
And even if /everybody/ ignored such posters entirely, they would still
keep posting. Pretty much everyone has always ignored "Amine"
(currently posting as "Horizon68"), yet he has single-handedly destroyed
several Usenet groups.
 
Ignoring bad posters or posts simply does not work. It is often the
least bad option, but it does not work. Nothing short of a
straightjacket will stop certain people - and fortunately that is not
within our power.
 
Of course, that does not mean there is anything wrong with using
killfiles to make c.l.c. and c.l.c++ more pleasant for yourself. Just
remember to add "kill sub-thread" options to it, to avoid seeing replies
to posts you want to skip.
Tim Rentsch <tr.17687@z991.linuxsc.com>: Jun 22 03:34AM -0700


> but then too, isn't it, well, for the situation you gave, a plus in
> C++, because objects, data members and member functions could be
> made private.
 
My point was only about the properties of omitted tags,
nothing more.
rick.c.hodgin@gmail.com: Jun 21 04:26PM -0700

> here in this Usenet group.
 
> I will be praying for about an hour. I will also fast that day be-
> fore I pray.
 
 
If you are reading this, you are being prayed for now.
 
--
Rick C. Hodgin
rick.c.hodgin@gmail.com: Jun 21 06:39PM -0700


> > I will be praying for about an hour. I will also fast that day be-
> > fore I pray.
 
> If you are reading this, you are being prayed for now.
 
 
If you are reading this, you are one of 298 names I prayed for
over the past two hours.
 
May the Lord guide you and keep you always safe and secure.
 
--
Rick C. Hodgin
Horizon68 <horizon@horizon.com>: Jun 21 03:28PM -0700

Hello...
 
 
About composability of lock-based systems..
 
 
Design your systems to be composable. Among the more galling claims of
the detractors of lock-based systems is the notion that they are somehow
uncomposable: "Locks and condition variables do not support modular
programming," reads one typically brazen claim, "building large programs
by gluing together smaller programs[:] locks make this impossible."9 The
claim, of course, is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and operating
systems into larger systems that remain entirely unaware of lower-level
locking.
 
There are two ways to make lock-based systems completely composable, and
each has its own place. First (and most obviously), one can make locking
entirely internal to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel locks held;
the locks used to implement the system itself are entirely behind the
system call interface that constitutes the interface to the system. More
generally, this model can work whenever a crisp interface exists between
software components: as long as control flow is never returned to the
caller with locks held, the subsystem will remain composable.
 
Second (and perhaps counterintuitively), one can achieve concurrency and
composability by having no locks whatsoever. In this case, there must be
no global subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the subsystem to
assure that they do not access their instance in parallel. By leaving
locking up to the client of the subsystem, the subsystem itself can be
used concurrently by different subsystems and in different contexts. A
concrete example of this is the AVL tree implementation used extensively
in the Solaris kernel. As with any balanced binary tree, the
implementation is sufficiently complex to merit componentization, but by
not having any global state, the implementation may be used concurrently
by disjoint subsystems—the only constraint is that manipulation of a
single AVL tree instance must be serialized.
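 
A hedged sketch of that second approach (an illustration only, not the
Solaris AVL code): a component that keeps purely per-instance state and
takes no locks of its own, so a client that shares an instance wraps it
in its own lock, while a client with a private instance per thread needs
no locking at all.
 
#include <mutex>
#include <set>
 
// The "subsystem": a sorted container with purely per-instance state.
class IntervalSet {
public:
    void insert(int v) { values_.insert(v); }    // no locking by design
    bool contains(int v) const { return values_.count(v) != 0; }
private:
    std::set<int> values_;                       // no globals, no static data
};
 
// One client chooses to share an instance between threads and therefore
// wraps every call in its own lock. Another client that keeps a private
// instance per thread would use IntervalSet directly, with no lock. The
// component composes with both policies because it imposes neither.
class SharedIntervalSet {
public:
    void insert(int v) {
        std::lock_guard<std::mutex> lock(m_);
        set_.insert(v);
    }
    bool contains(int v) {
        std::lock_guard<std::mutex> lock(m_);
        return set_.contains(v);
    }
private:
    std::mutex m_;
    IntervalSet set_;
};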
 
Read more here:
 
https://queue.acm.org/detail.cfm?id=1454462
 
 
 
Thank you,
Amine Moulay Ramdane.
"Chris M. Thomasson" <invalid_chris_thomasson_invalid@invalid.com>: Jun 21 03:37PM -0700

On 6/21/2019 3:28 PM, Horizon68 wrote:
> Hello...
 
> About composability of lock-based systems..
 
Fwiw, an older technique, wherein threads create sorted arrays of
indices into a larger static mutex pool, might be of some sort of
service.
 
https://groups.google.com/forum/#!original/comp.arch/QVl3c9vVDj0/nJy6qu-RAAAJ
 
This can be tweaked. A thread would create an array of local indices to
be locked by hashing the addresses of the objects into the global lock
array, sort it, remove all duplicates, and then take each sorted unique
lock in order. Simple. :^)
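 
Something along these lines, as a rough sketch (the names and the pool
size are made up; this is not Chris's original code): hash the object
addresses into a fixed global mutex pool, sort the indices, drop
duplicates, then take the locks in ascending order so every thread uses
the same global order.
 
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <initializer_list>
#include <mutex>
#include <vector>
 
constexpr std::size_t kPoolSize = 64;
static std::mutex g_lock_pool[kPoolSize];
 
static std::size_t index_of(const void* p) {
    // Simple address hash; any reasonable hash works here.
    return (reinterpret_cast<std::uintptr_t>(p) >> 4) % kPoolSize;
}
 
class MultiLock {
public:
    explicit MultiLock(std::initializer_list<const void*> objects) {
        for (const void* p : objects)
            indices_.push_back(index_of(p));
        std::sort(indices_.begin(), indices_.end());
        indices_.erase(std::unique(indices_.begin(), indices_.end()),
                       indices_.end());
        for (std::size_t i : indices_)
            g_lock_pool[i].lock();     // always in ascending index order
    }
    ~MultiLock() {
        for (auto it = indices_.rbegin(); it != indices_.rend(); ++it)
            g_lock_pool[*it].unlock();
    }
private:
    std::vector<std::size_t> indices_;
};
 
// Usage: lock everything a transfer between two accounts touches. The
// global ordering avoids deadlock; dropping duplicates avoids
// self-deadlock when two addresses hash to the same pool slot.
struct Account { double balance; };
 
void transfer(Account& from, Account& to, double amount) {
    MultiLock guard({&from, &to});
    from.balance -= amount;
    to.balance += amount;
}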
