Friday, September 13, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Soviet_Mario <SovietMario@CCCP.MIR>: Sep 13 07:10PM +0200

I am trying to firm up my understanding of "ownership" of
dynamically allocated memory (in C-style libraries, in the
Linux world).
 
 
I'm reading the man page for SCANDIR (a high-level
directory-scanning function which in "one shot" takes care
of returning everything, hiding the loops, recursion and all).
 
It says it internally allocates (using MALLOC) an array of
pointers to <dirent> structures, or rather the array itself
and maybe the pointed-to entries contiguously (I'm not sure,
because I see the dirent entry preallocates a static buffer
for names MAXPATH wide, so in principle it is made up of
chunks all of the same size which, again in principle, could
be malloc-ed in one contiguous block and not necessarily as
a sparse jagged array).
 
The man page explicitly recommends freeing the memory after
use, as SCANDIR no longer cares about it.
 
Now I'd like to understand how the Linux memory manager
handles memory that was allocated and never released, at
program TERMINATION.
 
I dare to hope that memory carries some ownership
information with it (if it didn't, how could segfaults be
thrown?). I also dare to think that when the central memory
manager is notified that a process and all its children have
terminated, it can release all the associated resources
(file handles, dynamic memory, pipes, sockets, everything).
 
Is this assumption ...
1) generally or always false
2) always true (the OS knows and provides post-mortem cleanup)
3) generally but not always true (in that case, when does it
hold and when not?)
 
I'm also rather unsure about the actual ownership when
MALLOC is called not in my own code but, as in the SCANDIR
case, in the LIBRARY. Is the ownership the same or not?
 
The doubt arises from the possible case of SHARED libraries,
perhaps called by different programs, that actually DO NOT
terminate when a given program does. So, if the memory is
malloc-ed with the "signature" of the library, no one knows
which particular client it was serving, and it could not be
safely freed when the relevant client terminates.
 
If instead the memory is somehow attributed to the caller
and not to the library itself, it can be safely freed.
 
Obviously I will try to FREE it manually, but I'd like to
understand resource management better.
tY
 
 
--
1) Resistere, resistere, resistere.
2) Se tutti pagano le tasse, le tasse le pagano tutti
Soviet_Mario - (aka Gatto_Vizzato)
scott@slp53.sl.home (Scott Lurndal): Sep 13 05:33PM


>Now I'd like to understand how the Linux memory manager
>manage memory allocated and never released at program
>TERMINATION.
 
Unprivileged applications cannot allocate memory that survives
application (process) termination.
 
The recommendation should, however, be followed to avoid
unexpected /unplanned growth in the memory requirements of the
application.
 
Personally, I'll almost always use nftw(3) in such cases, as
scandir can require a considerable amount of memory when the
directory hierarchy is both deep and wide.
Soviet_Mario <SovietMario@CCCP.MIR>: Sep 13 07:55PM +0200

Il 13/09/19 19:33, Scott Lurndal ha scritto:
>> TERMINATION.
 
> Unprivileged applications cannot allocate memory that survives
> application (process) termination.
 
yes, but is memory allocated by a LIBRARY attributed to the
client program or to the library itself?
The library can surely be unprivileged (and its RAM freed on
unloading), but it can have an unpredictable lifetime and
thus survive "the" caller.
 
In this case how is the memory managed ?
 
 
> The recommendation should, however, be followed to avoid
> unexpected /unplanned growth in the memory requirements of the
> application.
 
yes, sure: in known contexts I try to avoid leaks. In fact
I'm trying to better understand the possible origins of leaks.
 
 
> Personally, I'll almost always use nftw(3) in such cases, as
> scandir can require a considerable amount of memory when the
> directory hierarchy is both deep and wide.
 
yes, the tree is heavy. But I'm willing to delegate to
robust library functions as much as possible. I also hope
it will be fast even if it consumes a lot.
 
And anyway I will need to store such information in other
formats no matter what. So at a given worst moment even a
second copy will be required before the malloc-ed dirent
array can be freed :\
 
 
Keith Thompson <kst-u@mib.org>: Sep 13 11:29AM -0700

>> application (process) termination.
 
> yes, but memory allocated by a LIBRARY is attributed to the
> client program or to the library itself ?
 
It's attributed to the process.
 
C++ itself (or C) doesn't say much about this, but the fact that a
library function can call malloc() and then your program can call
free() to release the allocated memory implies that it's all part
of the same pool. malloc() itself is part of the standard library.
 
Typically if two running programs are both using the same library,
they might share the code that implements the functions in that
library (if it's a shared library -- *.so or *.dll), but not
any memory for objects created via that library. For example,
two simultaneously running processes using a library, even if
they're instances of the same program, generally can't see each
other's memory. Local data in a library function is allocated on
the calling process's stack.
 
Again, this is more about the OS than about anything specified
by the language, which doesn't even guarantee that you can run
more than one program simultaneously. Any reasonable OS (other
than for small embedded systems) creates a process when you run a
program, protects processes from each other, and cleans up resources
(particularly memory) when a process terminates.
 
[...]
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
scott@slp53.sl.home (Scott Lurndal): Sep 13 06:36PM

>The library can surely be unprivileged (and its ram freed on
>unloading), but can have an unpredictable lifetime and thus
>survive "the" caller.
 
Keith has addressed this.
 
 
>formats no matter what. So at a given worst moment even a
>second copy will be required before malloc-ed dirent array
>can be freed :\
 
If you need to store it in other formats, you might consider using
nftw(3) and you can store the data once, rather than copying what
scandir returns.
Manfred <noname@add.invalid>: Sep 13 05:49PM +0200

On 8/30/2019 2:14 AM, James Kuyper wrote:
> extract the map directly, rather than by calling toMap(). So why did he
> use QVariant instead? I'm getting the impression that he used QVariant
> without bothering to consider whether it was actually needed.
 
An abstract reason (i.e. with no sight of the code) is to distinguish
between an empty message and a non-existing message:
you know that if you call messageMap["foo"] and there is no "foo"
element, the map creates a default object for you (at least std::map
does; I guess QMap does too).
If the value is a variant you get a default variant (which obviously
isn't a map), and if the value is a map you get an empty map, i.e. an
empty message.
 
Just guessing; as said, I have no idea whether this might even remotely
be the case.
(BTW, of course, if I had to try to access an element without
constructing one, find() would be the right thing to do.)
 
A similar argument would apply if you chose to use a vector: if you have
a vector of variants you could count how many fragments have been
received by counting how many elements contain strings, instead of using
empty strings for non-received elements, which could be wrong depending
on the transmission protocol.
 
Disclaimer: even if any of the above could explain what happened with
the code, there is no way to say if it could be in any way a good choice
without more info on the code and its purpose.
From what you have posted it seems to be an overcomplicated solution
for a relatively easy problem anyway, so caution is in order.
peteolcott <Here@Home>: Sep 12 06:53PM -0500

On 9/12/2019 5:07 PM, Juha Nieminen wrote:
> infinitesimals to your number system. It doesn't matter how you try
> to twist it, it's not going to work. You are not going to make that
> number exist. Not with infinitesimals, not with anything.
 
There is a contiguous set of points between (3.0, 4.0].
There is a first point in that interval, let's call it X.
 
Every point on the number line represents some number therefore the
first point X in the above interval represents a number, it might not
be a real number, or any other conventional named type of number.
X is a number on the number line.
 
 
--
Copyright 2019 Pete Olcott All rights reserved
 
"Great spirits have always encountered violent
opposition from mediocre minds." Albert Einstein
Keith Thompson <kst-u@mib.org>: Sep 12 05:47PM -0700

peteolcott <Here@Home> writes:
[SNIP]
 
Pete, will you please consider *not* posting this to comp.lang.c++
and other newsgroups where it's off-topic? You've already taken
over comp.theory, and anyone who's interested can follow you there.
(I've cross-posted and redirected followups to comp.theory.)
 
Note to other readers: This is my first *and last* attempt to ask
Pete not to post to off-topic newsgroups.
 
James Kuyper <jameskuyper@alumni.caltech.edu>: Sep 12 07:00PM -0700

On Thursday, September 12, 2019 at 7:54:09 PM UTC-4, peteolcott wrote:
...
> There is a contiguous set of points between (3.0, 4.0].
> There is a first point in that interval, let's call it X.
 
The concept that you can identify a first point in that interval
inherently leads to a contradiction. If X is in that interval, then
3.0 < X. Therefore, (3.0+X)/2.0 has a value that is manifestly greater
than 3.0, so it should be in that interval, but is manifestly less than
X, so it should occur earlier in that interval than X.
Unless and until you say something to address that argument, I'll pay
no further attention to you - I've already paid more attention than I
should.
Juha Nieminen <nospam@thanks.invalid>: Sep 13 08:14AM

> There is a contiguous set of points between (3.0, 4.0].
> There is a first point in that interval, let's call it X.
 
No, there isn't. Such a point doesn't exist.
 
Learn some number theory, will you?
peteolcott <Here@Home>: Sep 13 09:35AM -0500

On 9/13/2019 3:14 AM, Juha Nieminen wrote:
>> There is a first point in that interval, let's call it X.
 
> No, there isn't. Such a point doesn't exist.
 
> Learn some number theory, will you?
 
APPARENTLY I AM RIGHT AND YOU ARE WRONG!!!
 
https://www.encyclopediaofmath.org/index.php/Interval_and_segment
 
Interval and segment
An interval (open interval) is a set of points on a line lying
between two fixed points a and b, where a and b themselves are
considered not to belong to the interval.
 
THERE IS A FIRST POINT OF THE ABOVE INTERVAL.
THERE MAY NOT BE ANY CONVENTIONALLY NAMED NUMBER TYPE ASSOCIATED WITH THIS POINT.
 
The fact that [0.0, 1.0) is exactly one geometric point shorter
than [0.0, 1.0] proves that infinitesimal numbers do exist.
 
--
Copyright 2019 Pete Olcott All rights reserved
 
"Great spirits have always encountered violent
opposition from mediocre minds." Albert Einstein
Mr Flibble <flibble@i42.removethisbit.co.uk>: Sep 13 03:55PM +0100

On 13/09/2019 15:35, peteolcott wrote:
> The fact that [0.0, 1.0) is exactly one geometric point shorter
> than [0.0, 1.0] proves that infinitesimal numbers do exist.
 
Nonsense, there is always a number smaller than an "infinitesimal
number", ergo there is no "smallest" number, ergo "infinitesimal
number" is a nonsense concept.
 
/Flibble
 
--
"You won't burn in hell. But be nice anyway." – Ricky Gervais
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens."
– Ricky Gervais
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates
a world that is so full of injustice and pain. That's what I would say."
woodbrian77@gmail.com: Sep 12 05:53PM -0700

On Thursday, September 12, 2019 at 1:22:42 PM UTC-5, Szyk Cech wrote:
 
> ::std::vector<int32_t> vec {100,97,94,91,88,85};
> marshal<messageID::id1>(buf,vec,"Proverbs 24:27");
 
> before?
 
Yes, but it's just an example.
 
 
> I ask to make sure that it is supposed to talk only with specific server.
> My final question is:
> Can't I receive and parse data from any binary protocol?
 
Well, there's only support for message lengths of 0 or 4 bytes at
this time. G-d willing, we'll make that more flexible in the future.
Message IDs can be 1, 2, or 4 bytes.
 
 
Brian
Ian Collins <ian-news@hotmail.com>: Sep 13 03:31PM +1200

On 13/09/2019 08:03, Scott Lurndal wrote:
 
>> https://duckduckgo.com/?q=mark+cppnow+vmware&t=h_&ia=videos&iax=videos&iai=vGV5u1nxqd8
 
> If you can't explain it in your own words, fine. I've no interest
> in watching some random video

The video is interesting and probably relevant to your work (it's a
presentation by Mark Zeren from VMware).  He does make a strong case for
measuring executable size as a measure of code complexity.  Saying that,
it has no relevance to comparative compiler code sizes...
 
--
Ian.
"Öö Tiib" <ootiib@hot.ee>: Sep 12 11:28PM -0700


> > Why should anyone want to provide service that compiles your crap
> > and runs your unit tests on each commit for free?
 
> It's not crap.
 
Fine, that is orthogonal issue anyway.
 
> Services are often free in order to garner users. Often
> though they are only free for the first 30 days or so.
 
It is because continuous integration is not something that
we "use". It automates tedious tasks in the development
process, in the ideal case fully. Set up and forget. So there
are not many opportunities even to show ads to users of a
good CI.
David Brown <david.brown@hesbynett.no>: Sep 13 09:29AM +0200

On 13/09/2019 05:31, Ian Collins wrote:
> from VmWare) relevant to your work.  He does make a strong case for
> measuring executable size as a measure of code complexity.  Saying that,
> it has no relevance to comparative compiler code sizes...
 
Scott did not say the video was not interesting or relevant - he said he
had no interest in watching an unknown video link. Remember who posted
that link - /I/ certainly would not bother clicking on it, given Brian's
history.
 
What Brian should have done is answered Scott's question:
 
"""
I think executable size is important because it gives better I-cache hit
rates, or because it makes coverage testing easier, or because my
customers have dial-up modems, etc. This video by famous person Mark
Zeren from VMware explains more:
 
<https://www.youtube.com/watch?v=vGV5u1nxqd8>
"""
 
(Note the link is the proper link, not an advert for Brian's favourite
search engine.)
 
 
Personally, I still wouldn't watch the video, because I don't find video
to be a useful medium for that kind of thing. But that's just me -
other people like them.
Ian Collins <ian-news@hotmail.com>: Sep 13 07:45PM +1200

On 13/09/2019 19:29, David Brown wrote:
>> it has no relevance to comparative compiler code sizes...
 
> Scott did not say the video was not interesting or relevant - he said he
> had no interest in watching an unknown video link.
 
Did I say otherwise?
 
--
Ian.
David Brown <david.brown@hesbynett.no>: Sep 13 10:22AM +0200

On 13/09/2019 09:45, Ian Collins wrote:
 
>> Scott did not say the video was not interesting or relevant - he said he
>> had no interest in watching an unknown video link.
 
> Did I say otherwise?
 
No. My post was as much to Brian and Rick as to you, but I didn't want
to reply to everyone. My apologies if it looked like I was criticising
you. Probably it would have made more sense for my post to be a reply
to Brian's post than to yours.
scott@slp53.sl.home (Scott Lurndal): Sep 13 01:07PM

>had no interest in watching an unknown video link. Remember who posted
>that link - /I/ certainly would not bother clicking on it, given Brian's
>history.
 
That's certainly part of it. I can also read and absorb text much faster
than plowing through some interminable video presentation.
 
And in some working environments, I don't have the ability to view video.
 
 
>What Brian should have done is answered Scott's question:
 
>"""
>I think executable size is important because it gives better I-cache hit
 
The relationship between executable size (presuming that any cruft like
exception tables, debug symbol tables, run-time loader symbol tables, .rodata, .data,
rtti tables, etc. are not considered when calculating executable size) and i-cache hit
rate is minor, if it exists at all. Far more important is the size
of the text section 'working set' (to borrow from the old VAX days). There
may be a zillion pages of infrequently used text, which will never get
into the Icache at all during normal program operation.
 
 
 
 
 
>Personally, I still wouldn't watch the video, because I don't find video
>to be a useful medium for that kind of thing. But that's just me -
>other people like them.
 
I'm with you.
Mr Flibble <flibble@i42.removethisbit.co.uk>: Sep 13 02:18PM +0100

On 13/09/2019 04:31, Ian Collins wrote:
> from VmWare) relevant to your work.  He does make a strong case for
> measuring executable size as a measure of code complexity.  Saying that,
> it has no relevance to comparative compiler code sizes...
 
Executable size is a measure of how much text is in the text segment
and how much data is in the data segments, nothing more; it is NOT a
measure of code complexity at all.
 
/Flibble
 
David Brown <david.brown@hesbynett.no>: Sep 13 03:30PM +0200

On 13/09/2019 15:07, Scott Lurndal wrote:
> of the text section 'working set' (to borrow from the old VAX days). There
> may be a zillion pages of infrequently used text, which will never get
> into the Icache at all during normal program operation.
 
Yes, I know that - I was guessing at some reasons people might have for
preferring small executable sizes. (There are times when there are
other good reasons, of course - in my business, it is often important to
have small total executable size.) I don't know why Brian thinks it is
an important metric for his code - but I'd be interested to hear his
reasoning.
 
woodbrian77@gmail.com: Sep 13 07:32AM -0700

On Friday, September 13, 2019 at 8:18:54 AM UTC-5, Mr Flibble wrote:
 
> Executable size is a measure of how much text is in the text segment and
> how much data in the data segments only, it is NOT a measure of code
> complexity at all.
 
Yeah. If a refactoring leads to a smaller binary size and
fewer lines of code, I'll probably use it.
 
 
Brian
Bonita Montero <Bonita.Montero@gmail.com>: Sep 13 03:10PM +0200

I have some templated code that loads a variable of a template-type
with "xxx = T( -1 )". This is of course OK if T is a class with a
constructor which accepts an appropriate type. But I checked what
happens if T is a pointer and found that this also works. So here is
a non-templated sample of the code, of which I'd like to know whether
it is valid C++:
 
typedef int *PI;
PI pi;
void f()
{
::pi = PI( -1 );
}
 
But at least this compiles with VC++, gcc and clang.
David Brown <david.brown@hesbynett.no>: Sep 13 03:35PM +0200

On 13/09/2019 15:10, Bonita Montero wrote:
>     ::pi = PI( -1 );
> }
 
> But at least this compiles with VC++, gcc and clang.
 
It is a "functional cast expression". "PI(-1)" is identical in
functionality to "(PI) -1".
 
And like the C-style cast, it has implementation-defined behaviour.
Bonita Montero <Bonita.Montero@gmail.com>: Sep 13 03:47PM +0200


> It is a "functional cast expression". "PI(-1)" is identical in
> functionality to "(PI) -1".
> And like the C-style cast, it has implementation-dependent behaviour.
 
Sorry, I had tomatoes on my eyes when I posted this.
The above came to mind shortly after I posted it.
