Monday, June 17, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 6 topics

Juha Nieminen <nospam@thanks.invalid>: Jun 17 12:30PM

> The solution for extra safety is not to use recursion and dynamic allocations.
 
Some algorithms require one or the other.
 
(Some algorithms, even quite commonly used ones, are provably recursive,
meaning that they cannot be done iteratively. You can "emulate" the recursion
stack by creating your own stack data structure instead of relying on actual
recursive function calls, but it's still a recursive algorithm that requires
an amount of extra memory proportional to the amount of data to be processed.
An iterative algorithm is one that only requires a fixed amount of memory
regardless of the size of the data to be processed.)
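
As an illustration of the stack-emulation point, here is a minimal sketch
of a depth-first tree walk that avoids recursive calls by keeping an
explicit std::stack. The Node type and the printing are my own illustrative
assumptions, and the explicit stack still grows with the input, exactly as
described above:

#include <cstdio>
#include <stack>
#include <vector>

// Illustrative node type (an assumption, not from the post above).
struct Node {
    int value;
    std::vector<Node*> children;
};

// Depth-first traversal using an explicit stack instead of recursive calls.
// The stack still grows with the tree, so in the sense discussed above the
// algorithm remains "recursive" even though no function calls itself.
void visit_all(Node* root) {
    std::stack<Node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.top();
        pending.pop();
        std::printf("%d\n", n->value);
        for (Node* child : n->children) pending.push(child);
    }
}

int main() {
    Node leaf{2, {}};
    Node root{1, {&leaf}};
    visit_all(&root);
}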
Juha Nieminen <nospam@thanks.invalid>: Jun 17 12:41PM

> Visual C++ stubbornly refused to optimize tail recursion floating point
> type results. So if one relies on tail recursion in C++ one should
> better know what one is doing, and know one's compilers.
 
If relying on such compiler optimizations, one should actually check
what kind of asm the compiler produces from the code, for the
particular target in question.
 
Of course this is not a safe bet if targeting multiple platforms
(even when using the same compiler, e.g. gcc). For example, what gets
optimized for x86-64 might not be equally optimized
for ARM64.
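
A hedged illustration of that kind of check: compile something like the
following with, say, g++ -O2 -S or clang++ -O2 -S for each target and look
for a jump back to the top of the function rather than a call instruction.
The function itself is just an assumed example, and nothing guarantees any
particular compiler will optimize it:

// Tail-recursive sum of 1..n with an accumulator. Whether the recursive
// call becomes a loop depends on the compiler, options and target, so
// inspect the assembly (e.g. g++ -O2 -S sum.cpp) instead of assuming it.
unsigned long long sum_to(unsigned long long n, unsigned long long acc = 0) {
    if (n == 0) return acc;
    return sum_to(n - 1, acc + n);   // call in tail position
}

int main() {
    return static_cast<int>(sum_to(10) % 256);
}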
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 17 09:01AM -0400

On 6/16/19 6:35 AM, Chris Vine wrote:
...
> destructors still remain to execute. I cannot see how an optimizer can
> get around this where the objects concerned are other than trivial (in
> the C++-standard sense of that word).
 
You've already said the key phrase - but missed the implications. A
destructor might be non-trivial, yet calling it outside of the correct
order might also have no effect on the observable behavior of the
program. I suspect it's rare for non-trivial destructors to qualify for
such optimizations, but it's not impossible.
 
If that's the case, and particularly if the destructor is inline, an
implementation could inline the destructor call and execute the body of
the destructor before the recursive call.
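
A contrived sketch of the kind of case meant here (purely illustrative;
no claim that any real compiler performs this transformation): the
destructor below is non-trivial, yet running its body before the recursive
call would not change anything the program can observe.

// Counter has a non-trivial destructor, but its only effect is to modify a
// local variable in the caller that is never read again, so executing the
// destructor body before the recursive call would not change any observable
// behaviour of the program.
struct Counter {
    int* slot;
    explicit Counter(int* s) : slot(s) {}
    ~Counter() { ++*slot; }            // non-trivial, yet unobservable here
};

void recurse(int depth) {
    int scratch = 0;
    Counter c(&scratch);               // in the abstract machine, destroyed
    if (depth > 0) recurse(depth - 1); // only after this call returns
}

int main() {
    recurse(3);
}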
 
> code where tail call elimination is applied with non-trivial objects in
> local scope? If you can, a few lines of code should suffice to
> demonstrate this.
 
Note that since this is only allowed if it has no effect on the
observable behavior of the program, the best way to prove that
tail-recursion optimization has been performed would be to look at the
generated code, not at the behavior of that code. Running out of stack
space has undefined behavior, and does not qualify as observable
behavior, so you can't prove the optimization was performed just by
"observing" that it didn't run out of stack space.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 04:46PM +0100

On Mon, 17 Jun 2019 09:01:48 -0400
 
> If that's the case, and particularly if the destructor is inline, an
> implementation could inline the destructor call and execute the body of
> the destructor before the recursive call.
 
I hadn't missed the point. I just can't see that ever happening in
practice for an object which is not trivial (in the C++ sense), given
that the observable effects must be the same as if the destructor(s) had
executed after the recursive call(s). Optimization in such
circumstances is, in my view, so theoretical as to be practically
impossible. However, I was willing to be persuaded by actual code which
does it (see below).
 
> space has undefined behavior, and does not qualify as observable
> behavior, so you can't prove the optimization was performed just by
> "observing" that it didn't run out of stack space.
 
The way I would have been persuaded is by someone producing code
containing a recursive call with non-trivial object(s) remaining in
local scope which, when compiled to assembler (gcc -S), reuses the
existing stack frame, instead of constructing a new one and emitting a
call instruction. That should be trivial to see from the assembly code
emitted by the compiler.
 
Whilst that would persuade me that call elimination optimization which
is out of tail position is possible, I would still not want to rely on
it. It would be freak code.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 05:01PM +0100

On Mon, 17 Jun 2019 16:46:11 +0100
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote:
[snip]
 
> Whilst that would persuade me that call elimination optimization which
> is out of tail position is possible, I would still not want to rely on
> it. It would be freak code.
 
By the way, the following would not persuade me, because the compiler
could elide construction of the string on the grounds that it is never
used and then eliminate the tail call[1]. It would have to be code
which actually constructs the object in local scope when optimized and
does something with it.
 
#include <string>

void recfunc() {
    std::string s;   // non-trivial local object, never otherwise used
    recfunc();
}

int main() {
    recfunc(); // does this loop for ever or bust the stack?
}
 
[1] Surprisingly, neither gcc nor clang does so, even at -O3.
Paavo Helde <myfirstname@osa.pri.ee>: Jun 17 07:19PM +0300

On 17.06.2019 19:01, Chris Vine wrote:
> could elide construction of the string on the grounds that it is never
> used and then eliminate the tail call[1].
 
> [1] Surprisingly, neither gcc nor clang does so, even at -O3.
 
I was recently surprised to see that MSVC++2017 with full optimizations
did not optimize away the string construction in this scenario when
foo<std::string>() was called:
 
#include <cmath>

template<typename T> bool foo(T x) { return false; }
template<> bool foo<double>(double x) { return std::isnan(x); }
 
The memory allocator code lit up in the profiler. This was easy to fix
by using pass by reference instead, but how come the optimizer is so
stupid? Or is there some hidden reason why it could not eliminate a
useless object copy?
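
For reference, the pass-by-reference fix presumably looks something like
the following sketch; the original code is not shown in full, so the
signatures here are my own assumptions:

#include <cmath>
#include <string>

// Pass by const reference so that calling foo<std::string>() no longer has
// to copy-construct (and later destroy) a std::string argument.
template<typename T> bool foo(const T&) { return false; }
template<> bool foo<double>(const double& x) { return std::isnan(x); }

int main() {
    std::string s = "hello";
    return foo(s) ? 1 : 0;   // no std::string copy is made for this call
}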
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 05:49PM +0100

On Mon, 17 Jun 2019 19:19:44 +0300
> by using pass by reference instead, but how comes the optimizer is so
> stupid? Or is there any hidden reason why it could not eliminate a
> useless object copy?
 
It is odd, although the correct elimination of the construction of an
unused argument may be more difficult for the compiler to prove than the
correct elimination of the construction of an unused local object: the
observable behaviour has to be the same as if the argument were
constructed (I think).
 
Going back to unused local objects, it really surprised me that even at
-O3, this busts the stack with gcc and clang:
 
#include <string>

void recfunc() {
    std::string s;   // non-trivial local object, never otherwise used
    recfunc();
}

int main() {
    recfunc(); // does this loop for ever or bust the stack?
}
 
But this eliminates the call instruction and emits a jump instruction
instead (so looping forever).
 
void recfunc() {
    recfunc();
}

int main() {
    recfunc(); // does this loop for ever or bust the stack?
}
 
Does MSVC do any better with the first one?
Paavo Helde <myfirstname@osa.pri.ee>: Jun 17 08:36PM +0300

On 17.06.2019 19:49, Chris Vine wrote:
> recfunc(); // does this loop for ever or bust the stack?
> }
 
> Does MSVC do any better with the first one?
 
Nope, the compiler warns:
 
warning C4717: 'recfunc': recursive on all control paths, function will
cause runtime stack overflow
 
And then dutifully goes and produces a stack overflow.
 
The second version issues the same warning, but then goes into an
infinite jmp loop instead.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 07:11PM +0100

On Mon, 17 Jun 2019 20:36:55 +0300
> On 17.06.2019 19:49, Chris Vine wrote:
[snip]
 
> And then dutifully goes and produces a stack overflow.
 
> The second version issues the same warning, but then goes into infinite
> jmp loop instead.
 
Interesting.
 
Presumably the "thinking" of gcc, clang and MSVC is that because the
destructor of std::string might have an observable effect, and because
it is mandated by the C++ standard that any such effect must take place
after the recursive call, it must construct a std::string object even
though it knows that the object is not otherwise used, and therefore
also must not eliminate the recursive call instruction.
 
That's pretty pathetic. std::basic_string is a template and the
bodies of the constructor and destructor must surely therefore be
visible to the compiler at compile time, which can see that there is no
effect other than the construction and destruction of the string
object itself.
 
It also demonstrates the improbability of call elimination being carried
out where there is a non-trivial object in local scope which _is_
actually used (so that the call cannot be in tail position).
Paavo Helde <myfirstname@osa.pri.ee>: Jun 17 10:28PM +0300

On 17.06.2019 19:49, Chris Vine wrote:
> correct elimination of the construction of an unused local object: the
> observable behaviour has to be the same as if the argument were
> constructed (I think).
 
I do not see how this might be the case. The construction of an
otherwise unused local object might be very significant (think of a
mutex lock object). So it cannot be optimized away unless the compiler
can really prove there is no change in observable behavior.
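
A minimal sketch of the mutex-lock case, where the "unused" local's
construction and destruction are exactly the point and clearly cannot be
elided (the names here are illustrative only):

#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;

void bump() {
    std::lock_guard<std::mutex> lock(m);   // never named again, but its
    ++shared_value;                        // construction and destruction
}                                          // are the whole point

int main() {
    std::thread t1(bump), t2(bump);
    t1.join();
    t2.join();
    return shared_value;   // reliably 2 precisely because of the lock_guard
}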
 
There are only limited situations where the standard allows code with
potentially observable behavior to be optimized away, and these are all
related to copy constructor elision. An unused argument might certainly
be constructed by a copy constructor, but for some reason this is not
listed in the standard as an allowed scenario for copy ctor elision.
 
So in both cases the observable behavior has to be retained, and I do
not see much difference in these scenarios.
 
I guess the reason why std::string construction cannot be optimized away
is that it potentially calls memory allocation and deallocation routines
in its ctors and dtor, which from the compiler viewpoint could have
unknown side effects.
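
One concrete way to see why allocation is hard for a compiler to treat as
effect-free: the program itself may replace the global allocation
functions, which makes every heap allocation visible to user code. A
hedged sketch (the printing is illustrative only):

#include <cstdio>
#include <cstdlib>
#include <new>
#include <string>

// Replacing ::operator new is legal, and it makes every heap allocation an
// effect that user code can observe, which is one plausible reason compilers
// are reluctant to discard the construction of an "unused" std::string.
void* operator new(std::size_t size) {
    std::printf("allocating %zu bytes\n", size);
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    std::string s(100, 'x');   // long enough to defeat SSO and force a call
}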
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 09:09PM +0100

On Mon, 17 Jun 2019 22:28:02 +0300
> of an otherwise unused object might be very significant (think of a
> mutex lock object). So it cannot be optimized away unless the compiler
> can really prove there is no change in observable behavior.
 
I agree it would be very difficult to optimize out a mutex lock object,
because mutex locking definitely has observable side-effects - it
causes memory to synchronize and might cause the emission of a fence
instruction. That is not the case here.
 
> listed in the standard as an allowed scenario for copy ctor elision.
 
> So in both cases the observable behavior has to be retained, and I do
> not see much difference in these scenarios.
 
As you say, copy elision is a special permission given by the C++
standard which applies notwithstanding that the elision might be
observable. Elision as an ordinary optimization is relevant only where
there aren't any observable effects (where the observable behaviour of
the program does not differ depending on whether or not the object
concerned is constructed).
 
Copy elision at any rate used to apply to (a) certain return statements,
and (b) certain object initializations. In C++17, as I understand it,
the first of those is now mandatory where the return statement is formed
from a prvalue (a temporary). These are not relevant to the cases we
have been discussing.
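
A small sketch of that distinction as I understand it (not taken from the
posts above): the prvalue return is guaranteed to be elided in C++17, while
the named-object return (NRVO) remains an optional optimization.

#include <string>

std::string make_prvalue() {
    return std::string("temporary"); // prvalue: in C++17 elision here is
}                                    // mandatory, no copy/move is possible

std::string make_named() {
    std::string s("named");
    return s;                        // NRVO: elision permitted but still
}                                    // optional, so a move may happen

int main() {
    std::string a = make_prvalue();
    std::string b = make_named();
    return (a.size() + b.size()) > 0 ? 0 : 1;
}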
 
> is that it potentially calls memory allocation and deallocation routines
> in its ctors and dtor, which from the compiler viewpoint could have
> unknown side effects.
 
So I agree with you except on two points. First, I don't think
allocating and deallocating memory counts as an observable side
effect. It is a feature of the language. The fact that it might lead
to an exhaustion of memory for other allocations doesn't seem to me to
count as an observable effect. If I am wrong it explains why the
compilers would not optimize out the string object in the code we have
discussed, but I would take a lot of persuading that memory
allocation does count as a side effect in this way (but I don't mind
being proved wrong).
 
Secondly, I still suspect that it could be more difficult for a compiler
to establish that an unused argument has no side effects than an unused
local object. But I am not a compiler writer and I defer on that to
those who are.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 17 09:36PM +0100

On Mon, 17 Jun 2019 21:09:58 +0100
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote:
[snip]
> discussed, but I would take a lot of persuading that memory
> allocation does count as a side effect in this way (but I don't mind
> being proved wrong).
 
It follows that I can envisage a case where the existence of a
std::string object in local scope does not preclude call elimination
optimization even where the string is used, provided that the string is
not used in connection with the provision of an argument for the
recursive call (using it as an argument would prevent early
destruction).
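
Something along these lines is the sort of code envisaged (a sketch only,
with names of my own choosing; no claim that any current compiler actually
performs the elimination): the string is genuinely used, but not by the
recursive call, so in principle it could be destroyed before the call is
made.

#include <cstdio>
#include <string>

void recfunc(int depth) {
    std::string s = "level " + std::to_string(depth);
    std::puts(s.c_str());              // the string is genuinely used...
    if (depth > 0) recfunc(depth - 1); // ...but not by the recursive call,
}                                      // so it could in principle be
                                       // destroyed before the call is made

int main() {
    recfunc(3);
}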
 
I suppose one could think of this as the Socratic method in operation,
but I am not entirely happy with that conclusion.
sergstrukovlink@gmail.com: Jun 17 08:53AM -0700

CCore-3-60 is released and available on
 
https://github.com/SergeyStrukov/CCore-3-xx/releases
 
Here are some brief notes on the CCore project:
 
http://sergeystrukov.github.io/CCore-Sphinx-3-60/brief/brief.html
 
Enjoy!
Keith Thompson <kst-u@mib.org>: Jun 17 12:51PM -0700


> https://github.com/SergeyStrukov/CCore-3-xx/releases
 
> Here are some brief notes on the CCore project:
 
> http://sergeystrukov.github.io/CCore-Sphinx-3-60/brief/brief.html
 
I suggest that your "brief.html" could be improved if it included,
at the top, a brief description of *what CCore is*. I'm sure the
information there is useful to people who are already using it and
want to know what (new) features it provides, but it's less useful
to those of us who hadn't necessarily heard of it before.
 
From the GitHub page https://github.com/SergeyStrukov/CCore-3-xx :
 
CCore is a development platform.
Its primary purpose is a real-time embedded development,
but it also a great platform for a usual host development.
 
CCore gives a more professional language support for C++ development,
than a standard C++ library, with greater attention to details of
implementation, efficiency, robustness and derivability.
It is also more "encyclopedic".
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
"Öö Tiib" <ootiib@hot.ee>: Jun 17 01:27AM -0700

On Sunday, 16 June 2019 18:25:19 UTC+3, Melzzzzz wrote:
> On 2019-06-16, alexo <alelvb@inwind.it> wrote:
 
...
 
> > -std=c++17 -pedantic -Wall -Wextra
 
...
 
> > What could it be the problem with the double[] type?
 
> Probably shared_pointer can't point to array.
 
shared_ptr works for raw arrays sort of fine, but make_shared does
not. C++20 is planned to add array support to make_shared. One should
read the documentation on one's compiler's preliminary/experimental
C++20 support to figure out the extent of that, and since such support
is usually buggy I would not allow it into production code.
 
I would (even after C++20) continue to use std::vector or std::array
for all arrays (depending on whether the size of the array is dynamic
or fixed). shared_ptr is for achieving shared ownership of the
pointed-at object (which can indeed be a std::vector or std::array,
but that is orthogonal).
 
Shared ownership is, in actual practice, a very rarely needed
complication. Masses of shared_ptr in code are (as a rule) premature
pessimizations and over-engineering from novice programmers.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 17 12:19PM +0200

On 17.06.2019 10:27, Öö Tiib wrote:
> very rarely needed complication. Masses of shared_ptr in
> code are (as kind of rule) preliminary pessimizations and
> over-engineerings from novice programmers.
 
It's a necessary evil.
 
As I recall, the Boost folks chose shared_ptr as a one-size-fits-all
universal potato thing, rather than the complications of something like
Andrei's policy-based customizable smart pointers.
 
Strong typing will always involve a conflict between ideal restrictions
and the mass of complexity a programmer can be expected to know about
and deal with in order to use a near-ideal-fit abstraction in each case.
 
Oh, that reminds me, it's about the same issue right now with newspaper
pay-walls in Norway. Nearly all the little local newspapers in Norway
now place nearly all interesting stuff behind pay-walls. As if each
assumes that people all over the country will pay /them/. The cost each
time that you're interested in something is reasonable, sort of. But not
the accumulated cost when you take that kind of cost every day, say.
 
The result of that high total cost, the accumulated cost, is that
everybody suffers from reduced traffic and hence reduced ad revenue.
And in development it's the same: too high a cost in terms of programmer
education and thinking/design time (plus belatedly realizing that one
chose a slightly wrong path, so having to undo efforts and do it again)
doesn't help, even if in each concrete little case the cost of the
relevant abstraction, in learning and application and runtime overhead
etc., is negligible compared to the savings in that case.
Hence a cheap all-round potato solution is desirable: std::shared_ptr.
 
 
Cheers!,
 
- Alf
alexo <alelvb@inwind.it>: Jun 17 02:40PM +0200

Il 17/06/19 10:27, Öö Tiib ha scritto:
 
 
> of C++20 of their compiler to figure out extent of
> that and since it can be buggy like usually I would
> not allow it into product code.
 
I'm at a tutorial level of programming in C++. I taught myself
C++03 after having played a bit with C, but I'm just a hobbyist
programmer.
 
I just wanted to try the code found in the text book:
 
Beginning C++17 5th edition
by Ivor Horton and Peter Van Weert
Apress
 
make_unique<> is reported as working with a contiguous block of
dynamically allocated memory. My code was just an attempt written for
symmetry; the text doesn't mention the same possibility with shared_ptr<>.
 
If you change line 13:
 
shared_ptr<double[]> pdata1 {make_shared<double[]>(n)};
 
with the following:
 
shared_ptr<double[]> pdata1 {new double[n]};
 
the code compiles fine.
 
Does the memory shared by the two pointer
objects pdata1 and pdata2 get released automatically even if I don't
make use of make_shared? It should be so, but I ask people more
expert in programming than me.
 
Thank you
"Öö Tiib" <ootiib@hot.ee>: Jun 17 08:12AM -0700

On Monday, 17 June 2019 15:40:54 UTC+3, alexo wrote:
 
> with the following:
 
> shared_ptr<double[]> pdata1 {new double[n]};
 
> the code compiles fine.
 
Yes. That (AFAIK since C++17) should work fine.
 
> objects pdata1 and pdata2 get released automatically even if I don't
> make use of make_shared? It should be so but I ask to people more
> experts in programming than me.
 
I'm a bit confused since I see nothing named "pdata2".
There is a difference between make_shared and new in exactly
that lifetime and memory management.
 
With new, the managed object is allocated before its constructor is
called, and then the manager (control) block is allocated separately.
Because of that, with new the object can be deleted (IOW the destructor
and deallocator both called) when the shared count reaches zero, while
the memory for the manager block can be released only when the weak
count also reaches zero.
 
With make_shared there is, however, only one allocation (as an
optimization) for both the managed object and the manager block.
Therefore the destructor (but not the deallocation) should be run
when the shared count reaches zero, and the memory for both should be
released when the weak count also reaches zero. That wasn't implemented
for arrays in C++17 but likely will be in C++20.
 
However, I would still avoid using a raw array and use std::vector or
std::array instead. That typically wins in robustness and simplicity,
and often even in performance.
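
For completeness, a sketch of the options being compared; the
make_shared<double[]> form is the one that may be missing or experimental,
as noted above.

#include <cstddef>
#include <memory>
#include <vector>

int main() {
    const std::size_t n = 16;

    // Works since C++17: shared_ptr with an array type; the double[16] and
    // the control block are two separate allocations.
    std::shared_ptr<double[]> a{new double[n]};
    a[0] = 1.0;

    // Planned for C++20, possibly missing or experimental today:
    // std::shared_ptr<double[]> b = std::make_shared<double[]>(n);

    // Usually preferable: let std::vector own the storage outright.
    std::vector<double> v(n, 0.0);
    v[0] = a[0];

    return 0;
}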
alexo <alelvb@inwind.it>: Jun 17 06:14PM +0200

Il 17/06/19 17:12, Öö Tiib ha scritto:
 
>> shared_ptr<double[]> pdata1 {new double[n]};
 
>> the code compiles fine.
 
> Yes. That (AFAIK since C++17) should work fine.
 
OK
 
 
> I'm bit confused since I see nothing named "pdata2".
> There is difference between usage of make_shared and new
> exactly in that life-time and memory management.
 
I was referring to the source code I wrote in a previous post; look
at the beginning of this thread.
> However I still would avoid using raw array and use std::vector
> or std::array instead. It typically wins in robustness and
> simplicity and often even in performance.
 
Understood.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 17 08:35AM -0400

On 6/15/19 1:58 PM, Robert Wessel wrote:
> On Fri, 14 Jun 2019 08:31:33 -0400, James Kuyper
> <jameskuyper@alumni.caltech.edu> wrote:
...
>> multiple of 2), and on such systems 3.0/2.0 == 1.5.
 
> FSVO "real". Ternary FP has existed on at least emulated (ternary)
> systems.
 
I mentioned ternary systems further down in the same message. I did not
include them in the above comment, because it was specifically about
FLT_RADIX, a feature of the C standard library, and I know of no
implementation of C targeting a ternary platform (which doesn't mean
that there isn't one).
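
For anyone curious what their own implementation uses, FLT_RADIX can be
inspected directly; a trivial sketch:

#include <cfloat>
#include <cstdio>

int main() {
    // FLT_RADIX is the radix of the floating-point representation; it is 2
    // on essentially every mainstream platform, and 3.0/2.0 is then exact.
    std::printf("FLT_RADIX = %d\n", FLT_RADIX);
    std::printf("3.0/2.0 == 1.5 ? %s\n", (3.0 / 2.0 == 1.5) ? "yes" : "no");
}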
nobody@example.org (Scott): Jun 17 03:02AM

On Sun, 16 Jun 2019 14:24:07 -0700, Keith Thompson <kst-u@mib.org>
wrote:
>diagnostics. In this mode, it should be a conforming C99 compiler.
 
>"gcc -std=c99 -pedantic-errors" turns all required diagnostics into
>fatal errors.
 
Ah yes, that accounts for it. Thank you for checking.
Bonita Montero <Bonita.Montero@gmail.com>: Jun 17 06:31AM +0200

>> In Java / .net that's different because the definition is checked.
 
> How is that relevant?
 
This thread is about C# vs. C/C++.
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 17 08:30AM -0400

On 6/15/19 9:58 PM, Scott wrote:
...
> C does allow implicit declarations.
 
Not anymore.
 
> You can call a previously
> undeclared function, and C will trust that the types you're passing
> are the types the function's expecting.
 
It also assumed that the return type was "int". If the definition of the
function returned a type incompatible with "int", the behavior was
undefined.
 
> I think it's a bad idea, ...
 
So did my instructor in my first C class, in 1979. So do I. So did the
designer of C++, which is why implicit int has never been a feature of
C++. So did the C committee, which is why they removed it in C99, which
became the new official version of the C standard two decades ago.
 
> ... and
> it usually yields warnings suggesting that it's a bad idea.
 
Some compilers provided such warnings even before C99 came out. Of
course, you might still receive only warnings. The only thing that you
can put in a C program that would require a conforming implementation of
C to fail to translate your program is a #error directive, which
explicitly instructs it to fail. That's why wise C programmers pay
attention to all warnings, even the non-fatal ones.
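
The #error mechanism mentioned above looks roughly like this in practice
(a generic sketch, not tied to any particular project):

// A conforming implementation must refuse to translate a program containing
// a #error directive that survives conditional inclusion, which makes it the
// one portable way to turn "this configuration is unsupported" into a hard
// failure rather than a warning.
#if !defined(__cplusplus) && \
    (!defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901L)
#error "This translation unit requires C99 or later, or a C++ compiler"
#endif

int main(void) { return 0; }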
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 17 01:02AM +0100

> against Jesus Christ.
 
> Time is almost up fir this age of the gentiles. Next up is the
> rapture, and the seven year Tribulation begins.
 
Cool story bro. Also, Satan invented fossils, yes?
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
rick.c.hodgin@gmail.com: Jun 16 06:36PM -0700

On Sunday, June 16, 2019 at 8:02:43 PM UTC-4, Mr Flibble wrote:
> Cool story bro. Also, Satan invented fossils, yes?
 
You'll find out soon enough, Leigh.
 
--
Rick C. Hodgin
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
