Sunday, June 18, 2017

Digest for comp.lang.c++@googlegroups.com - 12 updates in 1 topic

"Öö Tiib" <ootiib@hot.ee>: Jun 18 04:58AM -0700

On Sunday, 18 June 2017 00:33:18 UTC+3, Jerry Stuckle wrote:
> non-trivial destructor... IF the function call is in the tail
> position... IF you use gcc ... IF you use -O2 or -O3 THEN gcc MIGHT make
> this an iterative call.
 
Red herring. Microsoft's compiler also does tail-recursion optimization
with /O2. On a Mac, clang does it with -O2 and ICC does it with -O2 there.
These are the usual C++ compilers and the usual optimization levels for
release builds with those compilers.
 
> true. Any change in the above and it won't be iterative. Counting on
> it being an iterative operation is an invitation to bugs when
> maintenance changes those conditions and the call is no longer iterative.
 
You are making a special case here, Jerry. A special case in several ways.
 
Indeed there can exist a C++ compiler incapable of doing tail-recursion
optimization. I don't know of that compiler. What *special* compiler is it,
Jerry?
 
On all compilers the optimizations can be turned off. That is done in
very *special* cases (typically to avoid triggering defects in the compiler
or in legacy code). What is your reason to turn it off, Jerry?

There can even be code where the compiler cannot do that optimization.
Please show that *special* example code, Jerry. I am close to certain that
we can refactor it to repair the issue.
leigh.v.johnston@googlemail.com: Jun 18 05:18AM -0700

One shouldn't rely on compiler optimizations to ensure code correctness. Rather than using recursion, write an explicit loop if there is a likelihood of many iterations.
 
/Flibble
"Öö Tiib" <ootiib@hot.ee>: Jun 18 06:08AM -0700

> One shouldn't rely on compiler optimizations to ensure code correctness. Rather than using recursion write an explicit loop if there is likelihood of lots of iterations.
 
What do you mean by code correctness? One should not rely on writing code
one way or another to ensure code correctness. There are sanity checks in
code, unit tests, peer reviews and testing that help to find most defects.
We are all humans and so we are all fallible.
leigh.v.johnston@googlemail.com: Jun 18 06:15AM -0700

One should not write code in a way where correctness isn't guaranteed. Testing and peer review do not guarantee code correctness.
 
/Flibble
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 18 03:46PM +0200

On 18-Jun-17 3:08 PM, Öö Tiib wrote:
>> correctness. Rather than using recursion write an explicit loop if
>> there is likelihood of lots of iterations.
 
> What you mean by code correctness?
 
I think he means, do it iteratively instead of recursively if the depth of
recursion can exceed say 200 (a number that I grabbed from thin air, not
to say vacuum, but I think it's large enough to support common things
like binary tree traversal, at least for them balanced trees).
 
Because if the compiler fails to optimize the recursion, perhaps after
some maintainer has added a "useful" call tracer variable whose
destructor execution foils the optimization, then the correctness of the
code depends on usually not-so-extremely-high implementation limits.
 
Usually only a few MB of stack space are available.
 
On the other hand, when one does employ recursion, then if
tail-recursion is practically possible it's the thing to aim for.
 
In some cases, e.g. the gcd function, I find the recursive version very
easy to comprehend, while the iterative version is more complex to me,
and I have to understand it /in terms of/ the recursive version. So I
think for code clarity some functions should ideally be expressed with
recursion instead of loops. But it depends very much on the function.
 
 
> code in one way or other to ensure code correctness. There are sanity
> checks in code, unit tests, peer reviews and testing that help to
> find most defects. We are all humans and so we are all fallible.
 
I just wish our bodies weren't so fallible. :(
 
 
Cheers!,
 
- Alf
"Öö Tiib" <ootiib@hot.ee>: Jun 18 07:01AM -0700

> One should not write code in a way where correctness isn't guaranteed. Testing and peer review does not guarantee code correctness.
 
A person can guarantee correctness like a "money back guarantee" and the
like. A scientific proof of correctness can prove it (but not guarantee it).
Rules have neither power. "Grammatical" rules can help to avoid certain
errors, but at the end there remain limitless ways to express nonsense.

Testing and peer review can, however, help to find and repair enough
defects to make the software useful in practice, so that people will use it.
Jerry Stuckle <jstucklex@attglobal.net>: Jun 18 10:47AM -0400

On 6/18/2017 7:58 AM, Öö Tiib wrote:
>> it being an iterative operation is an invitation to bugs when
>> maintenance changes those conditions and the call is no longer iterative.
 
> You are making special case here, Jerry. In several ways special case.
 
No, I'm not. I'm saying "if this... and if that... then the compiler can
optimize to iteration. And if ANY of those conditions change, the
compiler cannot".
 
That is the definition of a "special case".
 
> Indeed there can exist C++ compiler incapable of doing tail recursion optimization. I don't know that compiler. What *special* compiler it is,
> Jerry?
 
Any number of compilers. Even gcc if you don't use -O2 or -O3.
 
Additionally, this can lead to all kinds of bugs. For instance, maybe
you need to change a class so that the destructor is no longer trivial.
Now the compiler can't optimize to an iteration - potentially causing
problems in one or more areas of the code completely unrelated to the
changes made to the destructor.
 
 
> There even can be code where compiler can not do that optimization.
> Please show that *special* example code Jerry. I am close to certain that
> we can refactor it to repair the issue.
 
There are many reasons for turning it off. Debugging is a very common
one. And there have been known problems in gcc where optimization
produces incorrect code.
 
And older compilers (many of which are still in use for various reasons)
may not perform that optimization.
 
The bottom line here is - don't depend on the compiler to perform this
particular iteration. If you want an iterative solution, write an
iterative solution.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Richard Damon <Richard@Damon-Family.org>: Jun 18 02:34PM -0400

On 6/17/17 5:33 PM, Jerry Stuckle wrote:
> true. Any change in the above and it won't be iterative. Counting on
> it being an iterative operation is an invitation to bugs when
> maintenance changes those conditions and the call is no longer iterative.
 
I think I am detecting a difference of definitions.
 
The classic definition of 'tail-recursion' is when a function ends with
a recursive call, and the only thing that needs to be done following is
a return, possibly of the results of the recursive call.
 
Such a call can be (on appropriate architectures) replaced with a bit of
code and a goto, and not need to build additional stack frames.
 
The exceptions quoted above are cases where, even though the last statement
is a return of the recursive call, and thus appears to the untrained eye
as tail-recursive, there are some other implicit actions between the return
of the recursive call and the return from this function, which makes it
NOT tail-recursive by that definition.
 
If you want to use an alternate (and likely less useful) definition of
tail-recursive, to make something that merely appears tail-recursive by
the classic definition become tail-recursive by definition, you need to
be clear about this, and accept that some standard properties no longer
hold.
"Öö Tiib" <ootiib@hot.ee>: Jun 18 12:36PM -0700

On Sunday, 18 June 2017 16:46:13 UTC+3, Alf P. Steinbach wrote:
> recursion can exceed say 200 (a number that I grabbed from thin air, not
> to say vacuum, but I think it's large enough to support common things
> like binary tree traversal, at least for them balanced trees).
 
I have seen recursion with a depth of thousands, but that was a naive
algorithm. There it does not matter whether it is a loop or recursion;
the correct fix is a better algorithm. Note that a balanced binary tree
with 200 levels is *huge*. It has billions of times more nodes than there
are atoms on Earth. It may be that we never reach such technologies.
 
> destructor execution foils the optimization, then the correctness of the
> code depends on usually not-so-extremely-high implementation limits.
 
> Usually only a few MB of stack space are available.
 
Yes, the problem with clever algorithms has always been that their
performance is simple to damage with some cargo-cult edit. So if
performance matters, then the critical parts of the program should be
covered with automated stress tests that will help to notice when some
maintenance edit has broken something.
 
> and I have to understand it /in terms of/ the recursive version. So I
> think for code clarity some functions should ideally be expressed with
> recursion instead of loops. But it depends very much on the function.
 
I used to hold the opinion that recursion is worth avoiding, because back
then it often resulted in weaker efficiency. That has changed. Now I use
recursion when it is easier to understand than a loop. Compilers optimize
well, and so readability has higher priority. *When* efficiency is needed
*and* the compiler can't achieve it, then it can be rearranged manually.
 
> > checks in code, unit tests, peer reviews and testing that help to
> > find most defects. We are all humans and so we are all fallible.
 
> I just wish our bodies weren't so fallible. :(
 
Yes, no one is Spider-Man and everyone had better be careful. The current
direction is seemingly even towards more fallibility. Modern medicine does
not let the weaker of us (or the ones with defects) die as small kids but
OTOH can't fully compensate either. Globalization lets all diseases spread
faster than ever.
Jerry Stuckle <jstucklex@attglobal.net>: Jun 18 04:05PM -0400

On 6/18/2017 2:34 PM, Richard Damon wrote:
> untrained eye as tail-recursive, the fact that there are some other
> implicit actions between the return of the recursive call and the return
> from this function, makes it NOT tail-recursive by the definition.
 
But that cannot be determined from the function itself. You would have to
look at each object used in the function to determine if that is the case.
Not only is that against OO philosophy, you may not even have access to
the source for those objects. But then neither would the compiler, so
it wouldn't know whether the code is tail recursive or not.
 
> tail-recursive by the classic definition to be tail-recursive by
> definition, you need to be clear on this, and accept that some standard
> properties don't hold anymore.
 
There are all kinds of special cases in C++. What I am saying is not to
count on optimizations such as tail recursion. If it's there, that's
nice, and it can speed up the code as well as save resources. But too many
unrelated actions can change the situation, making the optimization
no longer possible.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Jun 18 02:39PM -0700

On Sunday, 18 June 2017 17:47:07 UTC+3, Jerry Stuckle wrote:
> optimize to iteration. And if ANY of those conditions change, the
> compiler cannot".
 
> That is the definition of "special case".
 
Yes, if it is ANY of the special cases that you bring up, then the
compiler does not do the optimization.
 
 
> > Indeed there can exist C++ compiler incapable of doing tail recursion optimization. I don't know that compiler. What *special* compiler it is,
> > Jerry?
 
> Any number of compilers. Even gcc if you don't use -O2 or -O3.
 
With gcc these are common options, not special options.
 
> Now the compiler can't optimize to an iteration - potentially causing
> problems in one or more areas of the code completely unrelated to the
> changes made to the destructor.
 
When the compiler cannot turn it into iteration, that can cause worse
performance. Bad performance does not usually manifest itself as an
outright defect or bug. Where it does, it must be quite a special case.
 
 
> There are many reasons for turning it off. Debugging is a very common
> reason why. And there have been known problems in gcc which show up
> with incorrect optimizations.
 
It sounds like a very special case when the performance of the debug build
is reason enough to do manual optimizations. I can recall such cases,
but those were several years ago. Turning all optimizations off sounds
like a really special solution, since usually it is easier to work with
imperfect tools than with just teeth and toenails.
 
 
> And older compilers (many of which are still in use for various reasons)
> may not perform that optimization.
 
Indeed, there were old times when compilers were weaker and we had to
optimize manually more often. There still are some teams who have to use
those old compilers. That is not a good reason for others to manually
optimize their code.
 
 
> The bottom line here is - don't depend on the compiler to perform this
> particular iteration. If you want an iterative solution, write an
> iterative solution.
 
I did not suggest depending on such implementation details one way or the
other. I would suggest instead trying to remove the fragility in the
algorithm that causes such dependency on its details.
Jerry Stuckle <jstucklex@attglobal.net>: Jun 18 05:55PM -0400

On 6/18/2017 5:39 PM, Öö Tiib wrote:
>>> Jerry?
 
>> Any number of compilers. Even gcc if you don't use -O2 or -O3.
 
> With gcc these are common options not special options.
 
They are not the defaults. That's key.
 
> performance. Bad performance does not usually manifest itself as
> outright defect or bug. Where it does there it must be is quite special
> case.
 
Not *USUALLY*. However, in this case the difference can be extreme.
Recursion adds to the stack. A change in the code - such as making a
destructor non-trivial - that changes the optimization from iteration
back to recursion could cause the program to run out of stack space.
 
> but those were several years ago. Turning all optimizations off sounds
> like really special solution since usually it is easier to work with
> imperfect tools than just with teeth and toenails.
 
Not really. In fact, every project I've ever been on starts with
unoptimized code. Once that is working, we check to see if it works
optimized. You'd be surprised how many times compiler bugs have shown
up in the optimization code - even in gcc.
 
> manually optimize more. There still are some teams who have to use these
> old compilers. That is not good reason for others to manually optimize
> their code.
 
I didn't say anything about manually optimizing the code. What I said
was if you need an iterative solution, write an iterative solution.
Don't depend on the compiler to optimize it.
 
 
> I did not suggest to depend on such implementation details one way or
> other. I would suggest to try to remove that fragility that causes
> such dependency on details of algorithm instead.
 
But that is exactly what you are arguing.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
