Tuesday, June 20, 2017

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

Jerry Stuckle <jstucklex@attglobal.net>: Jun 19 05:44PM -0400

On 6/19/2017 3:50 PM, Chris Vine wrote:
>> majority of one.
 
> A majority of one? I don't think so. Most of your "discussions" end
> similarly. You let yourself down at the end.
 
ROFLMAO! Lusers always try to blame the messenger.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Jun 19 08:25PM -0400

On 6/19/2017 5:58 PM, Chris Vine wrote:
>>> similarly. You let yourself down at the end.
 
>> ROFLMAO! Lusers always try to blame the messenger.
 
> Quod erat demonstrandum
 
I should also add - that is a phrase you should be well familiar with.
Lusers use it regularly.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Tim Rentsch <txr@alumni.caltech.edu>: Jun 20 05:20AM -0700

>> above work, or is more explanation needed?
 
> Whether it is head, middle or tail recursion makes little
> difference in the amount of memory consumed. [...]
 
The tail-recursive list reversal shown above uses just one stack
frame to reverse a list of arbitrary length.
 
Here is the non-tail-recursive reversal function of the OP,
cleaned up slightly and with the printf() calls removed:
 
void
RecursiveReverse( Node **headRef ){
    // Base case
    if( *headRef == NULL )
        return;

    Node *first = *headRef;
    Node *rest = (*headRef)->next;

    if( rest == NULL ){
        return;
    }

    RecursiveReverse( & rest );

    first->next->next = first;
    first->next = NULL;
    // rest always points to the last node
    *headRef = rest;
}
 
which compiles to (again under gcc -Os)
 
_Z16RecursiveReversePP9list_node:
pushq %rbp
pushq %rbx
subq $24, %rsp
movq (%rdi), %rbx
movq %fs:40, %rax
movq %rax, 8(%rsp)
xorl %eax, %eax
testq %rbx, %rbx
je .L16
movq (%rbx), %rax
testq %rax, %rax
movq %rax, (%rsp)
je .L16
movq %rdi, %rbp
movq %rsp, %rdi
call _Z16RecursiveReversePP9list_node
movq (%rbx), %rax
movq %rbx, (%rax)
movq (%rsp), %rax
movq $0, (%rbx)
movq %rax, 0(%rbp)
.L16:
movq 8(%rsp), %rax
xorq %fs:40, %rax
je .L19
call __stack_chk_fail
.L19:
addq $24, %rsp
popq %rbx
popq %rbp
ret
 
which uses one stack frame per list element of the list being
reversed. (Notice the 'call' instruction in the middle of the
generated assembly.) A list with a million elements means a
million stack frames.
scott@slp53.sl.home (Scott Lurndal): Jun 20 12:50PM

>reversed. (Notice the 'call' instruction in the middle of the
>generated assembly.) A list with a million elements means a
>million stack frames.
 
On the other hand, this function uses 48 bytes of stack, so
a million element recursion would use 48MB. While this is
ten times the memory of a VAX 11/780, it's in the noise on
a 32GByte server.
 
On the gripping hand, the default maximum stack size on Redhat 6
is 10MB.
 
$ ulimit -s
10240
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 20 03:41PM +0200

On 20-Jun-17 2:20 PM, Tim Rentsch wrote:
 
> The tail-recursive list reversal shown above uses just one stack
> frame to reverse a list of arbitrary length.
 
You can't rely on that: the optimization is not guaranteed by the
standard, so when the number of items can be in the millions (your use
case) just /don't do that/.

Express the iteration directly; then the behavior is guaranteed.

For guaranteed correctness trumps clarity.
 
 
> reversed. (Notice the 'call' instruction in the middle of the
> generated assembly.) A list with a million elements means a
> million stack frames.
 
Yes, but for a list with a million items /don't use recursion/ in C++.
 
It's not guaranteed correct, because tail recursion optimization isn't
guaranteed in C++. As I mentioned earlier, even an apparently innocent
thing like a `double` result instead of an `int` can foil such optimization
with certain compilers; it's essentially not predictable, and I've never
seen it documented. My example was Visual C++ with a recursive gcd.
 
 
- Alf
Jerry Stuckle <jstucklex@attglobal.net>: Jun 20 10:00AM -0400

On 6/20/2017 8:20 AM, Tim Rentsch wrote:
>> difference in the amount of memory consumed. [...]
 
> The tail-recursive list reversal shown above uses just one stack
> frame to reverse a list of arbitrary length.
 
That is true - for this special case. But changes in other areas of the
code, such as making a destructor non-trivial, will change this code to
be recursive instead of iterative.
 
Additionally, there is nothing in the C++ standard requiring such
optimization. The optimization is compiler (and compiler option)
dependent; other compilers (or even other versions of the same compiler)
and other options can change this.
 
The result is you should take advantage of such optimizations, but never
*depend* on those optimizations. Any code which depends on
optimizations to work correctly can fail after any recompile.
 
> reversed. (Notice the 'call' instruction in the middle of the
> generated assembly.) A list with a million elements means a
> million stack frames.
 
OTOH, a minor change to your original code:
 
typedef struct list_node *List;

struct list_node {
    List next;
    char value;
};

class Log {
public:
    Log();
    ~Log();
};

static List
reverse_onto( List s, List r ){
    List t;
    Log l;
    return s ? t = s->next, s->next = r, reverse_onto( t, s ) : r;
}

void
reverse( List &s ){
    s = reverse_onto( s, 0 );
}
 
Note the addition of the Log class object to the function. Now the
generated assembler is:
 
_ZL12reverse_ontoP9list_nodeS0_:
.LFB0:
pushq %rbp
pushq %rbx
movq %rdi, %rbx
movq %rsi, %rbp
subq $24, %rsp
leaq 15(%rsp), %rdi
.LEHB0:
call _ZN3LogC1Ev
.LEHE0:
testq %rbx, %rbx
je .L4
movq (%rbx), %rdi
movq %rbx, %rsi
movq %rbp, (%rbx)
.LEHB1:
call _ZL12reverse_ontoP9list_nodeS0_
.LEHE1:
movq %rax, %rbx
.L2:
leaq 15(%rsp), %rdi
.LEHB2:
call _ZN3LogD1Ev
.LEHE2:
addq $24, %rsp
movq %rbx, %rax
popq %rbx
popq %rbp
ret
 
Note that even though the function is still tail recursive, an iterative
solution cannot be generated.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Tim Rentsch <txr@alumni.caltech.edu>: Jun 20 07:28AM -0700


> One shouldn't rely on compiler optimizations to ensure code
> correctness. Rather than using recursion write an explicit loop if
> there is likelihood of lots of iterations.
 
[..and also..]
 
> One should not write code in a way where correctness isn't
> guaranteed. Testing and peer review does not guarantee code
> correctness.
 
Point one: the concern here is not one of correctness but
one of resource requirements.
 
Point two: whether one should rely on "compiler optimizations"
to achieve a desired effect depends on the particular case
involved. It is just as wrong to never rely on what a compiler
will do as it is to always rely on what a compiler will do.
 
Point three: in most environments it is easy to perform a static
automated check to verify that the code produced will behave as
expected for functions like the tail-recursive example shown
upthread. (And more complicated examples aren't that much
harder.) This check provides the needed guarantee, without the
need of any test cases or peer review.
Jerry Stuckle <jstucklex@attglobal.net>: Jun 19 03:46PM -0400

On 6/19/2017 3:14 PM, Chris Vine wrote:
>> already made, it seems best to end this by concluding that you have
>> a problem understanding plain English.
 
> A weak response: you were doing OK but you have now let yourself down.
 
The truth hurts, doesn't it? But you'd never admit you have a problem
with basic understanding. Others with whom I have discussed this and
similar topics over the years don't have that problem. You seem to be a
majority of one.
 
> with whatever vicarious perfection comes with it. Assuming that's
> right for you, expressing oneself poorly on occasions comes with the
> territory. Get over it.
 
Nope, but I am claiming I explained things as you could find in "C for
Dummies". Too bad you can't understand it.
 
Get over it.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 20 05:00PM +0200

On 20-Jun-17 4:28 PM, Tim Rentsch wrote:
>> correctness.
 
> Point one: the concern here is not one of correctness but
> one of resource requirements.
 
When you exceed available machine stack space you get UB.
 
Correctness is therefore not a separate issue from resource requirements.
 
It's not separate even with a single compiler or small set of compilers,
for a new version or different options, or really anything (since there
is no guarantee), can foil the crucial optimization of tail recursion.
 
 
> to achieve a desired effect depends on the particular case
> involved. It is just as wrong to never rely on what a compiler
> will do as it is to always rely on what a compiler will do.
 
This seems reasonable.
 
 
> upthread. (And more complicated examples aren't that much
> harder.) This check provides the needed guarantee, without the
> need of any test cases or peer review.
 
How much work is it to implement such a check?
 
It would be nice if you could provide one, to put this argument on a
more solid foundation.
 
 
- Alf
cross@spitfire.i.gajendra.net (Dan Cross): Jun 20 03:03PM

In article <20170619171157.387448ae@bother.homenet>,
>On Mon, 19 Jun 2017 09:30:03 -0400
>Jerry Stuckle <jstucklex@attglobal.net> wrote:
>> A tail call can *NOT* always be optimized to an iterative solution,
 
I plonked Stuckle some time ago, but this point deserves rebuttal since
it is simply wrong. A true *tail call* can always be optimized away and
replaced by a goto. However, something like a non-trivial destructor
means that the function is no longer making a tail-call.
 
>> change the result to a non-iterative solution. And when that happens,
>> there is no change in the amount of memory consumed by the tail
>> recursion vs. other recursions.
 
By definition, in that case it is no longer tail recursive. Perhaps that
is what you are trying to say, but you are conflating terminology in a
confused way.
 
>such as tail call elimination - it seems best to end this by concluding
>that you express yourself poorly on occasions, of which the posting to
>which I have referred is one.
 
My sincere recommendation is to just plonk the guy and ignore him.
 
- Dan C.
Ben Bacarisse <ben.usenet@bsb.me.uk>: Jun 20 04:14PM +0100


> On 6/20/2017 8:20 AM, Tim Rentsch wrote:
>> Jerry Stuckle <jstucklex@attglobal.net> writes:
 
>>> On 6/16/2017 11:42 AM, Tim Rentsch wrote:
<snip>
>>>> taken when 's' is non-null. Notice how the last thing done in
>>>> this case, and the value returned from the function, is the
>>>> result of the recursive call to reverse_onto().
 
<snip>
> Log l;
> return s ? t = s->next, s->next = r, reverse_onto( t, s ) : r;
> }
 
A minor but significant change: the reverse_onto call is no longer a
tail call.
 
> reverse( List &s ){
> s = reverse_onto( s, 0 );
> }
 
<snip>
> Note that even though the function is still tail recursive, an iterative
> solution cannot be generated.
 
I would not call this function tail recursive. I think your point would
be better made without saying that. I think your point is that seemingly
innocent changes can have a dramatic effect on what is and is not a tail
call.
 
--
Ben.
cross@spitfire.i.gajendra.net (Dan Cross): Jun 20 03:27PM

In article <kfnbmpo6irj.fsf@x-alumni2.alumni.caltech.edu>,
>are turned into loops rather than calls. Here is a short example
>of how to implement list reversal using tail recursion:
 
> [snip]
 
Much ado has been made in this thread about tail recursion and
optimization, yet it seems to me that a perfectly readable iterative
list-reversal algorithm that runs in linear time and constant space
is being overlooked.
 
My suggestion would be to ditch the recursive functions and instead
just implement this:
 
typedef struct node Node;
struct node {
    Node *next;
    int value;
};

Node *
Reverse(Node *head)
{
    Node *list, *prev;

    prev = nullptr;
    list = head;
    while (list != nullptr) {
        Node *current = list;
        list = list->next;
        current->next = prev;
        prev = current;
    }

    return prev;
}
 
Of course, doing so would be contrary to the comp.lang.c++ spirit
of arguing minutiae to death, so feel free to ignore me. :-)
 
- Dan C.
Tim Rentsch <txr@alumni.caltech.edu>: Jun 20 09:18AM -0700


> Now, for THIS problem, the recursive solution is probably harder to
> understand than the iterative solution (maybe if the recursive
> algorithm was documented it might be clearer),
 
Personally I find a tail-recursive definition (even the very
short one I gave upthread) easier to understand than an iterative
one. I will acknowledge though that I am used to reading and
writing such functions recursively like this, and other people
are often less familiar with a recursive style. I think anyone
who wants to be an accomplished developer should at least aspire
to being able to read either style with roughly equal ease.
 
> all the arguments about efficiency are irrelevant), in which case,
> perhaps part of the exercise should be to write up documentation for
> the function explaining HOW it does its work.
 
An excellent suggestion. If I could go back I would definitely
have included some comment along these lines as part of my
example.
Jerry Stuckle <jstucklex@attglobal.net>: Jun 20 12:55PM -0400

On 6/20/2017 11:14 AM, Ben Bacarisse wrote:
>> }
 
> A minor but significant change: the reverse_onto call is no longer a
> tail call.
 
OK, but now let's change Log to:
 
class Log {
public:
    void func();
};
 
And add to the recursive function
 
l.func();
 
just to keep the compiler from optimizing the object out completely.
 
Now suddenly this function does optimize to an iteration - with no real
change to the function itself. And if you add the constructor and
destructor back to the class definition - notice NO CHANGE TO THE
FUNCTION AT ALL - the function once again cannot be optimized to an
iteration.
 
> be better made without saying that. I think your point is that seemingly
> innocent changes can have a dramatic effect on what is and is not a tail
> call.
 
As far as the function itself goes, there are no calls after the
recursion, so it is tail recursive. The presence or absence of
constructors and destructors is invisible to the function - but can have
a severe effect on the optimization of the code.
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Jun 20 12:56PM -0400

On 6/20/2017 11:27 AM, Dan Cross wrote:
 
> Of course, doing so would be contrary to the comp.lang.c++ spirit
> of arguing minutae to death, so feel free to ignore me. :-)
 
> - Dan C.
 
Exactly. If you want an iterative solution, code an iterative solution.
Don't code a recursive solution and expect the compiler to make it
iterative for you. That may or may not happen.
 
 
--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Tim Rentsch <txr@alumni.caltech.edu>: Jun 20 10:27AM -0700

>> there is likelihood of lots of iterations.
 
> Certainly not for a language like C or C++ where there is no guarantee
> that the compiler does anything like tail recursion. [...]
 
I find the brouhaha raised in the thread here to be somewhat
amusing. I routinely write simple functions using tail
recursion, finding them easier to read, easier to write, and
easier to understand. IME compilers have no difficulty turning
such functions into simple loops internally, and that has been
true for well over a decade. (Probably also before that, but
I don't have any earlier data readily available.)
 
I remember a time when programmers were told not to multiply by a
power of two but to use a shift instead. These days the advice
is just the opposite - use a multiply operation to multiply, and
rely on the compiler to do whatever micro-optimization is called
for.
 
Going farther back, I remember a time when there was debate over
whether to program in a "high-level language" (not really very
high level back then) or assembly language. In retrospect it
seems obvious, but at the time there were serious proponents of
the assembly language choice. Eventually the old-timers were
left behind, and as usual the rest is history.
 
Fast forward almost 50 years, and the same kind of debates are
happening. It's only a matter of time (and maybe that time is
already here) before compilers will perform these transformations
so reliably that new developers won't give it a second thought.
And as often happens in "scientific revolutions", many old-timers
won't ever change their minds but just become less and less
relevant as they slowly die off.
 
In the meantime I am happy to make my comments and observations
and sit back and enjoy the show. :)
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 20 07:54PM +0100

On 20/06/2017 15:28, Tim Rentsch wrote:
>> correctness.
 
> Point one: the concern here is not one of correctness but
> one of resource requirements.
 
If a stack fault occurs then the code is incorrect.
 
> to achieve a desired effect depends on the particular case
> involved. It is just as wrong to never rely on what a compiler
> will do as it is to always rely on what a compiler will do.
 
There is no guarantee that compilers will optimize the same code the
same way when other changes to other parts of the codebase happen (e.g.
whole program optimizations and link time optimizations). Also,
compiler settings may be changed for a variety of different reasons
which can cause the code to become incorrect.
 
> upthread. (And more complicated examples aren't that much
> harder.) This check provides the needed guarantee, without the
> need of any test cases or peer review.
 
A static unit test would have to be written for EVERY *use* of the
function in question. If this function is open source it could be used
in hundreds of different projects/programs.
 
Sorry but your arguments simply don't wash; I stand by my assertion.
 
/Flibble
Manfred <noname@invalid.add>: Jun 20 09:02PM +0200

On 6/20/2017 7:27 PM, Tim Rentsch wrote:
>>> there is likelihood of lots of iterations.
 
>> Certainly not for a language like C or C++ where there is no guarantee
>> that the compiler does anything like tail recursion. [...]
I am with Leigh here; what he wrote is just true.
 
> And as often happens in "scientific revolutions", many old-timers
> won't ever change their minds but just become less and less
> relevant as they slowly die off.
It looks like you are a bit confused about what compiler optimization is.
Compiler optimization /can/ (no guarantee) produce faster code and/or
use fewer resources. The only guarantee (because this is dictated by the
standard) is that optimized code will produce the same results "as if"
no optimization were in place. In other words, optimizations /must not/
alter the result of the program, and they /can/ make the program
faster/smaller. In particular, there is absolutely no guarantee that
optimizations will correct broken code.

There is a substantial difference between the examples you cited and
the case at hand. In your examples about "old-timers" and young
revolutionaries the matter was whether to optimize code by hand or not,
but nonetheless both choices had the precondition of being correct code.
In the case at hand, blind use of tail recursion can result in stack
overflow, which is just broken code.
 
Obviously this does not mean that recursion is bad. It is in fact a very
useful technique, but one must know how to use it.
Actually I /do/ use recursion and tail recursion, but, when I do, I
ensure that stack overflow cannot occur - typically this means ensuring
a proper limit in stack usage and recursion depth.
 
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 20 08:06PM +0100

On Tue, 20 Jun 2017 15:03:15 +0000 (UTC)
 
> By definition, in that case it is no longer tail recursive. Perhaps
> that is what you are trying to say, but you are conflating
> terminology in a confused way.
 
That was not me, that was Jerry. I had made the point that if a
destructor remains to be executed then the call is not in tail
position, which is the same point you are making - see my original
posting about the effect of non-trivial destructors of objects in
function scope. (The obfuscation to which you refer was part of
Jerry's responses. I understand that it is easy to miss the
development of the thread, particularly if, because of your plonking,
you can only see one side of the "conversation".)
 
> >concluding that you express yourself poorly on occasions, of which
> >the posting to which I have referred is one.
 
> My sincere recommendation is to just plonk the guy and ignore him.
 
I have found it quite amusing. Admittedly it is probably less amusing
for other readers of the newsgroup which is why I decided not to see
how far Jerry can be taken (quite a long way would be my guess): this
newsgroup is for C++ rather than exploration of his psyche.
 
What is surprising is that Jerry apparently uses his real name, but
still doesn't mind presenting himself to the world as a weirdo. I
suppose you have to give him credit for that in some odd sense.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 20 08:32PM +0100

On Tue, 20 Jun 2017 19:54:48 +0100
> On 20/06/2017 15:28, Tim Rentsch wrote:
[snip]
> (e.g. whole program optimizations and link time optimizations).
> Also, compiler settings may be changed for a variety of different
> reasons which can cause the code to become incorrect.
 
You cannot ignore optimizations in deciding how to write your code. It
would be unreasonable to write code which does not rely on return value
optimization if the outcome is that you construct heap objects to be
returned where constructing local objects with RVO would be more
efficient. RVO is required by C++17 but you should be coding for it
with any compiler produced in the last 10 years.
cross@spitfire.i.gajendra.net (Dan Cross): Jun 20 08:02PM

In article <20170620200632.630f796e@bother.homenet>,
>Jerry's responses. I understand that it is easy to miss the
>development of the thread, particularly if, because of your plonking,
>you can only see one side of the "conversation".)
 
Ah yes, I know: I was trying to indirectly respond to Stuckle's comment
by quoting "through" your post (if you'll excuse the analogy). Hence my
comments about having plonk'ed him but having to respond to that bit of
his note.... Apologies if that was not sufficiently clear.
 
To be explicit: You are correct, he is wrong, but he is even more wrong
than he realizes because he is using terminology incorrectly.
 
> [snip]
 
- Dan C.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 20 09:48PM +0100

On 20/06/2017 20:32, Chris Vine wrote:
> returned where constructing local objects with RVO would be more
> efficient. RVO is required by C++17 but you should be coding for it
> with any compiler produced in the last 10 years.
 
Bad analogy. RVO not happening doesn't result in a stack fault
(incorrect code). Read up on the "as-if" rule and observable behaviour.
 
/Flibble
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 20 10:06PM +0100

On Tue, 20 Jun 2017 21:48:14 +0100
 
> Bad analogy. RVO not happening doesn't result in a stack fault
> (incorrect code). Read up on the "as-if" rule and observable
> behaviour.
 
A failure of RVO could result in stack failure because more stack
objects are being created than would otherwise be the case. Admittedly
this is not as likely as stack failure from tail call elimination not
being carried out, but you seem to be standing on principle.
 
Your suggestion that I do not know about the as-if rule and observable
behaviour, as a kind of put down, is rather Stuckle-like. You might
like to consider whether to avoid it. It is also misplaced here,
because RVO is the only case in C++98/11, and one of the only two cases
in C++14, where the as-if rule does not apply: side effect elimination
by virtue of the RVO is permissible in C++98/11/14.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 20 10:25PM +0100

On 20/06/2017 22:06, Chris Vine wrote:
> objects are being created than would otherwise be the case. Admittedly
> this is not as likely as stack failure from tail elimination not being
> carried out, but you seem to be standing on principle.
 
Relying on RVO when RVO not happening would cause a stack fault is again
incorrect code so I see no difference.
 
/Flibble
vasilievcodercpp@gmail.com: Jun 20 12:23AM -0700

A vulnerability, in computer-security terms, is a flaw in a system that allows someone to violate its integrity or deliberately cause the program to malfunction. Practice shows that even a seemingly insignificant bug can be a serious vulnerability. Vulnerabilities can be avoided by using different methods of validation and verification of software, including static analysis. This article covers how PVS-Studio copes with the task of vulnerability search.
 
Full article: https://www.viva64.com/en/b/0514/
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
