Thursday, June 22, 2017

Digest for comp.lang.c++@googlegroups.com - 19 updates in 3 topics

Tim Rentsch <txr@alumni.caltech.edu>: Jun 22 06:52AM -0700


> In effect my implementation of Timer had baked in far more behaviour
> than my mental model of the class. The CancellableTimer was (in the
> Liskov sense) a Timer-as-imagined, but not a Timer-as-implemented.
 
Right, I understand (and had more-or-less inferred as much from
your earlier comments).
 
There are two things going on here, both of which I think are
noteworthy.
 
One is that how destructors work is wired in at a deep level in
C++, in such a way that the usual semantics is nearly impossible
to override. That together with the member variable being a
reference created an insoluble problem. I'm surprised that C++
doesn't provide a means to choose alternate semantics for how
destructors work when a subclass might want to do that. I played
around with various schemes to work around that; some were nicer
than others but I didn't find any that strike me as completely
satisfactory.
 
The second thing is that C++ doesn't really provide a way to
define what you want here. Or maybe a better way to say this
is that C++ does provide ways to define things like what you
want, but not what you want precisely. There are two kinds of
compatibility that C++ accommodates: interface compatibility,
and implementation compatibility. The problem is that what you
want falls somewhere in between those two endpoints.
 
Another way of expressing the second point is not being able to
distinguish types and classes. It's been known for more than 30
years that types and classes are not synonymous, but that
understanding hasn't yet percolated out to the community at
large. Unfortunately C++ is firmly wedded to the concept of
seeing classes as types (or sometimes the other way around)
rather than supporting the two as distinct notions. Given how
C++ has evolved it would be very difficult to change that now,
but I think it's worth starting the conversation, so that at
least the understanding will be there even if it takes the
language a while to catch up.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 22 05:25PM +0100

On 22/06/2017 14:52, Tim Rentsch wrote:
> but I think it's worth starting the conversation, so that at
> least the understanding will be there even if it takes the
> language a while to catch up.
 
What absolute nonsense! Of course classes are types. An instance of a
type is an object, thus:
 
int a = 42; // a is the name of an OBJECT of scalar TYPE 'int'.
foo o = 42; // o is the name of an OBJECT of class TYPE 'foo'.
 
/Flibble
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 22 06:55PM +0200

On 22-Jun-17 6:25 PM, Mr Flibble wrote:
> type is an object, thus:
 
> int a = 42; // a is the name of an OBJECT of scalar TYPE 'int'.
> foo o = 42; // o is the name of an OBJECT of class TYPE 'foo'.
 
I think Tim means that there are other things that also are types, so
that "class" is a much more specialized, less general term than "type".
 
For example, even in what C++ supports, as you show, we have `enum`
types, and built-in types, and pointer and reference types, none of
which are classes.
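Alf's distinction can be checked directly with the `std::is_class` trait, which is false for every one of those non-class types (a minimal sketch; `Color` is just an illustrative name):

```cpp
#include <string>
#include <type_traits>

enum class Color { red, green, blue };   // an enum type, not a class

static_assert( !std::is_class<int>::value,    "built-in type, not a class" );
static_assert( !std::is_class<Color>::value,  "enum type, not a class" );
static_assert( !std::is_class<int*>::value,   "pointer type, not a class" );
static_assert( !std::is_class<int&>::value,   "reference type, not a class" );
static_assert(  std::is_class<std::string>::value, "a class, and also a type" );
```

So within C++ itself, "class" names a strict subset of "type".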
 
But I suspect that Tim has now read at least one or two of the articles
by Barbara Liskov that he cited but didn't then read (and that it turned
out that I could prove that I cited and had read some five years ago,
though unprovably, I think I first read them in the mid-1990s), and
perhaps due to my remarks about his use of the term "subtype", has now
gained a better appreciation of the shortcomings of the C++ type system.
 
For example, there is the relationship between mathematical "integer"
and "rational", where every integer value is-a rational value, but where
there are more rationals... One way to address that in modern C++ is via
implicit conversion, instead of class inheritance. And we do that as a
matter of course for smart pointers, but I'm not entirely sure of the
connection, how this really fits in an academic view of things.
 
However, I think it's possible that what Tim writes here is not
/relevant/ to the original issue of the OP.
 
 
Cheers!,
 
- Alf
me <crisdunbar@gmail.com>: Jun 22 10:07AM -0700

On Thursday, June 22, 2017 at 9:56:02 AM UTC-7, Alf P. Steinbach wrote:
<big snip>
 
> For example, there is the relationship between mathematical "integer"
> and "rational", where every integer value is-a rational value, but where
> there are more rationals...
<more snip>
 
Well, no (to the last clause): there are exactly as many rationals as there are integers.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 22 07:23PM +0200

On 22-Jun-17 7:07 PM, me wrote:
>> there are more rationals...
> <more snip>
 
> Well, no (to the last clause): there are exactly as many rationals as there are integers.
 
Just add "in any given range".
 
Comparing infinities is not of much practical advantage in programming. :)
 
What's at issue here is that a class derivation such as
 
struct Rational{ int a; int b; }; // Represents a/b.
struct Integer: Rational {}; // Class invariant: b == 1 always
 
where `Integer` is a /restriction/ on values, is not a practical
proposition.
 
It can however be implemented as
 
struct Integer { int value; };
 
struct Rational
{
    Integer a;
    Integer b;
 
    operator Integer() const    // a conversion operator has no return type
    { if( b.value != 1 ) throw "Moo hah"; return a; }
 
    // More ops, in particular arithmetic, plus:
 
    Rational( Integer const value )
        : a{ value }
        , b{ 1 }
    {}
};
 
And one in my view important feature of the latter design is that there
is no knowledge of `Rational` in the `Integer` class. One can devise an
unbounded number of classes, /generalizations/, like `Rational`. It
would not be practical to directly add foreknowledge of them all in
`Integer`, although perhaps some indirection could be applied, IDK.
 
In practice I've mostly just met this problem with `enum` generalizations.
 
E.g., ordinary playing card, versus the type that also admits joker cards.
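A rough sketch of that `enum` case, along the same lines as the `Rational`/`Integer` design above (the names `Card` and `CardOrJoker` are mine): the generalization knows about the restricted type, widening is implicit, and narrowing back can fail at run time.

```cpp
#include <stdexcept>

enum class Card { ace, two, three, jack, queen, king };   // abbreviated for illustration

// The generalization knows about the restricted type, not vice versa:
class CardOrJoker
{
    bool    is_joker_;
    Card    card_;      // meaningful only when not a joker

public:
    CardOrJoker( Card const c ) : is_joker_( false ), card_( c ) {}  // every Card is-a CardOrJoker

    static CardOrJoker joker() { return CardOrJoker( true ); }

    operator Card() const       // narrowing can fail at run time
    {
        if( is_joker_ ) { throw std::runtime_error( "a joker is not a plain card" ); }
        return card_;
    }

private:
    explicit CardOrJoker( bool ) : is_joker_( true ), card_( Card::ace ) {}
};
```

Widening, as in `CardOrJoker c = Card::king;`, always succeeds; converting a joker back to `Card` throws.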
 
 
Cheers!,
 
- Alf
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 22 07:18PM +0100

On 22/06/2017 17:55, Alf P. Steinbach wrote:
>> foo o = 42; // o is the name of an OBJECT of class TYPE 'foo'.
 
> I think Tim means that there are other things that also are types, so
> that "class" is a much more specialized, less general term than "type".
 
So what if other things are also types? Doesn't change the fact that
classes are types.
 
Perhaps Tim's issue is the way the language handles different categories
of types in different ways? Even if we view this as a shortcoming in the
language (I don't) then again it doesn't change the fact that classes
are types.
 
If you want to use a useless programming language like Haskell so you
can feel all warm and fuzzy with your "types" that fulfil some strict
ivory tower definition of what a "type" is then use it and fuck off from
this newsgroup with your complaining and nonsense noise.
 
/Flibble
me <crisdunbar@gmail.com>: Jun 22 12:43PM -0700

On Thursday, June 22, 2017 at 10:24:02 AM UTC-7, Alf P. Steinbach wrote:
> > <more snip>
 
> > Well, no (to the last clause): there are exactly as many rationals as there are integers.
 
> Just add "in any given range".
 
<snip>
 
Not if the "given range" is (0, inf) =)
 
Sorry, couldn't resist. Pedantry is /de rigueur/ in clc.
 
<Looks up at message header>
 
Ooops, never mind, I'll get my coat.
Ian Collins <ian-news@hotmail.com>: Jun 23 08:38AM +1200

On 06/23/17 04:55 AM, Alf P. Steinbach wrote:
 
> For example, even in what C++ supports, as you show, we have `enum`
> types, and built-in types, and pointer and reference types, none of
> which are classes.
 
The language itself acknowledged class as a specialised kind of type when
"template <typename T>" replaced "template <class T>" as the standard
form. C++17 goes one step further with template template parameters.
 
--
Ian
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 22 09:50PM +0100

On Thu, 22 Jun 2017 12:43:11 -0700 (PDT)
 
> Sorry, couldn't resist. Pedantry is /de rigueur/ in clc.
 
> <Looks up at message header>
 
> Ooops, never mind, I'll get my coat.
 
No. Georg Cantor proved a long time ago that there are different sizes
of infinity, and inequalities of infinity, and used integers and reals
in the proof (OK, reals not rationals, but the same applies).
Gareth Owen <gwowen@gmail.com>: Jun 22 10:13PM +0100


> No. Georg Cantor proved a long time ago that there are different sizes
> of infinity, and inequalities of infinity, and used integers and reals
> in the proof (OK, reals not rationals, but the same applies).
 
There are different sizes of infinities, but the set of integers and
the set of rationals have the same infinite cardinality
(the smallest - they are both countably infinite).
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 22 10:26PM +0100

On Thu, 22 Jun 2017 22:13:30 +0100
 
> There are different sizes of infinities, but the set of integers and
> the set of rationals have the same infinite cardinality
> (the smallest - they are both countably infinite).
 
Are you agreeing with me, disagreeing with me, or adding verisimilitude
and corroborative detail?
 
As I understand it the cardinality of reals is at least as large as
that of naturals (integers), the cardinality of naturals being the
minimum infinite cardinality.
 
But I don't mind being wrong. I am reminded that engineers defer only
to physicists, physicists defer only to mathematicians, and
mathematicians defer only to God.
me <crisdunbar@gmail.com>: Jun 22 02:47PM -0700

On Thursday, June 22, 2017 at 2:27:01 PM UTC-7, Chris Vine wrote:
 
> But I don't mind being wrong. I am reminded that engineers defer only
> to physicists, physicists defer only to mathematicians, and
> mathematicians defer only to God.
 
You're correct in that the cardinality of the reals is greater than that
of the naturals (which are of the smallest infinite cardinality), but,
perhaps curiously, sets like the rationals and algebraics are also
of the same cardinality as the naturals, even though they are "dense"
like the reals.
 
So there are exactly the same number of rationals as of integers or of
naturals, but there are _more_ reals.
Chris Vine <chris@cvine--nospam--.freeserve.co.uk>: Jun 22 11:07PM +0100

On Thu, 22 Jun 2017 14:47:34 -0700 (PDT)
> though they are "dense" like the reals.
 
> So there are exactly the same number of rationals as of integers or of
> naturals, but there are _more_ reals.
 
Thank you, that is a curiosity. This presumably arises from the fact
that rationals are expressible by two integers (as a fraction), two
being a finite number, but it is still somewhat counterintuitive.
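That intuition can be made concrete. A minimal sketch of the usual diagonal enumeration (`nth_fraction` is my own name): walking the diagonals of the (numerator, denominator) grid reaches every positive fraction p/q at some finite step, which is exactly what countability means; repetitions such as 1/1 and 2/2 are harmless for the counting argument.

```cpp
#include <utility>

// Step n = 0, 1, 2, ... visits (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), ...
// Diagonal d holds the d + 1 pairs whose coordinates sum to d + 2.
std::pair<int, int> nth_fraction( int n )
{
    int diag = 0;
    while( n > diag ) { n -= diag + 1; ++diag; }  // locate the diagonal containing step n
    return { n + 1, diag - n + 1 };               // { numerator, denominator }
}
```

Every pair of positive integers, and hence every positive rational, gets an index this way.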
Tim Rentsch <txr@alumni.caltech.edu>: Jun 22 12:07PM -0700

> In your examples about "old-timers" and young revolutionaries the
> matter was whether to optimize code by hand or not, but nonetheless
> both choices had the precondition to be correct code.
 
Let's explore these comments a bit. We would like to compare an
iterative list reversal function and a recursive list reversal
function. To simplify the discussion, let's stipulate that the two
entry points have the same interface (which is easy to arrange),
that function bodies use only parameter variables and local
variables (ie, such as would be put in registers or in the local
stack frame), and that they are each independently compiled in a
separate translation unit (ie, so at least one stack frame is
needed in either case, rather than functions being expanded
inline). What can we say about what behavior we might observe?
 
First, what happens if they are invoked with enough stack space
(without saying how much stack space is needed)?
 
Iterative: produces correct result
Recursive: produces correct result
 
Second, what happens if they are invoked with not enough stack
space (again without saying how much stack space is needed)?
 
Iterative: fails
Recursive: fails
 
Now we would like to inquire what range of values the language
tolerates. Does the language specification allow each case
to be compiled to take only one stack frame?
 
Iterative: yes
Recursive: yes
 
So how about the other end? Does the language specification
allow each case to be compiled to take O(n) stack frames?
 
Iterative: yes
Recursive: yes
 
I hope these results are not surprising. As far as the abstract
machine is concerned, the semantics in the two cases are
identical. Under the rules laid out in the language standard,
they are therefore completely interchangeable. In fact, a
perverse implementation could compile an iterative version so it
would take O(n) stack space, and a recursive version so it would
take O(1) stack space. As far as correctness goes, the two
approaches have identical semantics, so either both are correct
or neither is. The issue here is not one of correctness in the
language but a practical consideration related to quality of
implementation. Do you see what I mean when I say that?
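For concreteness, here is a minimal sketch of the two versions under discussion (the names are mine, not from the thread). Both have identical abstract-machine semantics; whether the recursive one runs in O(1) or O(n) stack frames is purely a quality-of-implementation matter, namely tail-call handling.

```cpp
struct Node { int value; Node* next; };

// Iterative reversal: one stack frame regardless of list length.
Node* reverse_iter( Node* head )
{
    Node* result = nullptr;
    while( head != nullptr )
    {
        Node* const rest = head->next;
        head->next = result;
        result = head;
        head = rest;
    }
    return result;
}

// Tail-recursive reversal: same abstract semantics; the recursive
// call is in tail position, so it is eligible for tail-call elimination.
Node* reverse_rec( Node* head, Node* result = nullptr )
{
    if( head == nullptr ) { return result; }
    Node* const rest = head->next;
    head->next = result;
    return reverse_rec( rest, head );
}
```

A compiler that performs the elimination compiles `reverse_rec` to a loop; one that does not makes it O(n) in stack space, with no difference visible to the abstract machine.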
 
(Incidentally, in response to your comment about not understanding
what optimization is - my education in formal semantics goes back
more than 40 years, and I have read the relevant sections in both
the C standard and the C++ standard fairly closely, and more than
once each. So I think it's fair to say my understanding of what
optimization is is pretty good in this particular context.)
 
 
> In the case at hand, blind use of tail recursion can result in stack
> overflow, which is just broken code.
 
Sure, but I'm not advocating blind use of tail recursion. In
fact very much the opposite - I am advocating /informed/ use of
tail recursion, including automated static checks to ensure
things don't get out of hand.
 
 
> Actually I /do/ use recursion and tail recursion, but, when I do, I
> ensure that stack overflow cannot occur - typically this means
> ensuring a proper limit in stack usage and recursion depth.
 
When I write simple tail-recursive functions like the one shown
upthread, my preferred method is an automated scan of the
generated assembly to verify there are no recursive calls. It's
simple to use (one extra line in the makefile), and very effective
at weeding out bad cases. More complicated algorithms (eg binary
trees) need more sophisticated analyses, but IME a simple static
scan is very good as a first line of defense - ie, most uses of
recursion can and should be written so they compile into assembly
that does not recurse.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Jun 22 10:00PM +0200

On 22-Jun-17 9:07 PM, Tim Rentsch wrote:
 
> So how about the other end? Does the language specification
> allow each case to be compiled to take O(n) stack frames?
 
> Iterative: yes
 
Nope, that's not a valid concern.
 
As an example, the C++ standard formally allows a 1GB `bool` type, so
that a program could in principle, with some given compiler, be guaranteed
to crash on the first use of a `bool` as an automatic variable. That the
standard doesn't explicitly forbid it doesn't mean that such 1GB `bool`
can occur in practice: no sane person would use that compiler. Likewise,
that the standard doesn't explicitly forbid leaking memory in each
iteration of a loop, doesn't mean that it can ever occur in practice.
 
 
> Recursive: yes
 
Yes.
 
 
> I hope these results are not surprising.
 
They are, in a "how could he believe /that/" way.
 
 
- Alf
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Jun 22 09:20PM +0100

On 22/06/2017 20:07, Tim Rentsch wrote:
 
> Iterative: yes
> Recursive: yes
 
> I hope these results are not surprising. As far as the abstract
 
Like a complete idiot you have totally missed the point:
 
Iterative: FIXED O(1) STACK USAGE
Recursive: UNBOUNDED O(n) STACK USAGE
 
/Flibble
"Chris M. Thomasson" <invalid@invalid.invalid>: Jun 22 01:21PM -0700

On 6/22/2017 1:20 PM, Mr Flibble wrote:
 
> Like a complete idiot you have totally missed the point:
 
> Iterative: FIXED O(1) STACK USAGE
> Recursive: UNBOUNDED O(n) STACK USAGE
 
Fwiw, I have accidentally blown the stack using recursive formulas
before. The pure iterative form does not do this.
cross@spitfire.i.gajendra.net (Dan Cross): Jun 22 09:06PM

In article <kfn8tkj6dul.fsf@x-alumni2.alumni.caltech.edu>,
>or neither is. The issue here is not one of correctness in the
>language but a practical consideration related to quality of
>implementation. Do you see what I mean when I say that?
 
Whether one finds these results surprising or not depends on what
one is supposed to find surprising. In this case, it is not surprising
that the standard permits O(n) behavior with an iterative algorithm.
It is possible.
 
But it is not probable, and the likelihood that a real (as in serious,
not a proof-of-concept or joke) implementation would do something so
silly is 0. For all intents and purposes a programmer may reasonably
assume that an iterative algorithm for something like list reversal
(such as the version I posted earlier; modulo the one little typo I
put in that no one seems to have pointed out [it is still correct]...)
would run in O(1) space. There is some academic truth to the argument
that this may not be universally guaranteed, but that is, frankly, an
obtuse argument.
 
On the other hand, tail recursion is inherently unsafe in a language
like C++, where non-trivial side-effects are permitted -- indeed,
encouraged -- in object destructors and objects have block scope. In
languages like Haskell, Scheme, SML and OCaml it is a wonderful
technique; but in C++ a change to an unrelated class can make an
algorithm that had been tail-recursive be no longer so by adding a
hidden destructor call in the recursive invocation, and using such
a function then becomes O(n) in memory usage.
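A minimal sketch of that failure mode (`Guard` and the counter are hypothetical stand-ins for a real side-effecting destructor): the call in tail position stops being a true tail call the moment a block-scope object with a nontrivial destructor appears, because the destructor must run after the callee returns.

```cpp
struct Node { int value; Node* next; };

int destructions = 0;   // stand-in for a real side effect, e.g. flushing a log

struct Guard            // imagine an unrelated class that later grows this destructor
{
    ~Guard() { ++destructions; }
};

// Looks tail recursive, but is not: `guard` must be destroyed *after*
// the recursive call returns, so each level's stack frame stays live.
int length( Node const* p, int acc = 0 )
{
    if( p == nullptr ) { return acc; }
    Guard guard;
    return length( p->next, acc + 1 );
}
```

Remove `Guard guard;` and the call is again a genuine tail call; nothing at the call site warns you when such a change happens in another translation unit.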
 
>fact very much the opposite - I am advocating /informed/ use of
>tail recursion, including automated static checks to ensure
>things don't get out of hand.
 
I think the issue here is that techniques like tail recursion, where
the depth is not known a priori, are in general inherently unsafe
optimizations to rely on in languages like C++. Because of some of
the fundamentals of the language's design, what looks like a perfectly
valid tail call may not be a tail call. Put another way, I would argue
that C++ is a language that does not *want* to be programmed that way.
 
>scan is very good as a first line of defense - ie, most uses of
>recursion can and should be written so they compile into assembly
>that does not recurse.
 
Static analysis of the compiler output, however, doesn't prevent you
from regressing after a change. Unless you want to check after
every change that may affect the tail-recursive function to make sure
that the compiler could figure out that it was still tail recursive,
that is -- and that seems like more effort than it's worth.
 
In this specific case, the iterative algorithm is just as easy to
understand (if not easier for programmers who aren't familiar with
tail recursive function optimization -- let alone the hidden pitfalls
presented by implicit destructor calls, possibly coupled with
temporary objects being created through operator overloading, etc).
 
There is much to be said for code that is properly tail recursive
and for programming in that style, but this example is not it.
 
- Dan C.
Ian Collins <ian-news@hotmail.com>: Jun 23 08:11AM +1200

On 06/22/17 04:07 AM, Alf P. Steinbach wrote:
 
> auto operator<<( ostream& stream, S const& o )
> -> ostream&
> { return stream << "{" << o.name << ", " << o.score << "}"; }
 
That is so not idiomatic it made my eyes bleed!
 
--
Ian
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
