Thursday, April 19, 2018

Digest for comp.lang.c++@googlegroups.com - 25 updates in 7 topics

Lynn McGuire <lynnmcguire5@gmail.com>: Apr 18 07:58PM -0500

"The C++ committee has taken off its ball and chain"

http://shape-of-code.coding-guidelines.com/2018/04/14/the-c-committee-has-taken-off-its-ball-and-chain/
 
"The Committee should be willing to consider the design / quality of
proposals even if they may cause a change in behavior or failure to
compile for existing code."
 
Interesting only if C++ can generate faster code.
 
Hat tip to:

https://www.codeproject.com/script/Mailouts/View.aspx?mlid=13562&_z=1988477
 
Lynn
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 07:08PM -0700


> "The Committee should be willing to consider the design / quality of
> proposals even if they may cause a change in behavior or failure to
> compile for existing code."
 
The only thing I find surprising is that it has taken them so
long to say this out loud.
 
Thank you for posting the link.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 19 09:03AM

On Thu, 2018-04-19, Lynn McGuire wrote:
 
> "The Committee should be willing to consider the design / quality of
> proposals even if they may cause a change in behavior or failure to
> compile for existing code."
 
There's not a lot more detail in the article, though; mostly
historical background, and hints about breaking more C compatibility.
Any more thoughts about what this would mean in practice?
 
> Interesting only if C++ can generate faster code.
 
Why? Fast code is important, but it's not the only characteristic of
a language.
 
Personally I like conservative languages, like C++. "Exciting new
features" don't excite me -- or I'd be cargo-culting the
JVM-language-of-the-week instead.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 02:57AM -0700

>> proposals even if they may cause a change in behavior or failure to
>> compile for existing code."
 
> Any more thoughts about what this would mean in practice?
 
I'd like to think it should be taken at face value: The
committee should be willing to /consider/ such proposals,
ie, not just reject them out of hand. The bar for
/accepting/ said proposals could, and ideally would, be
quite a bit higher, but it would be good to at least look
at them.
Paavo Helde <myfirstname@osa.pri.ee>: Apr 19 02:16PM +0300

On 19.04.2018 12:57, Tim Rentsch wrote:
 
>> Any more thoughts about what this would mean in practice?
 
> I'd like to think it should be taken at face value: The
> committee should be willing to /consider/ such proposals,
 
They have done this before; this is nothing new. "auto" and ">>" are the
first examples of considered *and* accepted changes that pop to mind.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 06:39AM -0700


> They have done this before, this is nothing new. "auto" and ">>" are
> the first examples of considered *and* accepted changes which pop to
> mind.
 
Like I said, now they're being open about it.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 11:29PM -0700

> for dictators has not, and undoubtedly never will be eternal, bland,
> and judicious. Why is linguistic so debauched to mastication? The
> response to this question is that precise is pallid.
 
We have a budding William Faulkner in our midst.
 
If you didn't understand what I said you could have just
asked. I wasn't trying to be obscure.
 
>> not refer to the kind of polymorphism involved but to how it is
>> implemented.
 
> No, it refers to how it /can/ always be implemented.
 
Not so. The kind of overloaded functions that C++ has can always
be resolved statically, but that is not always the case in other
languages. Postscript, for example, has overloaded operators in
the same sense of the word overloading (ie, they depend on the
types of the arguments), but the resolution must be dynamic
because Postscript has dynamic typing. Postscript's overloaded
operators are still ad hoc polymorphism, but the resolution is
dynamic rather than static.
 
> That is a constraint that completely defines what the term refers
> to; it's a self-defining term.
 
I have to disagree with this statement. The word "static" does
not describe what kind of polymorphism is present but something
about how the construct is implemented. That could mean one
thing in C++, and something else in another language. Contrast
this with the terms ad hoc polymorphism, parametric polymorphism,
or subtype polymorphism. In each case we know what kind of
polymorphism is being referred to, without having to know about
how it is implemented. For example, template specialization is a
kind of ad hoc polymorphism. The term "static polymorphism" is
both too specific (along one axis) and not specific enough (along
a different axis). Probably most important, it means different
things in different programming languages.
 
> As opposed to the ad hoc "ad hoc" academic nonsense.
 
It's actually quite descriptive for people who know what "ad hoc"
means. If someone doesn't know they can look it up - it is used
in its regular dictionary sense of the phrase. The key point
though is that "ad hoc polymorphism" is a well-established term,
introduced at the same time that "polymorphism" itself was.
Rather than use established terminology, the C++ community
has adopted terminology that is less descriptive and also
more C++ centric.
 
> Norwegian "amanuensis" (not sure what that's in English, assistant
> professor?). And I am not affronted by my own use of the term,
> it's just realistic, to the point.
 
So, because you spent some time working as an academic lecturer,
it's okay for you to use "academic" to slur the term? That is
in effect what you are suggesting.
 
 
>>> That doesn't support your explanation. ;-)
 
>> I think the above exchange illustrates the point rather well.
 
> You do?
 
Yes. I am careful with language. My background in Computer
Science is both broad and deep. The term "ad hoc polymorphism"
has an excellent pedigree, going back to a seminal work by
Christopher Strachey. It's an apt phrase for this situation.
I invite anyone who is interested to compare my comments and
yours, and draw their own conclusions.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 02:05AM -0700

>> term in use with the desired meaning.
 
> The word "dynamic" is well established in C++. We have had dynamic_cast
> since 1998 for example in exactly the same context.
 
I am giving just brief responses, collected here all in a bunch.
 
Modules are like categories in that they have abstract types and
functions, analogous to "objects" and "arrows" for categories.
Functors map modules to modules, or categories to categories. If
you want to say that's a stretch or a loose analogy, okay, I'll
go along with that. But a C++ object is nothing like a category:
no analog to category "objects", and hence no analog to category
"arrows". Furthermore an important property of category functors
is that they are structure preserving (which is similarly true of
functors on modules). A C++ object has no structure to preserve.
 
Yes, "concept" is already in the latest draft, I knew that. That
doesn't make it an apt term. Whatever it is (behavior set? trait
set? blueprint? floorplan? prototype? archetype?) that is defined
in a C++ program as a "concept", it certainly is not an /idea/,
or /thought/, or /notion/. The usage just doesn't match how
"concept" is defined.
 
Re: "polymorphism" and "subtype polymorphism". The term "subtype
polymorphism" is relatively new. By contrast, "polymorphism", "ad
hoc polymorphism", and "parametric polymorphism" are more than 50
years old, introduced in an important work by Christopher Strachey
(which also introduced the terms L value, R value, first class,
manifest (ie, at compile time), among others). During those first
roughly 20 to 30 years, the unadorned "polymorphism" was used for
things that didn't fall into the other kinds he identified. And
that usage is still common today in the larger community. The
work on subtyping didn't take off until the 1980s, especially the
talk by Barbara Liskov in 1987.
 
If you want to say templates provide something like parametric
polymorphism, I'll go along with that. (As a side note, template
specialization is a form of ad hoc polymorphism.)
 
Re: "dynamic". Saying the word "dynamic" was well established
isn't really a good reason to invent an unneeded term by combining
it with "polymorphism". There was already a perfectly good term,
"subtype polymorphism", which also does a better job of describing
the kind of polymorphism involved, rather than focus on how it is
implemented, which seems less important.
 
> futile because arguing about semantics quickly becomes pointless. In
> summary, your starting argument was that C++ chooses its terms more
> poorly than other programming languages. I disagree.
 
In point of fact my starting point was "The C++ community has a
history of using non-standard terminology or using established
terminology in different ways." I believe a historical examination
would bear that out. In some cases and in some ways the different
terms are just fine; in others, not so good (both IMO, and I
understand other people have other views). But I believe my
original statement is objectively true, even if it isn't one that
is easy to verify.
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Apr 19 11:38AM +0200

On 19.04.2018 08:29, Tim Rentsch wrote:
 
>>> I think the above exchange illustrates the point rather well.
 
>> You do?
 
> Yes. I am careful with language.
 
Then something else must be at play.
 
 
Cheers!,
 
- Alf
"Öö Tiib" <ootiib@hot.ee>: Apr 19 06:32AM -0700

On Thursday, 19 April 2018 09:29:24 UTC+3, Tim Rentsch wrote:
> both too specific (along one axis) and not specific enough (along
> a different axis). Probably most important, it means different
> things in different programming languages.
 
I think that example illustrates well the core issue of your
disagreement. I see that both of you talk correctly about
somewhat overlapping terms.
 
Note that "template specialization" is also a term usually used
in a C++-specific manner. When we talk about programming in
general we should say something like "specialization of a
generic type". Some of that is resolved dynamically in several
other programming languages (ones that have generics or
templates but also have dynamic types).
 
In general programming we can say whether something is "ad hoc
polymorphism", "parametric polymorphism" or "subtype
polymorphism". We can't say, for programming in general,
if (and to what extent) these kinds of polymorphism are
static or dynamic, but that also leaves the reasons behind
those choices unclear. It feels like a difference in semantic
constraints and look and feel; basically a matter of taste.
 
Life has, however, forced the C++ mindset to care about the
concrete effects of such things and to care a lot less about
semantic differences. We use C++ when we need to squeeze
out maximum run-time efficiency, and C++ allows us to do
that by choosing between the different polymorphisms carefully.
From that viewpoint it is most important to what extent
things are doable at compile time and what is left for
run time. Because of that focus on actual reality we
talk about "static polymorphism" and "dynamic polymorphism"
in C++.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 08:41PM -0700


> In general, learning newer C++ features means one has to spend more
> time finding 'bugs' than solving realistic problems (If one studies
> or practices C++ deeply enough)
 
This answer does have the nice property that it is guaranteed to
have the same semantics as the original function. None of the
other methods suggested do.
 
What I think is probably more relevant is the expected context of
application. I suspect this function is meant to be used only in
a rather narrow context of application. If that is so, I think
it's worth a look at the context to see if it can be jiggered
around so the functionality desired can be provided more easily
or more cleanly. For example, if this is being called in a loop,
always 8 bytes at a time starting from the beginning of an array,
you might be able to change the buffer to one having uint64_t
elements (but filled using char *'s, which is allowed). Then you
can use the uint64_t elements directly (assuming I understand
what it is you want to do).
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 02:14AM -0700


> The correct version will be just as fast as your incorrect version, if
> compiled with -O or higher and your version actually works.
 
> Your view is deranged.
 
Are you the same Chris Vine who posted message quoted below?
Or am I confused?
 
# Unfortunately there are a number of people who seem equally addicted
# to replying to ****'s posts, thus giving him a further chance to spam
# this newsgroup. Replying to on topic posts seems just as bad from
# this point of view as responding to his off topic posts.
bartc <bc@freeuk.com>: Apr 19 10:43AM +0100

On 18/04/2018 21:02, Rick C. Hodgin wrote:
 
> which does not correlate something like a cast pointer copying 8
> bytes, compared to a memcpy() which copies 8 bytes, is the insane
> component of that discussion.
 
[Haven't read the rest of the thread.]
 
I would hope that the assignment and memcpy do different things here:
 
    int64 a;
    float64 x = 1.0;

    a = x;
    memcpy(&a, &x, 8);
 
--
bartc
Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 19 10:08AM

On Tue, 2018-04-17, David Brown wrote:
>> (!(data[6]&'\x80'))&&
>> (!(data[7]&'\x80'));
>> };
 
This one might be faster if you avoid &&, with its short-circuiting
properties. Maybe something like:
 
bool is_valid(const char* data)
{
    auto valid = [data](unsigned n) { ... };
    return valid(0) + valid(1) + ... valid(7) == 8;
}
 
 
> Endian issues are not going to be a problem. C and C++ allow for a lot
> of flexibility in the representation of integer types, but uint64_t (and
> similar types) are far stricter.
 
But it would have been an issue if the pattern wasn't "same value in
all bytes".
 
> But alignment /will/ be a problem on some platforms.
 
Yes.
 
Personally I recommend staying away from reinterpret_cast (and of
course the old-style C cast). It's too easy to code yourself into
endianness, alignment and aliasing problems.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Apr 19 03:10AM -0700

On Thursday, April 19, 2018 at 5:43:30 AM UTC-4, bartc wrote:
> float64 x = 1.0;
 
> a = x;
> memcpy(&a, &x, 8);
 
The example in this thread is more like:
 
a = *(uint64_t*)&x;
 
In that case you do want the underlying bits exactly as they are.
 
--
Rick C. Hodgin
Jorgen Grahn <grahn+nntp@snipabacken.se>: Apr 19 10:23AM

On Wed, 2018-04-18, wyniijj@gmail.com wrote:
...
 
> g++ test.cpp -O2
> Case1 (all valid) : avg=1.8934 faster
> Case2 (8 valid, 1 invalid): avg=1.0723 faster
...
 
> [Conclusion] The memcpy implementation should be compiled with -O2
> to gain an advantage.
 
It is pointless to measure the performance of code built with
optimization disabled. The advice you got from people probably
assumed -O2 or better optimization.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
wyniijj@gmail.com: Apr 19 04:40AM -0700

Tim Rentsch wrote on Thursday, April 19, 2018 at 11:42:01 AM UTC+8:
 
> This answer does have the nice property that it is guaranteed to
> have the same semantics as the original function. None of the
> other methods suggested do.
 
Yes, it is. But I want it faster, so is_valid2(..) is in question.
"Rick C. Hodgin" <rick.c.hodgin@gmail.com>: Apr 19 09:24AM -0400

> ...I want it faster, so is_valid2(..) is in question.
 
From what I understand, you need to use this form of the function
to be fully C++ compliant without any undefined behavior:
 
bool is_valid2(const char* data)
{
    uint64_t d;
    memcpy(&d, data, 8);
    return !(d & 0x8080808080808080ULL);
}
 
The use of memcpy() ensures the value in d is valid in C++.
 
--
Rick C. Hodgin
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 06:32AM -0700

>> have the same semantics as the original function. None of the
>> other methods suggested do.
 
> Yes, it is. But I want it faster, so is_valid2(..) is in question.
 
Like I said, looking at the call sites with an eye towards
making external adjustments seems like a good way to get
what you want in this case.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 06:24AM -0700


> Yes. Also, "crossing too many levels of detail" while probably being
> bad grammar, gives a picture which fits very well with how I think
> about it.
 
What about the phrase "crossing too many levels of detail" leads
you to think it might be bad grammar?
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 09:30PM -0700

>> environment, 24% of functions are five lines or fewer.
 
> It certainly does appear as though minimising function length is some
> sort of goal,
 
Second trolling trick: quoting selectively to give a false
impression.
 
> and you seem pleased that a quarter of your functions
> are not over five lines.
 
That's your imagination, and willful misunderstanding. Quite
obviously I would never brag about the larger numbers that
you deliberately left out. My concern is not to insist on
short functions but to avoid long functions. But then you
knew that all along.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 19 02:16AM -0700

Stop replying to trolls.
gazelle@shell.xmission.com (Kenny McCormack): Apr 19 10:54AM

In article <kfnwox35zf2.fsf@x-alumni2.alumni.caltech.edu>,
>Stop replying to trolls.
 
OK. This will be my last one.
 
--
Christianity is not a religion.
 
- Rick C Hodgin -
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 09:11PM -0700


> Please stop replying to <redacted>, he's crazy, he must be locked in a
> psychiatric hospital, just add him to your kill file or do not answer
> ANY, I repeat, ANY post from him.
 
I second this motion.
Tim Rentsch <txr@alumni.caltech.edu>: Apr 18 09:09PM -0700


> My time with FORTRAN dates back a very long time ago (fortran 77 was
> the one I learned), so I'm a bit rusty, but I'm positive I was taught
> it was call by reference.
 
Yes, I expect that's right for fortran 77, and probably later
fortrans also. (It wasn't my intention to contradict you,
just to add to the conversation.)
 
I remember being taught (this was before 1977) that Fortran uses
call by value-result. I don't have as much confidence in my memory
being reliable as you seem to have in yours, hence "pretty sure". A
web search turned up these pages:
 
https://www.d.umn.edu/~rmaclin/cs5641/fall2004/Notes/Lecture13.pdf
http://pages.cs.wisc.edu/~fischer/cs536.s08/course.hold/html/NOTES/9.PARAMETER-PASSING.html
 
which say that some Fortrans used value-result (I think Fortran IV
is mentioned specifically on one of them). And of course arrays
are different (I didn't bother even to check what was said about
arrays).
