Saturday, August 31, 2019

Digest for comp.lang.c++@googlegroups.com - 13 updates in 2 topics

woodbrian77@gmail.com: Aug 31 03:55PM -0700

Shalom
 
I have this line
static_assert(::std::numeric_limits<float>::is_iec559,"Only IEEE754 supported");
 
in my library: https://github.com/Ebenezer-group/onwards.

I wrote that line when I was using 2011 C++ and the comment
was required. Now I require 2017 C++ and am trying to decide between:
 
a. remove the comment
b. reduce the comment to just "IEEE754", or
c. leave it as it is.
 
Thank you.
 
 
Brian
Ebenezer Enterprises - Enjoying programming again.
http://webEbenezer.net
Anton Shepelev <anton.txt@gmail.com>: Aug 31 03:05AM +0300

Bonita Montero to already5chosen:
 
> > EH is bad not because it's slow or costly, but because
> > of its messy semantics.
 
> What's messy with the semantics?
 
By looking at any piece of contiguous code you can't say
where its execution is going to be suddenly interrupted and
jump to another unobvious place. Code with exceptions is not
local because its execution depends on any try blocks
anywhere up the call stack. Code with exceptions is not
transparent, because it hides the error-handling part of the
algorithm, and incomplete information is akin to a lie.
 
--
() ascii ribbon campaign -- against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
Ian Collins <ian-news@hotmail.com>: Aug 31 12:18PM +1200

On 31/08/2019 12:05, Anton Shepelev wrote:
> anywhere up the call stack. Code with exceptions is not
> transparent, because it hides the error-handling part of the
> algorithm, and incomplete information is akin to a lie.
 
You could equally say that code with exceptions emphasises the error
handling by concentrating it in one place. Often a process with many
steps either works or fails as a whole, so handling the errors in one
place makes sense. Throwing exceptions also makes the error handling
easier to spot: you just look for the throws.
 
--
Ian.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 31 11:43AM +0200


> By looking at any piece of contiguous code you can't say
> where its execution is going to be suddenly interrupted
> and jump to another unobvious place.
 
The code that throws an exception doesn't need to know where
the exception is caught.
"Öö Tiib" <ootiib@hot.ee>: Aug 31 04:50AM -0700

On Friday, 30 August 2019 09:13:33 UTC+3, Mel wrote:
 
> That's Rust's advantage. Returning a discriminated union is the right way
> for handling errors. Rust has syntactic sugar for that. The latest
> proposal for C++ exceptions leans toward that solution.
 
Perhaps different people think of error handling differently.
I learned programming using a BASIC interpreter in the eighties,
which had an ERROR statement for raising errors and an ON ERROR GOTO
statement for trapping them in a handler. The logic was to keep the
already messy algorithms expressed in BASIC cleaner by handling
errors in a separate place. C and C++, with no error handling in the
language and most errors just left as undefined behavior,
felt seriously inferior because of that. When Microsoft added
SEH exceptions to both C and C++ in the nineties, it felt like a good
extension.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Aug 31 01:20PM +0100

On 30/08/2019 12:08, David Brown wrote:
> when there are no errors, than other handling techniques for typical
> code on PC's". What I am /not/ happy with is "Exception handling is
> faster than other error return techniques".
 
That statement just shows us that you are clueless on the subject of
exceptions. Firstly, exceptions are not just for errors. Secondly,
manually propagating an error code up the stack, translating it to make it
suitable as it passes from one abstraction layer to the next, is both an
antiquated way of doing things and a fucktarded way of doing things. If
you like error codes so much then stick to fucking C.
 
/Flibble
 
--
"Snakes didn't evolve, instead talking snakes with legs changed into
snakes." - Rick C. Hodgin
 
"You won't burn in hell. But be nice anyway." – Ricky Gervais
 
"I see Atheists are fighting and killing each other again, over who
doesn't believe in any God the most. Oh, no..wait.. that never happens." –
Ricky Gervais
 
"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."
Paavo Helde <myfirstname@osa.pri.ee>: Aug 31 05:17PM +0300

On 31.08.2019 3:05, Anton Shepelev wrote:
 
> By looking at any piece of contiguous code you can't say
> where its execution is going to be suddenly interrupted and
> jump to another unobvous place.
 
In C++ this is a no-brainer: you don't need to know or care where the
code execution can be interrupted. This is because every allocated
resource is protected and released by RAII automatically, regardless of
whether there is an exception or not. This means using things like
std::make_unique() instead of new, etc.
 
IOW, in C++ you just code with the assumption that just about every line
can throw an exception, if not now, then in the future. There are only some
special places where one needs to care about thrown exceptions, namely
noexcept functions like destructors, swappers and the like. With proper RAII
very few classes need such functions.
 
> anywhere up the call stack. Code with exceptions is not
> transparent, because it hides the error-handling part of the
> algorithm, and incomplete information is akin to a lie.
 
This might sound nice in theory, but in reality the error handling part
cannot be part of the actual code because more often than not this code
has no idea how to react to the error. Let's say an input data file
cannot be opened by some number-crunching library. How should it handle
this error? Should it abort the whole program, or should it pop up a
dialog asking the user to correct the problem and retry? What if it runs
in a web server where there is no user to click on the dialog?
 
The error handling logic lives at an altogether different level from the
low-level code which tries to open the file, and the low-level code
cannot and should not contain any "handling" of such errors; it should
just report them to the caller. Exceptions are the most convenient way
of doing this; in particular, they can easily encapsulate much more
detailed information about the problem than some cryptic error code.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 31 04:28PM +0200

> for doing this, in particular they can easily encapsulate much more
> detailed information about the problem than just some cryptic error
> code.
 
What I don't like in C++ is that exceptions aren't usually
chained as they are in Java, which is good for debugging purposes.
In Java you throw a highly specific checked exception type at the
point where the error occurs, and the exception might get caught up
the call stack and wrapped in a more general exception type.
I.e. when a JDBC driver hits an I/O error because the socket is
dropped by the OS, a specific socket exception class is thrown,
and further up the call levels this is wrapped in a more general
JDBC error class; this is done like this: "throw new
OuterExceptionClass( inner );".
In C++ this isn't directly possible because new might throw
bad_alloc. The only way I can imagine to do this is to have
thread-local / static exception objects which are thrown when appropriate.
Manfred <noname@invalid.add>: Aug 31 06:03PM +0200

On 8/30/19 1:47 PM, Bonita Montero wrote:
[...]
>>> -collapse doesn't count.
 
>> Incorrect.
 
> No, correct.
No, incorrect.
 
>> reliability systems.
 
> When you have a resource or I/O-collapse, you don't have reliability
> anyway.
 
Correctness of behavior (which includes controlled performance) under
stress conditions (including resource exhaustion) is a critical
requirement of high reliability systems - that's what they are made for.
It appears you don't have much experience with such systems.
 
>> that can be used, without leading to any inefficiency or bloat.
 
> Almost the whole standard library uses an allocator which might
> throw bad_alloc; not using the standard library isn't really C++.
 
Incorrect as well.
 
First, one can replace the allocator in all standard library
facilities, and more generally one can use many features of C++ (e.g.
templates, generic programming, metaprogramming, constexpr (aka
compile-time processing), ...) with no need for any runtime-hungry
technology.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 31 06:21PM +0200

> stress conditions (including resource exhaustion) is a critical
> requirement of high reliability systems - that's what they are made for.
> It appears you don't have much experience with such systems.
 
Correctness is also guaranteed with exceptions. They might have
higher runtime overhead when actually thrown, but this absolutely doesn't
matter, because the resource- or I/O-collapse case isn't
performance-relevant. So I'm not incorrect.
And consider what you would do with return codes: you would evaluate
them at every call level and convert them into a special return code
appropriate for the function's level. This is also likely to perform
poorly.
 
>> Almost the whole standard library uses an allocator which might
>> throw bad_alloc; not using the standard library isn't really C++.
 
> Incorrect as well.
 
No, it's true.
 
> As a first one can replace the allocator in all standard library
> facilities, ...
 
Doing this without throwing an exception like bad_alloc from an
allocator doesn't work. This is because there is no way to signal
memory exhaustion other than by throwing an exception from the usual
container operations. Dropping the facilities that might throw
exceptions isn't really C++.
David Brown <david.brown@hesbynett.no>: Aug 31 11:39PM +0200

On 30/08/2019 13:47, Bonita Montero wrote:
>> problem - that is critical.
 
> The case when an exception is thrown isn't performance-relevant
> as this occurs when there's a resource- or I/O-collapse.
 
As seems to be the case depressingly often, you are over-generalising
from certain types of software.
 
For some software, exceptions are ruled out as a technique because they
take too long, and in particular, their timing is very hard to predict.
For real-time programming, it doesn't matter how fast the usual
situation runs. It matters that even in the worst case, you know how
long the code will take. It is a world outside your experience and
understanding, apparently, but it is a vital part of the modern world.
 
 
>> Fair enough - though convenience is also a subjective matter.
 
> That's rather not subjective here because evaluating return-codes on
> every call-level is a lot of work.
 
Again, you don't see the big picture and think that your ideas cover
everything.
 
Dealing with return codes on every call level is only a lot of work if
you have lots of call levels between identifying a problem and dealing
with it. And using exceptions is a /huge/ amount of work for some kinds
of analysis of code and possible code flows. By being explicit and
clearly limited, return code error handling is far simpler to examine
and understand, and it is far easier to test functions fully when you
don't have to take into account the possibility of an unknown number of
unknown exceptions passing through.
 
As I have said, different ways of handling errors have their pros and
cons, and there is no one way that is best for all cases.
 
>> functions compiled without knowledge of the exceptions - such as C
>> functions compiled by a C compiler.
 
> Can you read what I told above?
 
I elaborated on what you wrote.
 
>>> -collapse doesn't count.
 
>> Incorrect.
 
> No, correct.
 
I'm sorry, but you are making bald statements with apparently no
knowledge or experience in the area.
 
>> reliability systems.
 
> When you have a resource or I/O-collapse, you don't have reliability
> anyway.
 
You do understand that people use exceptions for a variety of reasons,
don't you? You do realise that people sometimes write safe and reliable
software that is designed to deal with problems in a reasonable fashion?
If "throw an exception" simply means "there's been a disaster - give
up on reliability, accept that the world has ended and it doesn't matter
how slowly we act as the program is falling apart" then why have an
exception system in the first place? Just call "abort()" and be done
with it.
 
>> that can be used, without leading to any inefficiency or bloat.
 
> Almost the whole standard library uses an allocator which might
> throw bad_alloc; not using the standard library isn't really C++.
 
That's bollocks on so many levels, it's not worth trying to explain them
to you.
David Brown <david.brown@hesbynett.no>: Aug 31 11:45PM +0200

On 31/08/2019 14:20, Mr Flibble wrote:
>> faster than other error return techniques".
 
> That statement just shows us that you are clueless on the subject of
> exceptions.
 
You are jumping in to a discussion without following the details.
 
> Firstly exceptions are not just for errors.
 
I realise that. The discussion was about error handling techniques,
including alternatives to exceptions for handling errors.
 
And of course, there is no single definition of "error" anyway - it
means different things in different contexts.
 
> manually propagating an error code up the stack, translating it to
> make it suitable for one abstraction layer to the next, is both an
> antiquated way of doing things and a fucktarded way of doing things.
 
It is an excellent way of handling things in some cases. If you had
been paying attention and read my posts before replying, you'd have seen
I have argued that a variety of techniques can be useful in different
cases - exceptions are often a good choice, but they are most certainly
not always the right choice.
 
> If
> you like error codes so much then stick to fucking C.
 
Don't be such a child.
David Brown <david.brown@hesbynett.no>: Sep 01 12:06AM +0200

On 31/08/2019 16:17, Paavo Helde wrote:
 
>> By looking at any piece of contiguous code you can't say
>> where its execution is going to be suddenly interrupted and
>> jump to another unobvous place.
 
C++ exceptions have been described as "undocumented gotos". The
description is, I think, quite fair. That does not necessarily make
them a bad thing - they can often be a good choice in coding.
 
Different types of coding need different balances, and programming is
always a matter of picking appropriate balances. Sometimes you want
automatic and implicit, sometimes you want manual and explicit. C++
offers you much more flexibility than most languages in finding balances
that work well for the code you are writing - you learn the options and
their pros and cons, and make your choices.
 
> resource is protected and released by RAII automatically, regardless of
> if there is an exception or not. This means using things like
> std::make_unique() instead of new, etc.
 
RAII to protect resources is not remotely dependent on exceptions :
 
std::optional<int> foo(int x) {
    auto a = std::make_unique<int>(x);   // released on every return path
    auto b = dosomething(*a);            // assume this returns a std::optional
    if (!b) return std::nullopt;         // early return - still no leak
    dosomethingelse(*a, *b);
    return calc(*a);
}
 
You don't need exceptions to avoid leaks and tidy up resources - RAII
gives you that regardless. Returning with error codes (or
std::optional, or new suggestions like "expected", or whatever) is just
as good from that viewpoint.

 
 
> This might sound nice in theory, but in reality the error handling part
> cannot be part of the actual code because more often than not this code
> has no idea how to react to the error.
 
And therein lies the problem, when you are concerned with being able to
be sure your code is correct (both by design, and by testing).
 
> this error? Should it abort the whole program, or should it pop up a
> dialog asking the user to correct the problem and retry? What if it runs
> in a web server where there is no user to click on the dialog?
 
Separate the concerns. Don't make a function (or class, or unit, or
whatever code size is appropriate) that reads data from a file and
crunches it. Make a function that reads data from the file. Make
another function that crunches data.
 
Or make a function that will attempt to read the data from the file and
crunch it, returning the correct result or some sort of error indicator
when a problem is found. That error indicator could be an error code, a
std::optional nullopt, a tuple field in the result - or a thrown exception.
 
But now, instead of having a function that lives in a happy, optimistic,
carefree and unrealistic world, with no sense of responsibility and an
"I don't care, it's someone else's problem" attitude, you've got a
function that is clear and complete. If there is a problem, it will
tell you - and it will do so in a way that you know about and
understand, and that you can test. Your function no longer has errors -
it has a wider scope of correct, specified, tested and documented,
behaviour. (And yes, you can use exceptions for this just as well as
other error techniques.)
 
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Friday, August 30, 2019

Digest for comp.lang.c++@googlegroups.com - 19 updates in 2 topics

Mel <mel@zzzzz.com>: Aug 30 01:37PM +0200

On Fri, 30 Aug 2019 14:08:47 +0300, Paavo Helde wrote:
> > On Fri, 30 Aug 2019 10:39:19 +0200, Bonita Montero
> >> The non-throw-case takes almost no or no performance.
 
> > That's not true. You have to install handlers every time or prepare
> > tables
 
> >> Evaluating return-codes is simply a magnitude slower.
 
> > This is especially not true.
 
> Your information is out of date for a couple of decades. The current C++
> compilers have implemented zero cost exceptions (in the no-throw path)
> already a long time ago.
 
That's simply impossible.
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 01:47PM +0200

> when they /are/ thrown. But "usually" is not "always". And for some
> situations, it is the worst case performance - the speed when there is a
> problem - that is critical.
 
The case when an exception is thrown isn't performance-relevant,
as it occurs only when there's a resource- or I/O-collapse.
 
> Fair enough - though convenience is also a subjective matter.
 
That's rather not subjective here because evaluating return-codes on
every call-level is a lot of work.
 
 
> some people like and others dislike. And they can't pass through
> functions compiled without knowledge of the exceptions - such as C
> functions compiled by a C compiler.
 
Can you read what I told above?
 
>> Even on small CPUs the performance of processing a resource- or
>> I/O-collapse doesn't count.
 
> Incorrect.
 
No, correct.
 
> Performance under the worst case is often a critical feature of high
> reliability systems.
 
When you have a resource or I/O-collapse, you don't have reliability
anyway.
 
 
>> That's true, but then you wouldn't use C++.
 
> Nonsense. There are many good reasons to use C++, and many features
> that can be used, without leading to any inefficiency or bloat.
 
Almost the whole standard library uses an allocator which might
throw bad_alloc; not using the standard library isn't really C++.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 01:48PM +0200

> That's simply impossible.
 
If I didn't know how it is possible, I would think the same.
Paavo Helde <myfirstname@osa.pri.ee>: Aug 30 03:18PM +0300

On 30.08.2019 14:37, Mel wrote:
> path)
>> already long time ago.
 
> That's simply impossible.
 
See e.g. http://www.ut.sco.com/developers/products/ehopt.pdf
 
From that paper:
 
"Most high-performance EH implementations use the commonly known
table-driven approach. The goal of this approach is to present zero
execution-time overhead, until and unless an exception is thrown."
 
The paper acknowledges some deficiencies with this goal, mainly missing
some optimization opportunities, then goes further to solve a part of them.
 
Note this paper is from 1998. The compiler technology has moved forward
meanwhile.
 
It might be that there is some minimal machine code needed for EH, like a
register load at function entry. This does not make it non-zero cost,
because the alternative is to use an error return, which would also be
at least a register load plus a branching instruction for checking it in
the caller.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 02:24PM +0200

> Note this paper is from 1998. The compiler technology
> has moved forward meanwhile.
 
That this is from 1998 doesn't mean that it is relevant to each
implementation. Win32 had structured exception handling from the
beginning with NT 3.1, and the exception-handling implementation of the
first Visual C++ version was built on top of that. So Microsoft had no
way to revert that for newer versions of VC++.
"Öö Tiib" <ootiib@hot.ee>: Aug 30 06:30AM -0700

On Friday, 30 August 2019 11:33:36 UTC+3, Mel wrote:
> > }
 
> > That does not seem to fit with what you describe.
 
> Try to do something with x...
 
But it seemingly did.
Do you mean where I need traits? Like in:
 
impl<T: Display + PartialOrd> Point<T> {
    fn cmp_display(&self) {
        if self.x >= self.y {  // here I need PartialOrd
            // here I need Display
            println!("The largest member is x = {}", self.x);
        } else {
            println!("The largest member is y = {}", self.y);
        }
    }
}
 
That seems an OK type-safety feature.
Mel <mel@zzzzz.com>: Aug 30 03:42PM +0200

On Fri, 30 Aug 2019 15:18:21 +0300, Paavo Helde wrote:
> On 30.08.2019 14:37, Mel wrote:
> > On Fri, 30 Aug 2019 14:08:47 +0300, Paavo Helde <myfirstname@osa.pri.ee>
 
> table-driven approach. The goal of this approach is to present zero
> execution-time overhead, until and unless an exception is thrown."
 
> The paper acknowledges some deficiencies with this goal, mainly missing
> some optimization opportunities, then goes further to solve a part of them.
 
> Note this paper is from 1998. The compiler technology has moved forward
> meanwhile.
 
> It might be there is some minimal machine code needed for EH like a
> register load at function entrance. This does not make it non-zero cost
> because the alternative is to use an error return, which would also be
> at least a register load plus a branching instruction for checking it in
> the caller.
 
I already said 'space or time'
 
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 03:43PM +0200

On Fri, 30 Aug 2019 14:24:24 +0200, Bonita Montero wrote:
> > Note this paper is from 1998. The compiler technology
> > has moved forward meanwhile.
 
> That this is from 1998 doesn't mean that it is relevant to each
> implementation. Win32 had structured exception handling from the
> beginning with NT 3.1. And the exception-handling-implementation of
> the first Visual C++ version was built on top of that. So Microsoft
> hadn't any choice to revert that for newer version of VC++.
 
Under Windows there is run-time cost because of that...
 
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 03:59PM +0200

On Fri, 30 Aug 2019 06:30:30 -0700 (PDT), Öö Tiib <ootiib@hot.ee> wrote:
> > > > > > Rust generics are interface based. You can't do anything with a
> > > > > > generic type unless you tell which interfaces it satisfies.
 
> > > > > Perhaps I don't understand what you mean. Can you bring an example?
> > > > > The main point of generics seems to work in Rust about like in C++
 
> > > > Not even close. Imagine that you have to cite what abstract base
 
> > > That does not seem to fit with what you describe.
 
> > Try to do something with x...
 
> But it seemingly did.
 
No, it just returned a value. Don't try to win an argument when you don't
have one...
 
> Do you mean where I need traits? Like in:
 
You always need traits, even to initialize a value...
 
> That seems Ok type-safety feature.
 
It is not a safety feature, it is how generics are implemented in Rust.
It works the same way in Haskell.
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 04:23PM +0200

>> Microsoft hadn't any choice to revert that for newer version
>> of VC++.
 
> Under Windows there is run-time cost because of that...
 
No. Structured exception handling, and the C++ EH built on top of it,
installs unwind handlers linked through the stack on each call level
that needs special unwind semantics.
With x64 Windows, SEH was redefined to be table-driven, so that both
SEH and C++ EH have almost zero overhead.
Melzzzzz <Melzzzzz@zzzzz.com>: Aug 30 02:40PM

> that needs special unwind-semantics.
> With the x64-Windows SEH was re-defined table-driven so that both SEH
> and C++-EH have an almost-zero-overhead.
 
Zero *run time* overhead at the cost of *space* overhead.
 
 
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
"Öö Tiib" <ootiib@hot.ee>: Aug 30 07:42AM -0700

On Friday, 30 August 2019 17:00:00 UTC+3, Mel wrote:
 
> > But it seemingly did.
> No it just returned value. Don't try to win argument when you don't
> have any...
 
I did not try to win the argument; I admit I have none, and I just did
not understand your argument (maybe I still don't).
 
> > Do you mean where I need traits? Like in:
 
> You always need traits even to initialize value...
 
It is Ok I suppose. Writing those "std::enable_if"s for C++<=17
templates is most tedious and ugly.
 
> > That seems Ok type-safety feature.
 
> It is not a safety feature, it is how generics are implemented in Rust.
> Same way it works in Haskell.
 
It feels similar to required upcoming concepts/constraints of C++20.
Melzzzzz <Melzzzzz@zzzzz.com>: Aug 30 02:44PM


>> It is not a safety feature, it is how generics are implemented in Rust.
>> Same way it works in Haskell.
 
> It feels similar to required upcoming concepts/constraints of C++20.
 
Well, C++ template types are structural and Rust's are nominative.
That is the whole difference.
 
--
press any key to continue or any other to quit...
U ničemu ja ne uživam kao u svom statusu INVALIDA -- Zli Zec
Na divljem zapadu i nije bilo tako puno nasilja, upravo zato jer su svi
bili naoruzani. -- Mladen Gogala
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 04:57PM +0200

>> With the x64-Windows SEH was re-defined table-driven so that both SEH
>> and C++-EH have an almost-zero-overhead.
 
> Zero *run time* overhead at the cost of *space* overhead.
 
We're talking about Windows computers and not Arduino Unos.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 07:19PM +0200

>> bili naoruzani. -- Mladen Gogala
 
> IMHO, you are barking up the wrong tree.
> EH is bad not because it's slow or costly, but because of its messy semantics.
 
What's messy with the semantics?
Juha Nieminen <nospam@thanks.invalid>: Aug 30 09:03PM

>> compilers have implemented zero cost exceptions (in the no-throw path)
>> already long time ago.
 
> That's simply impossible.
 
That's like saying that a longjmp() being zero-overhead (with respect to
all the code that's executed when no jump is done) is "impossible".
 
You can set up a long jump, and no other code that you call gets any
slower or suffers from any overhead. It's only when something triggers that
long jump that something special happens.
 
Exceptions are not identical to that, but it's not that far off. But the end
result is that if no exception is thrown, no code suffers from any overhead.
Ian Collins <ian-news@hotmail.com>: Aug 31 09:42AM +1200

On 30/08/2019 20:43, Mel wrote:
 
>> The non-throw-case takes almost no or no performance.
 
> That's not true. You have to install handlers every time or prepare
> tables for each function at program start...
 
Have you actually measured the difference? I have on several occasions,
and yes, the no-throw path with exceptions is faster (even if only
slightly) than testing error returns. It is also foolproof.
 
--
Ian.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 11:53PM +0200

> It is also foolproof..
 
It is not absolutely foolproof, because there might be exceptions that
go uncaught. That's the reason I like checked exceptions in Java
so much.
But it is magnitudes more convenient than returning and evaluating
return codes.
"Öö Tiib" <ootiib@hot.ee>: Aug 30 12:15AM -0700

On Friday, 30 August 2019 09:28:47 UTC+3, David Brown wrote:
> variant types. Perhaps he was less used to complex types and found
> "QMap<QString, QVariant>" to be simpler (in his eyes) than
> "QMap<QString, QMap<QString, int> >".
 
Yes, it may have been translated in his mind from Python's Qt support
(like PySide2.QtCore) where unneeded QVariant usage was harder to notice.
And what was needed was probably QMap<QString, QMap<int, QString> >
anyway.

Digest for comp.lang.c++@googlegroups.com - 25 updates in 2 topics

Mel <mel@zzzzz.com>: Aug 30 07:42AM +0200

On Thu, 29 Aug 2019 11:02:30 +0200, David Brown wrote:
> One thing that I gather Rust does well, that C++ has trouble with, is
> that when you do the Rust equivalent of std::move'ing something, any
> attempt to use the moved-from object is flagged by the compiler. I
> think that would be good to spot in C++.
 
Rust move is built into the language. A moved object's destructor is
not called.
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 08:04AM +0200

On Thu, 29 Aug 2019 13:29:11 -0700, Keith Thompson <kst-u@mib.org>
wrote:
> > On 29/08/2019 04:36, Melzzzzz wrote:
> [...]
> >> Rust is nowhere near C. Actually, nowhere near C++. It is nice language,
> >> but not for low level programming. During years they ditched GC and most
> >> of the things that cripples determinism and performance, it is ok, but
> >> without pointer arithmetic and null pointers it is high level language.
> >> Having to convert pointer to int and back in order to perform arithmetic
 
> What's terrible about it? Rust doesn't particularly need pointer
> arithmetic; unlike C, its array indexing isn't defined in terms of
> pointers.
 
You need pointers for low level things. In Rust you need unsafe
blocks for almost anything.
 
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 08:07AM +0200

On Thu, 29 Aug 2019 14:54:18 +0200, Bonita Montero wrote:
> > Rust is nowhere near C. Actually, nowhere near C++. It is nice language,
> > but not for low level programming. During years they ditched GC and most
> > of the things that cripples determinism and performance, ...
 
> Rust has no GC, but works with counted references.
 
Yes, but it is not much used. Box, the owned reference, is built in; I
think that Rc is not built in.
 
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 08:10AM +0200

On Thu, 29 Aug 2019 06:08:13 -0700 (PDT), Öö Tiib<ootiib@hot.ee>
wrote:
> > > ""Rust is the future of systems programming, C is the new Assem=
> bly":
> > > Intel principal engineer, Josh Triplett"
 
https://hub.packtpub.com/rust-is-the-future-of-systems-programming-c-is
=
 
> > > "At Open Source Technology Summit (OSTS) 2019, Josh Triplett, a
> > > Principal Engineer at Intel gave an insight into what Intel is
> > > contributing to bring the most loved language, Rust to full
parity with=
 
 
> > > C. In his talk titled Intel and Rust: the Future of Systems
Programming=
> ,
> > > he also spoke about the history of systems programming, how C
became th=
 
> > How about, instead:
> > We take C;
> > We add some features to support things like bounds-checked arrays and
> > similar;
> > We mostly call it done.
 
 
> Rust supports lot of interesting safety features (like RAII, smart
> pointers, ownership, generics, move semantics, thread safety).
> Adding some safe features at side does not help as history with C++ shows.
> Result is bigger language with all the unsafe stuff still there and valid.
> OTOH if to cripple the unsafe features a bit then result is some kind of
> fourth language that is neither C, C++ nor Rust.
 
Rust generics are interface based. You can't do anything with a generic
type unless you state which interfaces (traits) it satisfies.
 
--
Press any key to continue or any other to quit
Keith Thompson <kst-u@mib.org>: Aug 29 11:11PM -0700

>> arithmetic; unlike C, its array indexing isn't defined in terms of
>> pointers.
 
> You need pointers for low level things.
 
You need pointer *arithmetic* for far fewer things than you need
pointers for.
 
> In rust you need unsafe
> blocks for almost anything.
 
So use unsafe blocks.
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
Mel <mel@zzzzz.com>: Aug 30 08:13AM +0200

On Thu, 29 Aug 2019 14:53:05 +0200, Bonita Montero
> Rust is nice, but Rust has no exceptions. I wouldn't like to evaluate
> this combined error- and return-values after each function-call that
> might fail.
 
That's Rust's advantage. Returning a discriminated union is the right
way to handle errors, and Rust has syntactic sugar for it. The latest
proposal for C++ exceptions leans toward that solution.
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 08:38AM +0200

> Returning a discriminated union is the right way for handling errors.
 
NO, that's a mess. Exceptions are by far more comfortable.
The best variant is the distinction between checked and
unchecked exceptions in Java.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 08:51AM +0200


> NO, that's a mess. Exceptions are by far more comfortable.
> The best variant is the distinction between checked and
> unchecked exceptions in Java.
 
And exceptions are more performant, in the performance-relevant case
that no exception is thrown, with table-driven exception handling. The
code path that runs when no exception is thrown is almost as
efficient as if there were no exception handling at all (nearly zero
overhead).
"Öö Tiib" <ootiib@hot.ee>: Aug 30 01:00AM -0700

On Friday, 30 August 2019 09:11:01 UTC+3, Mel wrote:
> > kind of fourth language that is neither C, C++ nor Rust.
 
> Rust generics are interface based. You cant do anything with
> generic type unless you tell which interfaces satisfies.
 
Perhaps I don't understand what you mean. Can you give an example?
The main point of generics seems to work in Rust about like in C++
templates. There are generic functions, data types and methods.
Instantiations of these work like concrete functions, data types and
methods.
It is more clear, efficient and/or type-safe than what can be done
in C. Type erasure through "void*" or "..." function parameters is
type-unsafe and can be inefficient. Code generated with the
preprocessor can be type-safe and efficient but might have unneeded
verbosity in the interface or be hard to debug. So copy-paste code
is often the better option in C.
Mel <mel@zzzzz.com>: Aug 30 10:13AM +0200

On Fri, 30 Aug 2019 08:51:21 +0200, Bonita Montero
> that no exception is thrown with table-driven exception-handling. The
> code-path that is processed when no exception is thrown is almost as
> efficient as if there isn't any exception-handling (nearly zero-overhead).
 
Exceptions take space or time, there is no escape, and they are
non-deterministic...
 
--
Press any key to continue or any other to quit
David Brown <david.brown@hesbynett.no>: Aug 30 10:14AM +0200

On 29/08/2019 22:29, Keith Thompson wrote:
 
> What's terrible about it? Rust doesn't particularly need pointer
> arithmetic; unlike C, its array indexing isn't defined in terms of
> pointers.
 
(Again, let me note that I know little about Rust - I've read some
stuff, but never used it. So if I get things wrong about it, I hope to
be corrected.)
 
As you say in your other post, pointer arithmetic isn't actually used
that much in C or C++, once you discount array access and iterators
which would be handled in a more "high level" manner in a non-C language.
 
But sometimes - even if it is only occasionally - it /is/ useful. This
can be especially true in low-level and systems code, which apparently
is a target for Rust.
 
The whole idea of Rust is to be "safer" than C - it aims to turn logical
errors such as misuse of pointers, confusion about owners, etc., into
compiler errors. That is a great ambition. But to work properly, you
have to follow the rules - workarounds that go outside those rules
should be as limited as possible, and strongly discouraged if they can't
be blocked by tools.
 
The language seems to be saying "We guarantee your code will avoid null
pointer dereferences, buffer overflows, use after free, memory leaks,
and all the pointer-related problems that plague C programmers." So
far, so good. Then it says "We realise that is too limiting sometimes,
so here are the ways to cheat, hack, and mess about under the hood".
Your guarantees are out the window, and you are back to /exactly/ the
same situation as you have in C - design carefully, code carefully to
keep your code safe, because the language and tools won't do it for you.
 
 
So it is not so much the idea of doing pointer arithmetic manually as
integer arithmetic that concerns me (though it might be a step lower
level than C's pointer arithmetic). It is the idea that the language
allows it and that it is considered an acceptable solution that bothers me.
Mel <mel@zzzzz.com>: Aug 30 10:16AM +0200

On Fri, 30 Aug 2019 01:00:48 -0700 (PDT), Öö Tiib<ootiib@hot.ee>
wrote:
> > generic type unless you tell which interfaces satisfies.
 
 
 
 
> Perhaps I don't understand what you mean. Can you bring example?
> The main point of generics seems to work in Rust about like in C++
 
Not even close. Imagine that you have to cite what abstract base
classes a type inherits, and the only thing you can do with the type
is through its interface...
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 10:18AM +0200

>> head).
 
> Exceptions take space or time, there is no escape, and they are
> non-deterministic...
 
Computers are always deterministic. ;-)
But throwing exceptions is slow. But the case when we have a resource-
or I/O-collapse isn't performance-relevant, i.e. it doesn't matter
if throwing an exception lasts 10 or 1,000 clock-cycles.
"Öö Tiib" <ootiib@hot.ee>: Aug 30 01:26AM -0700

On Friday, 30 August 2019 11:16:32 UTC+3, Mel wrote:
 
> Not even close. Imagine that you have to cite what abstract base
> classes a type inherits, and the only thing you can do with the type
> is through its interface...
 
The examples in documentation ...
<https://doc.rust-lang.org/book/ch10-00-generics.html>
... are like that:
 
// generic type
struct Point<T> {
    x: T,
    y: T,
}
// generic method of generic type
impl<T> Point<T> {
    fn x(&self) -> &T {
        &self.x
    }
}
// usage
fn main() {
    let p = Point { x: 5, y: 10 };
 
    println!("p.x = {}", p.x());
}
 
That does not seem to fit with what you describe.
Mel <mel@zzzzz.com>: Aug 30 10:31AM +0200

On Fri, 30 Aug 2019 10:18:02 +0200, Bonita Montero
> >> And exceptions are more performant for the performance-relevant case
> >> that no exception is thrown with table-driven exception-handling. The
> >> code-path that is processed when no exception is thrown is almost as
> >> efficient as if there isn't any exception-handling (nearly zero-overhead).
 
> > Exceptions take space or time, there is no escape, and they are
> > non-deterministic...
> But throwing exceptions is slow. But the case when we have a resource-
> or I/O-collapse isn't performance-relevant, i.e. it doesn't matter
> if throwing an exception lasts 10 or 1,000 clock-cycles.
 
I am not talking about when you throw.
 
--
Press any key to continue or any other to quit
Mel <mel@zzzzz.com>: Aug 30 10:33AM +0200

On Fri, 30 Aug 2019 01:26:49 -0700 (PDT), Öö Tiib<ootiib@hot.ee>
wrote:
> > > > generic type unless you tell which interfaces it satisfies.
 
> > > Perhaps I don't understand what you mean. Can you give an example?
> > > The main point of generics seems to work in Rust about like in C++
 
> > Not even close. Imagine that you have to cite what abstract base
> > classes a type inherits, and the only thing you can do with the type
> > is through
 
> println!("p.x = {}", p.x());
> }
 
 
> That does not seem to fit with what you describe.
 
Try to do something with x...
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 10:39AM +0200

>> or I/O-collapse isn't performance-relevant, i.e. it doesn't matter
>> if throwing an exception lasts 10 or 1,000 clock-cycles.
 
> I am not talking when you throw.
 
The non-throw case costs almost no performance at all.
Evaluating return codes is simply a magnitude slower.
Mel <mel@zzzzz.com>: Aug 30 10:43AM +0200

On Fri, 30 Aug 2019 10:39:19 +0200, Bonita Montero
> >> Computers are always deterministic. ;-)
> >> But throwing exceptions is slow. But the case when we have a resource-
> >> or I/O-collapse isn't performance-relevant, i.e. it doesn't matter
> >> if throwing an exception lasts 10 or 1,000 clock-cycles.
 
 
> > I am not talking when you throw.
 
 
> The non-throw-case takes almost no or no performance.
 
That's not true. You have to install handlers every time or prepare
a table for each function at program start...
> Evaluating return-codes is simply a magnitude slower.
 
This is especially not true.
 
--
Press any key to continue or any other to quit
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 10:47AM +0200

>> Evaluating return-codes is simply a magnitude slower.
 
> This is especially not true.
 
No, this is true.
You simply don't understand the implications of table-driven
exception handling at all.
David Brown <david.brown@hesbynett.no>: Aug 30 11:41AM +0200

On 30/08/2019 10:47, Bonita Montero wrote:
>>> Evaluating return-codes is simply a magnitude slower.
 
>> This is especially not true.
 
> No, this is true.
 
I take it you have measured this yourself? You have sample code,
measurements, statistics to back up this very specific claim?
 
Or is it just based on "something you read somewhere", repeated in the
hope that people will believe you more the second time?
 
 
> You simply don't understand the implications of table-driven
> exception handling at all.
 
I expect Mel does understand it. I am less convinced /you/ do.
 
 
Different error handling techniques have their pros and cons. Anyone
claiming one method is faster, safer, clearer, easier than other methods
is guaranteed to be /wrong/. They will have different balances, and
there is no solution that is the "best" in all situations, or the best
in all ways.
 
Table-driven exception handling tends to be efficient at run-time when
exceptions are not thrown, when you are running on larger processors
(like x86), and when you have no issues with code space. That makes it
a good choice (from the efficiency viewpoint) on PC programming.
 
But you can also find that the possibility of exceptions passing through
functions can affect their efficiency, even if they don't throw or
catch anything directly - it can affect ordering and optimisation of
the code by the compiler. It can affect the re-use of stack slots as
the compiler has to track all possible exception points and be able to
unwind correctly.
 
On resource-constrained systems, this kind of thing can lead to unwind
tables that are bigger than the actual code size - a horrible level of
inefficiency that is unacceptable in many cases.
 
And while table-driven exception handling does not need extra
instruction codes (just data tables) for functions that pass on
exceptions without handling them, you need plenty of extra code - and
run-time - when throwing the exception, or when catching it.
 
 
Error handling via return codes, on the other hand, uses minimal extra
code space in functions that raise or handle the errors - usually less
overhead than for throwing or catching exceptions. It is certainly
going to be much faster when errors do occur. But you have to have some
way of passing on errors in functions between the "thrower" and the
"catcher". That means more explicit source code, but more efficient
object code.
 
 
Each technique has, as I said, its pros and cons. And there are lots of
reasons why one might pick one method over the other beyond efficiency -
do you want your error handling clear and explicit, or automatic and
hidden? Which is easier to get right, and which is harder to get wrong?
 
 
But one thing we can be sure of - Sutter would not have started the
process of getting "Zero-overhead deterministic exceptions" into C++ if
current C++ exceptions were as ideal as suggested. And while Sutter's
ideas here may come to naught, and he can be as wrong as anyone else, I
am more inclined to listen to his thoughts than those of Bonita.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 30 11:56AM +0200

> I expect Mel does understand it. I am less convinced /you/ do.
 
Disassemble the code that calls a function that might throw an exception,
with and without EH enabled. The only difference is that the optimizations
around that call might be slightly weakened; that's all. But that
usually doesn't outweigh the overhead of manually evaluating
return codes.
 
> Different error handling techniques have their pros and cons. Anyone
> claiming one method is faster, safer, clearer, easier than other methods
> is guaranteed to be /wrong/.
 
1. Table-driven EH is usually faster.
2. I didn't say that it is safer or clearer,
but it is much more convenient.
3. When you come to an exception-handling mechanism like Java's,
   with checked and unchecked exceptions, you get the best solution.
   But for compatibility reasons with C this isn't possible in C++.
 
> exceptions are not thrown, when you are running on larger processors
> (like x86), and when you have no issues with code space. That makes it
> a good choice (from the efficiency viewpoint) on PC programming.
 
Almost right, but it doesn't depend on the performance of the CPU.
Even on small CPUs the performance of processing a resource- or
I/O-collapse doesn't count.
 
> the code by the compiler. It can affect the re-use of stack slots as
> the compiler has to track all possible exception points and be able to
> unwind correctly.
 
The loss of optimization opportunities with table-driven exception
handling is very slight. And the cost is of course not as high as
handling return codes manually.
 
> On resource-constrainted systems, this kind of thing can lead to unwind
> tables that are bigger than the actual code size - a horrible level of
> inefficiency that is unacceptable in many cases.
 
That's true, but then you wouldn't use C++.
 
> Error handling via return codes, on the other hand, uses minimal extra
> code space in functions that raise or handle the errors - usually less
> overhead than for throwing or catching exceptions.
 
1. This case isn't performance relevant.
2. You cannot say that this is always true. The thing is simply that
   there might be a number of return codes evaluated at each call level,
   which might result in a higher total CPU time.
David Brown <david.brown@hesbynett.no>: Aug 30 01:08PM +0200

On 30/08/2019 11:56, Bonita Montero wrote:
> tions around that call might be slightly weakened; that's all. But that
> usually doesn't outweigh the overhead against manually evaluating
> return-codes.
 
There is no doubt that exceptions are aimed at having minimal run-time
cost when they are not thrown. But it is important to realise that
/minimal/ is not /zero/.
 
>> claiming one method is faster, safer, clearer, easier than other methods
>> is guaranteed to be /wrong/.
 
> 1. Table-driven EH is usually faster.
 
Usually it is faster when exceptions are not thrown, but a lot slower
when they /are/ thrown. But "usually" is not "always". And for some
situations, it is the worst case performance - the speed when there is a
problem - that is critical.
 
I'm happy with a claim "Table driven exceptions are generally faster,
when there are no errors, than other handling techniques for typical
code on PC's". What I am /not/ happy with is "Exception handling is
faster than other error return techniques".
 
> 2. I didn't say that it is safer or clearer,
>    but it is much more convenient.
 
Fair enough - though convenience is also a subjective matter.
 
> 3. When you come to the excepion-handling mechanism like in Java
>    with checked and unchecked exceptions, you get the best solution.
>    But for compatibility-reasons with C this isn't possible in C++.
 
Checked exceptions - where the type of exceptions that a function may
throw or pass on is part of the signature - are safer, clearer and more
efficient. (Compilers can use tables or code branches, whichever is
most convenient). But they involve more explicit information, which
some people like and others dislike. And they can't pass through
functions compiled without knowledge of the exceptions - such as C
functions compiled by a C compiler.
 
 
> Almost right, but it doesn't depend on the performance of the CPU.
> Even on small CPUs the performance of processing a resource- or
> I/O-collapse doesn't count.
 
Incorrect.
 
Performance under the worst case is often a critical feature of high
reliability systems. It is precisely because the performance of exception
handling is difficult to predict or measure that exceptions are banned
in most such systems.
 
 
> The loss of optimization-opportunities with table-driven exception
> -handling is very slight. And the cost is of course not so high as
> handling return-codes manually.
 
That may often be the case, but I would be very wary of generalising too
much.
 
>> tables that are bigger than the actual code size - a horrible level of
>> inefficiency that is unacceptable in many cases.
 
> That's true, but then you wouldn't use C++.
 
Nonsense. There are many good reasons to use C++, and many features
that can be used, without leading to any inefficiency or bloat.
 
>> code space in functions that raise or handle the errors - usually less
>> overhead than for throwing or catching exceptions.
 
> 1. This case isn't performance relevant.
 
Often not - but you are wrong to generalise like that. And you can
equally well argue that since exception handling is often used in cases
where you are dealing with I/O, the performance cost of manual return
code checks is negligible and therefore irrelevant with such code.
 
> 2. Yo cannot say that this is always true. The thing is simply that
>    there might be a number of return-codes evaluated at each call-level
>    which might result in a higher total CPU-time.
 
Yes, I have already mentioned that return code checking usually involves
some handling at functions along the chain between the "thrower" and the
"catcher".
Paavo Helde <myfirstname@osa.pri.ee>: Aug 30 02:08PM +0300

On 30.08.2019 11:43, Mel wrote:
> for each function at program start...
 
>> Evaluating return-codes is simply a magnitude slower.
 
> This is especially not true.
 
Your information is a couple of decades out of date. Current C++
compilers implemented zero-cost exceptions (in the no-throw path)
a long time ago.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 29 08:14PM -0400

On 8/29/19 2:14 PM, Stefan Große Pawig wrote:
> found only the QMap<QString, QVariant> as baked-in return value and then
> assumed that this was the only map type that could be stored in a
> QVariant?
The key point is that the author of the code freely chose to declare the
part map as QMap<QString, QVariant>, where the QVariant was used
exclusively to hold integers. He also freely chose to declare the
message map as QMap<QString, QVariant>, where the QVariant was used to
store a part map. There was no code outside the code he wrote which gave
him any excuse for using QVariant for both purposes - no code outside of
what he wrote used QVariant to interface with the code he did write.
Yes, he did use toMap() to extract the map from the QVariant, but he
was the one who wrote the code to put the map into the QVariant.
 
If he'd used QMap<QString, int> for the part map, and QMap<QString,
QMap<QString, int> > for the message map, it would all have worked fine
(except for the bug that I fixed), and he would have been able to
extract the map directly, rather than by calling toMap(). So why did he
use QVariant instead? I'm getting the impression that he used QVariant
without bothering to consider whether it was actually needed.
David Brown <david.brown@hesbynett.no>: Aug 30 08:28AM +0200

On 30/08/2019 02:14, James Kuyper wrote:
> extract the map directly, rather than by calling toMap(). So why did he
> use QVariant instead? I'm getting the impression that he used QVariant
> without bothering to consider whether it was actually needed.
 
I can imagine a few possible explanations. One is that, as I mentioned
earlier, either the code or the programmer comes from a background of
dynamically typed language - he may simply have felt comfortable with
variant types. Perhaps he was less used to complex types and found
"QMap<QString, QVariant>" to be simpler (in his eyes) than
"QMap<QSTring, QMap<QString, int> >".
 
For a long time, templates had a reputation for program size bloat in
C++. (In the failed "EC++" "Embedded C++" standard, templates were
banned for that reason. Though $DEITY only knows why namespaces were
banned too.) Some people were left with the impression that you should
avoid instantiating too many different template types, to reduce code
bloat and duplication. Maybe he thought re-using a single map type was
more efficient in program size than using two different types.
 
(I'm not suggesting that this is a /good/ reason, even if it were the
case that it would give smaller code - something I doubt.)
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.

Thursday, August 29, 2019

Digest for comp.lang.c++@googlegroups.com - 25 updates in 5 topics

Melzzzzz <Melzzzzz@zzzzz.com>: Aug 29 02:36AM

> he also spoke about the history of systems programming, how C became the
> "default" systems programming language, what features of Rust gives it
> an edge over C, and much more."
 
Rust is nowhere near C. Actually, nowhere near C++. It is nice language,
but not for low level programming. During years they ditched GC and most
of the things that cripples determinism and performance, it is ok, but
without pointer arithmetic and null pointers it is high level language.
Having to convert pointer to int and back in order to perform arithmetic
is worse than doing it in assembler.
But what Rust got right is error handling, much better than
exceptions. But not having inheritance and forbidding two mutable
references to distinct array elements is stupid. Also no tail recursion
optimisation and no way to disable array checks.
 
 
 
--
press any key to continue or any other to quit...
I don't enjoy anything as much as my status of INVALID -- Zli Zec
There wasn't actually that much violence in the Wild West, precisely
because everyone was armed. -- Mladen Gogala
David Brown <david.brown@hesbynett.no>: Aug 29 11:02AM +0200

On 29/08/2019 04:36, Melzzzzz wrote:
> without pointer arithmetic and null pointers it is high level language.
> Having to convert pointer to int and back in order to perform arithmetic
> is worse that doing it in assembler.
 
That sounds terrible. (I am not very familiar with Rust.)
 
> But what Rust got it right is error handling, much better than
> excpetions. But not having inheritance and forbidding two mutable
> references to distinct array elements is stupid.
 
I have worked with a C-like language which had many of that kind of
restriction (XC for the XMOS microcontrollers, if anyone is curious).
For a lot of code, it can be helpful - it allowed the compiler to find
lots of situations that could be problematic, especially in heavily
threaded and parallel code. But you invariably had situations where the
restrictions were too much - you needed shared access, or whatever, in
cases where you knew they were safe. This typically led to inline
assembly code to get round the safety limitations imposed by the
language and compiler - that was /not/ a good situation!
 
> Also no tail recursion optimisation and no way to disable array checks.
 
Surely tail recursion optimisation is just a matter of the compiler? C
and C++ don't have any mention of such optimisations in their standards,
but a compiler can do it as long as the code works under the "as if"
rule. Does the same not apply to Rust?
 
As for array checks, again I would think a compiler could elide a fair
number of them, reducing the cost of having them on all the time.
 
 
One thing that I gather Rust does well, that C++ has trouble with, is
that when you do the Rust equivalent of std::move'ing something, any
attempt to use the moved-from object is flagged by the compiler. I
think that would be good to spot in C++.
BGB <cr88192@gmail.com>: Aug 29 05:14AM -0500

On 8/28/2019 4:11 PM, Lynn McGuire wrote:
> he also spoke about the history of systems programming, how C became the
> "default" systems programming language, what features of Rust gives it
> an edge over C, and much more."
 
How about, instead:
We take C;
We add some features to support things like bounds-checked arrays and
similar;
We mostly call it done.
 
 
Many past variants of bounds-checked arrays did funky things with the
syntax, which is a problem if one wants to be able to make it work in a
compiler without support for this feature (eg: via non-convoluted use of
defines).
 
My thought is, say:
_BoundsCheck int *arr;
Which behaves sorta like "int[] arr;" in Java or similar.
 
Then it is given a shorthand in a header (eg: bounded), which becomes
no-op if the compiler doesn't support it.
 
Otherwise, it behaves mostly like a normal pointer, apart from potential
runtime bounds checking, and a different internal representation. May
decay into a normal pointer, potentially with a warning.
 
 
While at it, something like a standardized header for accessing
misaligned and explicit-endianness values would be nice.
 
eg:
uint32_t getuint32le(void *ptr);
void setint32le(void *ptr, int32_t val);
...
Where: le: little endian, be: big endian, no suffix: native endian.
And, these functions work regardless of alignment.
 
This being semi-common boilerplate one either needs to copy around or
reimplement fairly often (and for which the optimal way to do these
varies between compilers; and with some dependence on target architecture).
 
 
Ian Collins <ian-news@hotmail.com>: Aug 29 10:21PM +1200

On 29/08/2019 21:02, David Brown wrote:
> that when you do the Rust equivalent of std::move'ing something, any
> attempt to use the moved-from object is flagged by the compiler. I
> think that would be good to spot in C++.
 
clang-tidy does a pretty good job with "bugprone-use-after-move"
<https://clang.llvm.org/extra/clang-tidy/checks/bugprone-use-after-move.html>
 
--
Ian.
David Brown <david.brown@hesbynett.no>: Aug 29 01:24PM +0200

On 29/08/2019 12:21, Ian Collins wrote:
>> think that would be good to spot in C++.
 
> clang-tidy does a pretty good job with "bugprone-use-after-move"
> <https://clang.llvm.org/extra/clang-tidy/checks/bugprone-use-after-move.html>
 
Yes, I saw that after googling a bit.
 
I would have thought, however, that pretty much any use (other than
assignment of some sort) of a named object after it is moved from is
likely to be a bug. And it is therefore something that could be warned
about by default.
 
 
#include <memory>
 
extern int a, b, c;
 
 
void foo(void) {
    auto p = std::make_unique<int>(3);
    a = *p;
    auto q = std::move(p);
    b = *q;
    c = *p;
}
 
 
Both clang and gcc are smart enough to move the value 3 directly into
"a" and "b", and if the last line is omitted, they can remove the call
to "new" (at least in their trunk versions). Both are smart enough to
know that "c = *p;" is undefined behaviour here, and generate "ud2"
instructions on x86.
 
Yet neither compiler warns about this clearly wrong behaviour despite
-Wall -Wextra.
Ian Collins <ian-news@hotmail.com>: Aug 29 11:35PM +1200

On 29/08/2019 23:24, David Brown wrote:
> instructions on x86.
 
> Yet neither compiler warns about this clearly wrong behaviour despite
> -Wall -Wextra.
 
Use a bigger hammer!
 
$ clang-tidy x.cc
 
1 warning generated.
/tmp/x.cc:11:9: warning: Dereference of null smart pointer 'p' of type
'std::unique_ptr' [clang-analyzer-cplusplus.Move]
c = *p;
^
/tmp/x.cc:9:14: note: Smart pointer 'p' of type 'std::unique_ptr' is
reset to null when moved from
auto q = std::move(p);
^
/tmp/x.cc:11:9: note: Dereference of null smart pointer 'p' of type
'std::unique_ptr'
c = *p;
 
--
Ian.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 29 02:53PM +0200

Rust is nice, but Rust has no exceptions. I wouldn't like to evaluate
this combined error- and return-values after each function-call that
might fail.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 29 02:54PM +0200

> Rust is nowhere near C. Actually, nowhere near C++. It is nice language,
> but not for low level programming. During years they ditched GC and most
> of the things that cripples determinism and performance, ...
 
 
Rust has no GC, but works with counted references.
"Öö Tiib" <ootiib@hot.ee>: Aug 29 06:08AM -0700

On Thursday, 29 August 2019 13:14:29 UTC+3, BGB wrote:
> We add some features to support things like bounds-checked arrays and
> similar;
> We mostly call it done.
 
Rust supports a lot of interesting safety features (like RAII, smart
pointers, ownership, generics, move semantics, thread safety).
Adding some safe features on the side does not help, as history with C++
shows. The result is a bigger language with all the unsafe stuff still
there and valid. OTOH if the unsafe features are crippled a bit, then the
result is some kind of fourth language that is neither C, C++ nor Rust.
gazelle@shell.xmission.com (Kenny McCormack): Aug 29 01:14PM

In article <qk8hrh$ua3$1@news.albasani.net>,
>Rust is nice, but Rust has no exceptions. I wouldn't like to evaluate
>this combined error- and return-values after each function-call that
>might fail.
 
Why is this posted to CLC? Surely the comparison is more relevant to
C++, given that C++ has things like exceptions and so on. It should have
only been posted to CLC++.
 
(Or, is there a parallel thread going on in CLC++ - that I am not (yet)
aware of?)
 
P.S. Of course there will be blogs and stuff like this one (the one
referred to in the OP) telling us that Rust is the next big thing and
obviously the thing that is going to be just so absolutely neato keeno that
you'll be unable to imagine how we ever got along without it. This is true
for any new programming language.
 
Look at it this way: If a new programming language came out and there was
no blog post (or whatever) telling us with absolute authority that this is
the absolute greatest thing since, well, ever - that language will have
zero, zip, nada chance of ever being used by anyone.
 
--
Pensacola - the thinking man's drink.
Bonita Montero <Bonita.Montero@gmail.com>: Aug 29 03:38PM +0200

>> this combined error- and return-values after each function-call that
>> might fail.
 
> Why is this posted to CLC? ...
 
Why do you post another follow-up there if you consider this
inadequate?
David Brown <david.brown@hesbynett.no>: Aug 29 08:00PM +0200

On 29/08/2019 13:35, Ian Collins wrote:
 
>> Yet neither compiler warns about this clearly wrong behaviour despite
>> -Wall -Wextra.
 
> Use a bigger hammer!
 
I might well do that. But it won't stop me wanting the feature in the
tools I already use.
 
Keith Thompson <kst-u@mib.org>: Aug 29 01:29PM -0700

> On 29/08/2019 04:36, Melzzzzz wrote:
[...]
>> Having to convert pointer to int and back in order to perform arithmetic
>> is worse that doing it in assembler.
 
> That sounds terrible. (I am not very familiar with Rust.)
 
What's terrible about it? Rust doesn't particularly need pointer
arithmetic; unlike C, its array indexing isn't defined in terms of
pointers.
 
[...]
 
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Will write code for food.
void Void(void) { Void(); } /* The recursive call of the void */
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 08:46PM -0400

On 8/28/19 10:22 AM, Jens Kallup wrote:
> Hello,
 
> QMap, and other template class "extend" the
> corespond stdC++ std::map<k,v> class.
 
Yes, I know that much about Qt. My point was that, as far as I can tell,
the differences between the Qt classes and the standard classes are not
relevant to the issues I was raising. If I'm wrong about that, please
let me know.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 08:58PM -0400

On 8/28/19 12:00 PM, Öö Tiib wrote:
>> for transmission. Each part is transmitted along with a message ID, a
>> part number, and the total number of parts.
 
> How can the receiver realize that all parts have been received?
 
All parts have been received as soon as the number of different parts
that have been collected for a given message ID matches the part count.
 
> Is the message part count known compile time
 
No, it varies with the length of the message provided by the user.
 
> ... and part numbers
> sequential and zero-based?
 
The part numbers are sequential in the order that the parts should occur
in. They are sent out sequentially, but in general, they don't arrive
sequentially. They are 1-based, not 0-based.
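A minimal sketch of the reassembly scheme described above: each part carries a 1-based part number and the total part count, and the message is complete once the number of distinct parts collected equals that count. All names here are hypothetical illustrations (the actual code under discussion uses Qt's QMap, not std::map):

```cpp
#include <cassert>
#include <map>
#include <string>

// Collects the parts of one multi-part message. Parts may arrive in
// any order; duplicates (retransmissions) simply overwrite.
struct Reassembler {
    std::map<int, std::string> parts;  // part number -> payload
    int total = 0;                     // part count, sent with every part

    // Returns the full message if this part completes it, else "".
    std::string addPart(int partNo, int partCount,
                        const std::string& payload) {
        total = partCount;
        parts[partNo] = payload;
        if (static_cast<int>(parts.size()) != total)
            return {};                 // still waiting for more parts
        std::string msg;
        for (const auto& kv : parts)   // map iterates in key order,
            msg += kv.second;          // i.e. parts 1 .. n
        return msg;
    }
};
```

The completeness check is just parts.size() == total, since the map keeps one entry per distinct part number, and iterating the map yields the parts in the right order regardless of arrival order.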
 
...
> did change into simpler and so it was cut out from something
> way more overcomplicated. The file change history from repo
> (and commit comments) may illuminate it a bit (if such exist).
 
The entire block of code containing all of the features I described was
added in 2015, and hasn't been changed since. Prior to that, multi-part
messages weren't supported. The commit message didn't provide any
explanation.
 
I was wrong about the person responsible being unavailable - but I'm
having trouble figuring out how to word an inquiry that doesn't make it
sound like I think he wrote something stupid (since, in fact, that is
what I'm thinking).
 
> Results of developers with different skill levels differ by several
> orders of magnitude, but no one notices when that inefficiency
> is in some non-critical part of program.
 
Agreed - the messages are provided by users, and therefore don't arrive
frequently enough for the inefficiency of this approach to be a
significant problem. The inefficiency bothers me, but my boss wouldn't
have let me work on something this low in priority if I weren't
currently stuck between a project I've finished and a project I'm not
allowed to start work on (we're waiting for approval from the client).
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 09:05PM -0400

On 8/28/19 2:19 PM, Mike Terry wrote:
> On 28/08/2019 14:30, James Kuyper wrote:
...
>> the corresponding keys to reconstruct the original message.
 
> That sounds so strange, that I wonder if there's a simple explanation:
> the original coder just confused the ordering of the two map parameters?
 
No, that explanation doesn't work. The code includes a comment
indicating that he would have preferred to use QMap<int, QString>, but
failed to clearly explain why he felt that he couldn't use it.
 
 
> Perhaps the intention was to have a map by part number. If multiple
> part numbers were encountered they would be deemed retransmissions and
> it would be OK to just keep the latest one. ...
 
I was rather confused by the fact that every part arrives at least three
times - I suspect that's a bug, but I haven't tracked it down yet.
 
...
> issues coming up would just be worked around. (Like, "hey, seems I need
> to sort by values to get things in the right order?". Yes, seems a bit
> crazy, but maybe the coder was new.
 
That's what I suspect.
 
...
> this. (Put another way, maybe the part map in message map either was,
> or seemed (to the coder) to effectively be a const entry that could only
> be updated through replacement, not modification...)
 
Again, that's pretty much the confusion that I suspect happened.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 09:08PM -0400

On 8/28/19 2:22 PM, Stefan Große Pawig wrote:
> type, including possible conversions. Among these is toMap(), which
> returns a QMap<QString, QVariant>.
 
> Is that map of yours stored in another QVariant?
 
No. The original message map was declared as QMap<QString, QVariant>,
and the QVariant values were filled in with part maps of the type
QMap<QString, int>. I successfully changed the message map to
QMap<QString, QMap<int, QString> > without breaking anything, and that's
because it doesn't interact with any other code that expected QVariants.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 09:11PM -0400

On 8/28/19 3:33 PM, Jorgen Grahn wrote:
> On Wed, 2019-08-28, James Kuyper wrote:
...
 
> That's such an obvious flaw that I'd not trust the rest of that
> programmers choices to make sense. It's simply not a sane way to do
> the trivial task of message reassembly.
 
I agree.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 28 09:16PM -0400

On 8/28/19 9:08 PM, James Kuyper wrote:
> No. The original message map was declared as QMap<QString, QVariant>,
> and the QVariant values were filled in with part maps of the type
> QMap<QString, int>. I successfully changed the message map to
Correction: QMap<QString, QVariant>.
 
David Brown <david.brown@hesbynett.no>: Aug 29 10:24AM +0200

On 29/08/2019 02:58, James Kuyper wrote:
 
> The part numbers are sequential in the order that the parts should occur
> in. They are sent out sequentially, but in general, they don't arrive
> sequentially. They are 1-based, not 0-based.
 
I would expect that, while the parts /could/ arrive out of order, they
will mostly arrive roughly in order.
 
Since you know how many parts you will get (once you have received one
part), and you know the part numbers are all 1 .. n, it seems strange to
me that you have a map at all. I'd expect to use a vector,
pre-allocated to the right size when the first part of a new message
arrives. Perhaps you'd want the vector elements to be optional<string>,
or just have empty strings for parts that haven't yet arrived. You
might also want to track a counter of "parts waiting" so that you can
easily see that the last part has arrived, without running through the
vector.
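The vector-based alternative suggested above might look like this sketch: pre-allocate n slots when the first part arrives, track how many parts are still missing, and detect completion without scanning the vector. Names are hypothetical, and empty strings stand in for parts not yet arrived (assuming real parts are never empty; optional<string> would lift that assumption):

```cpp
#include <cassert>
#include <string>
#include <vector>

struct VectorReassembler {
    std::vector<std::string> slots;  // one slot per part, in order
    int waiting = 0;                 // parts not yet arrived

    // Returns the full message if this part completes it, else "".
    std::string addPart(int partNo, int partCount,
                        const std::string& payload) {
        if (slots.empty()) {             // first part of a new message
            slots.resize(partCount);
            waiting = partCount;
        }
        std::string& slot = slots[partNo - 1];  // part numbers are 1-based
        if (slot.empty()) {              // ignore retransmitted duplicates
            slot = payload;
            --waiting;
        }
        if (waiting != 0)
            return {};
        std::string msg;
        for (const auto& s : slots)
            msg += s;
        return msg;
    }
};
```

Compared with the map version, this trades one extra counter for O(1) slot access and no tree overhead; whether that is worth the added state is the judgment call discussed below.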
 
> having trouble figuring out how to word an inquiry that doesn't make it
> sound like I think he wrote something stupid (since, in fact, that is
> what I'm thinking).
 
There are other possibilities for this odd data structure - other than
stupidity or inexperience. It could be that the requirements or planned
handling system changed along the way - maybe the parts had a different
identification system originally rather than simple integers. It could
be that the code was prototyped in a more dynamic language, such as
Python, or at least one where "variant" types are more common, such as
Visual Basic, and the code then got translated into C++. And of course
it could have been written under inhumane deadlines in the small hours
of the night while overdosed on coffee and cold pizza - a lot of odd
code has been written in such circumstances.
 
 
> have let me work on something this low in priority if I weren't
> currently stuck between a project I've finished and a project I'm not
> allowed to start work on (we're waiting for approval from the client).
 
Inefficiency is often not a huge problem - incorrectness is.
James Kuyper <jameskuyper@alumni.caltech.edu>: Aug 29 10:05AM -0400

On 8/29/19 4:24 AM, David Brown wrote:
> On 29/08/2019 02:58, James Kuyper wrote:
...
>> sequentially. They are 1-based, not 0-based.
 
> I would expect that while the parts /could/ arrive out of order, they
> will come mostly roughly in-order.
 
That was my expectation, too. Reality didn't cooperate. During
debugging, I never saw the parts arrive in strict sequential order, and
they often arrived completely jumbled. Sometimes, for small messages,
the final part would be the first one to arrive.
 
That triggered another bug, but I've simplified my explanation of the
code too much to explain why without adding a lot more details. Suffice
it to say that the author apparently expected the final part to always
be the one that arrived last - despite the fact that, if the parts were
guaranteed to arrive in order, he could have simply appended each part
to a string containing the parts that had already been received. That
bug didn't puzzle me as much as the other issues I've mentioned, which
is why I didn't mention it.
 
> might also want to track a counter of "parts waiting" so that you can
> easily see that the last part has arrived, without running through the
> vector.
 
I thought about the vector approach, but with the map approach, the
size() member serves as parts_received, and the total count of parts is
received with every part, so parts_waiting is just the difference. With
the vector approach I'd have to keep track of one additional quantity
(either parts_received, as I thought, or parts_waiting, per your
suggestion). That adds just a little additional complexity, but enough
to discourage me from using that approach.
...
> identification system originally rather than simple integers. It could
> be that the code was prototyped in a more dynamic language, such as
> Python, or at least one where "variant" types are more common, such as
 
That's plausible - some parts of this project are written in Python.
 
...
>> currently stuck between a project I've finished and a project I'm not
>> allowed to start work on (we're waiting for approval from the client).
 
> Inefficiency is often not a huge problem - incorrectness is.
 
The code is incorrect, but cases which trigger the bug are extremely
unlikely. With the inefficiency being small, and the incorrectness
unlikely, this is justifiably a low-priority bug. I definitely have more
important things to work on as soon as the work gets approved (I was
investigating one of those more important things when I found this bug).
usenet@stegropa.de (Stefan Große Pawig): Aug 29 08:14PM +0200

[some context restored]
 
>> and the QVariant values were filled in with part maps of the type
>> QMap<QString, int>. I successfully changed the message map to
> Correction: QMap<QString, QVariant>.
 
But then, IIUC, that fits the bill: the QMap<QString, QVariant> for the
parts /was/ stored in another QVariant (namely, as the value type of the
message map).
 
If the code used QVariant::value() to fetch the part map from the message
map, there is no restriction on the type of the part map. The restriction
only arises if the caller used QVariant::toMap().
 
Regardless of what the code originally used, maybe the original author
had just scanned the QVariant interface for anything related to maps,
found only the QMap<QString, QVariant> as baked-in return value and then
assumed that this was the only map type that could be stored in a
QVariant?
 
-Stefan
Jorgen Grahn <grahn+nntp@snipabacken.se>: Aug 29 09:58AM

On Wed, 2019-08-28, Richard wrote:
> approach that many people have tried and failed. It's a really awful
> idea. That suggestion should not be left out in the open
> unchallenged.
 
Yes, that was also worth pointing out, for other readers.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Jorgen Grahn <grahn+nntp@snipabacken.se>: Aug 29 06:09AM

On Wed, 2019-08-28, Ruki wrote:
>> excellent reason.)
 
> Not all os provide spinlocks, and like nginx, they also use their
> own spinlock implementation.
 
I note that that doesn't really answer David's question.
 
Weird that nginx implements spinlocks, when it also advertises itself
as single-threaded. I don't see the use of spinlocks in a situation
where there's only one thread of execution.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Tim Rentsch <tr.17687@z991.linuxsc.com>: Aug 28 06:03PM -0700

"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com> writes:
 
I have now carefully read through all your responses in the
cascade of messages downstream of my message two upthread.
 
> I may address the rest of your posting later, trying to give the
> examples etc. you ask for, because I feel you mean this seriously.
 
I do mean it seriously, and not at all confrontationally. I am
interested to read your further followup (ie, as mentioned above)
if you choose to post one.
 
 
> And I don't think you think you're smarter than everybody else in
> this group just for holding an opinion on UB that nobody else here
> shares.
 
For the most part I don't think about whether I am smarter than
any particular person, let alone everyone in the group, partly
because "smart" is a multi-dimensional quantity, and it's very
rare (if indeed it has ever happened at all) that I find a person
where I think I am smarter along all axes. Even if it were true
that my opinion on a subject were not shared by any other
contributor in the group, that would not affect my sense of what
"smarter" means.
 
 
> I think you're simply convinced that you're right,
 
In my view the word "right" does not apply, because I take both
of our statements as expressing opinions about how the standards
"should" be interpreted. Opinions are never right or wrong, they
just are whatever they are.
 
> that your distinctions are meaningful,
 
I do think the distinctions I have been trying to explain are
meaningful.
 
> and somehow my and others' arguments have failed to move your
> conviction.
 
I think an essential first step is missing. For my views to
change as a result of what you say (or vice versa), I must first
understand what you mean and why you think what you do (similarly
vice versa). Right now I think we are not understanding each
other. Key example: the term "undefined behavior". Even though
we both use the term "undefined behavior", I think what you mean
by the term is not the same as what I mean by it. If that is so
then obviously it gets in the way of us communicating. Normally
in discussions like this one, my efforts are directed at trying
to convey what I think and why, and not to try to convince the
other participant(s) to change their minds.
 
 
> Which I think means we're not good enough at presenting our
> view. Not, that we're wrong. ;-)
 
Again, I don't think either of our views is right or wrong. I
agree though that explaining what it is you think should be given
more emphasis than trying to convince me that you're right.