
Digest for comp.lang.c++@googlegroups.com - 25 updates in 4 topics

David Brown <david.brown@hesbynett.no>: Feb 23 10:22PM +0100

On 23/02/16 22:09, Gareth Owen wrote:
> Ian Collins <ian-news@hotmail.com> writes:
 
>> The sky is green.
 
> I think you'll find the sky is *considered* green.
 
Apparently* the ancient Greeks considered it to be a sort of bronze
colour, because they didn't have a word for "blue". Maybe Mr. Flibble
considers TDD to be harmful because he doesn't have a word for "useful".
 
 
* According to the infallible QI elves. What more evidence could one
ask for?
legalize+jeeves@mail.xmission.com (Richard): Feb 24 05:12PM

[Please do not mail me a copy of your followup]
 
Gareth Owen <gwowen@gmail.com> spake the secret code
>> time you have invested into TDD and your unwillingness to lose face if
>> you back out of it now
 
>I believe this is known as "projection"
 
This really did make me laugh out loud.
 
I have been programming for a really long time (decades) and been doing
TDD since roughly 2007. During that time I have always sought to learn
new ways of programming to improve my productivity and/or the quality
of the code I produce. I value quality, so I am willing to sacrifice
some productivity in order to obtain better quality. TDD was the
first thing that felt like I didn't have to give up productivity in
order to obtain quality, provided we look at the total time spent
developing a feature and not just the time it took to type in the
code and make it compile.
 
In practical terms, this means that I am willing to write automated tests
around my code in order to prove that my code works now and continues
to work properly in the future. Clearly that could be a net productivity
loss in the short-term because I spent the time to write the automated
tests. When you're the first person on the team to write some
automated tests, you also have to introduce the infrastructure to
support such tests into the code base.
 
I say short-term productivity cost because the time spent writing tests
and finding stupid mistakes early is offset by the time I would otherwise
spend later in the debugger finding the cause of strange bugs reported
by other developers on my team, the testing team or customers. I am
very proficient in a
debugger so most of the time I am able to quickly identify the root
cause of a bug with the debugger. However, I find the same stupid
mistakes more efficiently when I write automated tests around my code.
I haven't really noticed a significant decline in my error rate over
time, but the tools I use are more adept at finding my mistakes, so the
total time between creation, discovery and correction of mistakes has
grown shorter. TDD doesn't reduce my error rate, but it
does reduce the total time spent identifying and correcting errors.
 
In earlier posts I have described the benefit of changing design from
a phase to a daily activity that I have found when practicing TDD.
As a result I get better designs from my code and the code I produce
is consistent with the advice from leaders in the field of
programming, both specific to C++ and more generally.
 
The design benefits and the quality benefits are why TDD works for me.
If you are more productive doing something else, then great for you.
My case for TDD is purely based on the pragmatic benefits I've seen
in my personal experience. That personal experience is echoed across
enough developers that it's not just a fluke that happens to work for me.
The experience is pretty consistent across languages, personality types
and so on, which is why Uncle Bob says it is simply unprofessional if
you don't do something like TDD when you write code for a living. He
is intentionally making a provocative statement when he says that. He
is also making the argument from a pragmatic perspective; he does not
say TDD is a silver bullet, he says it is the best thing he has found
so far. If you have something better, please show it to him.
 
Am I ideologically invested in TDD? No.
 
Am I pragmatically invested in TDD? Yes.
 
Show me something better and I'll do that. Simply saying TDD sucks or
whatever isn't enough to convince me to give up the pragmatic benefits
that I have found by practicing TDD, particularly when the thing I'm
being told to do is the thing I was already doing proficiently for
decades before I came to practice TDD. In essence, the argument is
regressive: it asks me to give up the benefits I've gained from TDD.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Ian Collins <ian-news@hotmail.com>: Feb 24 01:00PM +1300

Alf P. Steinbach wrote:
> maximum chances of propagating all the way up to `main`.
 
> The main problem with that, as I see it, is that cleanup via stack
> unwinding can be dangerous when an assertion has failed.
 
Not only that, but unless you can capture enough context, it offers
little help in diagnosing the cause. I much prefer to have a decent
core dump to analyse than an exception report. I guess things are
different on platforms that lack support for cores, but even in embedded
systems, you are often better off rebooting than trying to handle an
unexpected condition.
 
The discussion should be around where to use assert and where to throw
an exception; forget about NDEBUG. I'm guessing the former is less
frequent in most code, but when it is used, the options for recovery are
sparse and the code to handle it nonexistent. So swapping an assert for
an exception in production builds may be even more dangerous.
 
My advice is to use asserts sparingly and keep them in!
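
Something along these lines is one way to keep a check active regardless
of NDEBUG (just a sketch; the CHECK macro name is made up):

#include <cstdio>
#include <cstdlib>

/* Active in every build, debug or release, independent of NDEBUG. */
#define CHECK(cond) \
  do { \
    if (!(cond)) { \
      std::fprintf(stderr, "check failed: %s (%s:%d)\n", \
                   #cond, __FILE__, __LINE__); \
      std::abort(); /* abort so there is a core to analyse */ \
    } \
  } while (0)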
 
--
Ian Collins
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 24 01:02AM +0100

On 24.02.2016 00:20, Mr Flibble wrote:
 
> Not true: std::logic_error is meant for coding errors including
> incorrect assumptions. One should derive custom exceptions from
> std::logic_error and throw these instead of asserting.
 
Well, it's certainly true that SOMEONE once meant you to use
`std::logic_error` in that way, otherwise it wouldn't be there.
 
Counting against that, when thrown from deep down in a call chain
`std::logic_error` is almost guaranteed to be caught by a general
`std::exception` handler, and this code will treat it as a failure by
the throwing code – instead of as indicating its own failure. So it's
almost sure to be handled in the Wrong Way™. I consider this, about very
concrete advantages/disadvantages, to be much more important for my
use/don't use decision than the SOMEONE's opinion somewhere pre 1998.
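
A minimal sketch of that hazard (the function names are invented for
illustration):

#include <iostream>
#include <stdexcept>

void deep_down()
{
    // An "impossible" condition was detected, i.e. the program has a bug.
    throw std::logic_error( "broken invariant" );
}

void do_work()
{
    try {
        deep_down();
    } catch( std::exception const& x ) {
        // Written with ordinary runtime failures in mind; this handler has
        // no idea that the exception really means "the program is buggy".
        std::clog << "do_work failed: " << x.what() << "\n";
    }
}

int main() { do_work(); }   // ...and execution just carries on.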
 
Are there other indications, beside `std::logic_error`, that the
standard exception hierarchy & machinery is not at all well designed?
 
Yes, for example that these critical exceptions can do dynamic allocation.
 
And for example, that the semantics of `throw` exception specifications
were never implemented by Microsoft and were deprecated in C++11.
 
Oh, and `std::uncaught_exception`.
 
Not to mention the extreme shenanigans one must go to, complex and
inefficient, with internal re-throwing and catching, to extract full
exception information with C++11 and later's nested exceptions.
 
Well, and more.
 
So, regarding what to use, I don't assign much weight to the SOMEONE's
authority. ;-)
 
 
- Alf
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 24 12:17AM

On 24/02/2016 00:02, Alf P. Steinbach wrote:
> almost sure to be handled in the Wrong Way™. I consider this, about very
> concrete advantages/disadvantages, to be much more important for my
> use/don't use decision than the SOMEONE's opinion somewhere pre 1998.
 
std::logic_error exceptions can be caught, logged and re-thrown
(terminating the application), which is the only acceptable course of
action: logic errors are non-recoverable; if one has been detected,
something is seriously amiss and trying to carry on regardless would
be extremely
unwise. Education is necessary to ensure this is done by the coder:
just because some people might be handling it incorrectly doesn't mean
it is unsound advice to follow especially for new non-legacy code.
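
For example, just a sketch (run_application() stands in for the real
program):

#include <iostream>
#include <stdexcept>

void run_application()   // stand-in for the real program
{
    throw std::logic_error("impossible state reached");
}

int main()
{
    try {
        run_application();
    }
    catch (const std::logic_error& e) {
        std::cerr << "FATAL (bug): " << e.what() << std::endl;   // log it
        throw;   // re-throw: nothing catches it, so std::terminate runs
    }
}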
 
/Flibble
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 24 12:24AM

On 24/02/2016 00:00, Ian Collins wrote:
> different on platforms that lack support for cores, but even in embedded
> systems, you are often better off rebooting than trying to handle an
> unexpected condition.
 
If stack unwinding can be dangerous in the face of a detected logic
error (and I agree it is technically possible) then why can
std::vector::at() throw an exception in the first place? In fact why
does std::logic_error exist at all?
 
Following your logic to its ultimate conclusion there should exist a
proposal to deprecate/remove std::logic_error and exceptions derived
from it from C++.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Feb 24 01:25PM +1300

Mr Flibble wrote:
> unwise. Education is necessary to ensure this is done by the coder:
> just because some people might be handling it incorrectly doesn't mean
> it is unsound advice to follow especially for new non-legacy code.
 
Unless you control all of the code, how can you guarantee that there
isn't a catch of std::exception somewhere in the call chain between
your throw and catch of std::logic_error? If (and that is a big if)
you are going
to use an exception rather than an assert, it should be something
outside of the standard exception hierarchy.
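
For example, something like this (just a sketch; the type and macro
names are invented):

// Deliberately NOT derived from std::exception, so a blanket
// catch (const std::exception&) somewhere between the throw and
// the intended handler can't swallow it by accident.
struct contract_violation
{
    const char* expression;
    const char* file;
    int line;
};

#define CONTRACT_CHECK(cond) \
  do { \
    if (!(cond)) \
      throw contract_violation{ #cond, __FILE__, __LINE__ }; \
  } while (0)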
 
--
Ian Collins
Ian Collins <ian-news@hotmail.com>: Feb 24 01:34PM +1300

Mr Flibble wrote:
> error (and I agree it is technically possible) then why can
> std::vector::at() throw an exception in the first place? In fact why
> does std::logic_error exist at all?
 
Because std::vector is designed to do so?
 
If, for example, an invalid pointer triggers an assert, I would want to
know how this condition occurred; an assert and a core would give me the
best chance of doing so. Even if the alternative exception captured the
stack, it wouldn't help if the bad data came from another thread.
 
Use exceptions where recovery is an option, asserts where it isn't.
 
> Following your logic to its ultimate conclusion there should exist a
> proposal to deprecate/remove std::logic_error and exceptions derived
> from it from C++.
 
I think you lost the way somewhere. std::logic_error has its place, but
that place isn't as a substitute for assert.
 
--
Ian Collins
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 24 01:21AM

On 24/02/2016 00:34, Ian Collins wrote:
> best chance of doing so. Even if the alternative exception captured the
> stack, it wouldn't help if the bad data came from another thread.
 
> Use exceptions where recovery is an option, asserts where it isn't.
 
You are missing the point. If an incorrect index is being passed to
at() then it is likely that something is seriously amiss.
 
>> from it from C++.
 
> I think you lost the way somewhere. std::logic_error has its place, but
> that place isn't as a substitute for assert.
 
Yes it is.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Feb 24 02:55PM +1300

Mr Flibble wrote:
 
>> Use exceptions where recovery is an option, asserts where it isn't.
 
> You are missing the point. If an incorrect index is being passed to
> at() then it is likely that something is seriously amiss.
 
But it shouldn't be a "this can never happen and if it does, we're
screwed" condition where an assert is appropriate.
 
 
>> I think you lost the way somewhere. std::logic_error has its place, but
>> that place isn't as a substitute for assert.
 
> Yes it is.
 
Only if you control 100% of the code base and aren't too interested in
fault-finding the cause of the throw.
 
--
Ian Collins
"Öö Tiib" <ootiib@hot.ee>: Feb 23 06:47PM -0800

On Wednesday, 24 February 2016 03:21:56 UTC+2, Mr Flibble wrote:
 
> > Use exceptions where recovery is an option, asserts where it isn't.
 
> You are missing the point. If an incorrect index is being passed to
> at() then it is likely that something is seriously amiss.
 
That 'vector::at()' is most likely meant for cases where an out-of-range
index can be the result of valid processing, for example because it was
read from some potentially dirty stream. Otherwise it is better to
'assert' and to use [].
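
For example (a sketch; the function names are invented):

#include <cassert>
#include <cstddef>
#include <vector>

// Index came from outside (file, network, user): out of range is a
// legitimate runtime situation, so let at() throw.
int field(const std::vector<int>& row, std::size_t index_from_stream)
{
    return row.at(index_from_stream);
}

// Index was computed by our own code: out of range means a bug,
// so assert and use the unchecked [].
int cell(const std::vector<int>& row, std::size_t i)
{
    assert(i < row.size());
    return row[i];
}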
 
 
> > I think you lost the way somewhere. std::logic_error has its place, but
> > that place isn't as a substitute for assert.
 
> Yes it is.
 
It is a matter of the contract of the function's interface. In general,
if the caller is supposed to avoid the error situation then abort; if
the caller is not supposed to avoid it (and especially when it cannot
avoid it) then throw.
 
Bugs must be fixed, but a thrown error may lose context. It is cheaper
to analyse a defect by post-mortem debugging of a crash dump, and
throwing on bugs just risks losing that valuable context.
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 23 10:32PM

On 23/02/2016 22:27, Ian Collins wrote:
>> undetected bugs in a release build manifesting as undefined behaviour
>> that doesn't cause a straight crash but does something more serious.
 
> So don't change NDEBUG in your release builds.
 
You are assuming everyone does that: they don't.
 
/Flibble
Ian Collins <ian-news@hotmail.com>: Feb 24 11:44AM +1300

Mr Flibble wrote:
>>> that doesn't cause a straight crash but does something more serious.
 
>> So don't change NDEBUG in your release builds.
 
> You are assuming everyone does that: they don't.
 
I'm not assuming, I'm recommending.
 
--
Ian Collins
Paavo Helde <myfirstname@osa.pri.ee>: Feb 24 11:48AM +0200

On 24.02.2016 2:34, Ian Collins wrote:
 
> I think you lost the way somewhere. std::logic_error has its place, but
> that place isn't as a substitute for assert.
 
I think it depends very much on the modularity of the component which
has failed and needs to be restarted.
 
If the OS fails with a blue screen or kernel panic, then the execution
flow goes back to the hardware and the computer is restarted either
automatically or by the user.
 
If an app gets terminated via an assert, then the execution flow
basically goes back to the OS, and the user or another program can
restart the app.
 
If a component in an app fails with an assert-style exception, the
execution flow goes back to the main driver component of the app and the
user or another routine can restart the component.
 
If the component state is corrupted and restarting it does not work, the
next level (app) restart is needed.
 
If the computer state is corrupted and the app does not work after
restart, the next level (computer) restart is needed (not a rare
scenario in Windows).
 
The trickiest part is to find the correct level of restart needed in
any given situation. Assert is not always the best choice, and neither
is std::logic_error.
 
Just my 2 cents.
Ian Collins <ian-news@hotmail.com>: Feb 24 11:14PM +1300

Paavo Helde wrote:
> any given situation. Assert is not always the best choice, and neither
> is std::logic_error.
 
> Just my 2 cents.
 
A fair summary.
 
I agree with the theory, but I've seldom seen component restarts
implemented well in practice! Most applications are monolithic, so
there is nothing to restart. Long running threaded applications are a
good candidate for trapping errors with exceptions and restarting a
thread, while multi-process applications are a good candidate for
asserts. Most long-running applications I've seen (such as system
services) are happy to abort and rely on whatever started them to do
the restart. It is often safer and no doubt easier to code them this way.
 
--
Ian Collins
mark <mark@invalid.invalid>: Feb 24 02:43PM +0100

On 2016-02-24 01:00, Ian Collins wrote:
 
> My advice is to use asserts sparingly and keep them in!
 
IMO, multiple levels of asserts should be used; release asserts that are
always active and multiple debug assert versions that perform checks too
expensive for release builds.
 
Debug asserts should be used extensively.
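
Roughly like this, as a sketch (the macro names and the HEAVY_CHECKS
switch are made up; a real project would pick its own):

#include <cstdio>
#include <cstdlib>

/* Always active, even with NDEBUG defined: cheap sanity checks. */
#define RELEASE_ASSERT(cond) \
  do { \
    if (!(cond)) { \
      std::fprintf(stderr, "assertion failed: %s\n", #cond); \
      std::abort(); \
    } \
  } while (0)

/* Only active when the build enables expensive validation
   (whole-container invariants, cross-checks, ...). */
#ifdef HEAVY_CHECKS
#define DEBUG_ASSERT(cond) RELEASE_ASSERT(cond)
#else
#define DEBUG_ASSERT(cond) ((void)0)
#endif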
"Öö Tiib" <ootiib@hot.ee>: Feb 24 05:52AM -0800

On Wednesday, 24 February 2016 11:48:43 UTC+2, Paavo Helde wrote:
 
> If a component in an app fails with an assert-style exception, the
> execution flow goes back to the main driver component of the app and the
> user or another routine can restart the component.
 
There is a component that throws exceptions saying it is buggy. The
memory of the whole process is apparently shared (how else could it
throw exceptions?), and so whatever corrupted the component's state may
well have corrupted something else too. That makes a restart of the
whole application likely safer in the case of an "I'm insane" exception.
 
If the component is implemented as a separate process then there is a
memory access barrier, and it cannot throw exceptions across it anyway.
"Öö Tiib" <ootiib@hot.ee>: Feb 24 06:22AM -0800

On Wednesday, 24 February 2016 15:43:46 UTC+2, mark wrote:
 
> IMO, multiple levels of asserts should be used; release asserts that are
> always active and multiple debug assert versions that perform checks too
> expensive for release builds.
 
An assert typically just checks integers, pointers or flags. That can
be done billions of times per second. When an assert seriously affects
the performance of an optimized release build, it likely runs
complicated checks (which may themselves be defective and so should
contain asserts), and the debug build is likely inhumanly torturous to
test because of them. Better to reconsider such asserts.
 
 
> Debug asserts should be used extensively.
 
About 5-10% of the whole product code base is executed during 98% of
the profile. So it is likely a better idea to use real asserts
extensively and to replace them with debug asserts only in that 5-10%
of the code, and only when profiling of the product suggests it.
M Philbrook <jamie_ka1lpa@charter.net>: Feb 23 08:39PM -0500

In article <If2zy.23937$Nf2.15642@fx14.iad>,
bitrex@de.lete.earthlink.net says...
> frac_low : 8;
 
> };
> } phase_accumulator;
 
sumg ting does not look correct? Assuming I understand what you're
after.
 
struct _phase_acc {

union { uint32_t phase_accumulator_full };
union { uint_8 integer_high:8,
integer_low:8,
frac_high:8,
frc_low : 8;
};
} phase_accumulator;
 
You can include a condition to test for the second union to
determine the endianness of the platform to correctly compile
the code for the bit field order.

The above struct should overlay to generate a single 32 bit image.
 
Jamie
Ian Collins <ian-news@hotmail.com>: Feb 24 02:48PM +1300

M Philbrook wrote:
> after.
 
> struct _phase_acc {
 
> union { uint32_t phase_accumulator_full };
 
typo..
 
> union { uint_8 integer_high:8,
 
typo..
> determine the endianness of the platform to correctly compile
> the code for the bit field order.
 
> The above struct should overlay to generate a single 32 bit image.
 
No, it won't. You have two 32 bit union members in a struct, which
gives a 64 bit struct.
 
--
Ian Collins
Tauno Voipio <tauno.voipio@notused.fi.invalid>: Feb 24 09:54AM +0200

On 23.2.16 23:53, Tim Wescott wrote:
> static inline uint8_t frac_low (uint32_t x) {return (x >> 0) & 0xff;}
 
> This will be portable, and integer_high(phase_accumulator) should read as
> easily (or more so) as phase_accumulator.bits.integer_high.
 
 
A vote on this: At least GCC optimizes these to single bit field
extract instructions on a Cortex.
 
Alternative format of the same:
 
static inline uint8_t integer_high(uint32_t x)
{
return (uint8_t)(x >> 24);
}
 
This should work correctly even without the cast.
 
--
 
-TV
Christian Gollwitzer <auriocus@gmx.de>: Feb 24 09:44AM +0100

Am 23.02.16 um 23:48 schrieb David Brown:
> volatile is relevant. Using the bitfields, accesses to the separate
> fields will be done as 8-bit accesses - using the functions, they will
> be 32-bit accesses.
 
Are you sure? Have you checked the assembly output for this code? For
non-volatile variables at least, I expect that these inline functions
are optimized to the same instructions. I don't have an ARM compiler
handy, but maybe you could check if it also works with "volatile uint32_t x"
 
> write versions may be significantly slower.
 
> Fourth, your functions are ugly in use, especially for setting:
 
> phase_accumulator = set_integer_low(phase_accumulator, y)
 
If C++ is used, an inline function with a reference parameter is also
easily optimized:
 
static inline void set_integer_low (uint32_t& x, uint8_t y) {
x= (x & 0xff00ffff) | ((uint32_t) y << 16);
}
 
set_integer_low(phase_accumulator, y);
 
I'm not so sure about a pointer version for pure C
 
static inline void set_integer_low (uint32_t *x, uint8_t y) {
*x = (*x & 0xff00ffff) | ((uint32_t) y << 16);
}
 
set_integer_low(&phase_accumulator, y);
 
I think the OP should check the assembly output of an optimized build to
decide.
 
> your IDE can quickly give you assistance at filling out the field names,
> and your debugger can parse the data. With manual shift-and-mask
> systems you lose all of that.
 
That is a valid reason. But I think that even in pure C, you could get a
warning if you make a typedef uint32_t mybitmask; or by embedding it into
a struct with just one member
 
typedef struct { uint32_t bits; } mybitmask;
 
Christian
David Brown <david.brown@hesbynett.no>: Feb 24 10:39AM +0100

On 24/02/16 09:44, Christian Gollwitzer wrote:
> are optimized to the same instructions. I don't have an ARM compiler
> handy, but maybe you could check if it also works with "volatile
> uint32_t x"
 
Yes, I have checked - though not for a wide variety of versions,
optimisation flags, etc. The inline functions will result in the same
ARM code in the non-volatile case (assuming at least -O1 optimisation).
But in the volatile case, the compiler correctly generates different
code for the structure access and the manual bitfield extraction.
 
> x= (x & 0xff00ffff) | ((uint32_t) y << 16);
> }
 
> set_integer_low(phase_accumulator, y);
 
It is still a mess compared to the bitfield struct access, and it is
still different in the volatile case. Yes, the compiler can optimise
the ugly, unmaintainable and error-prone source code into good object
code - but why not just write clear, simple source code that gets
optimised into good object code?
 
> *x = (*x & 0xff00ffff) | ((uint32_t) y << 16);
> }
 
> set_integer_low(&phase_accumulator, y);
 
I would expect the compiler to manage this optimisation.
 
 
> I think the OP should check the assembly output of an optimized build to
> decide.
 
I think the OP should look first and foremost at the /source/ code, and
decide from that. The code generation details are up to the compiler -
enable optimisation, give it clear and correct source code, and let it
do its job. It is only in very rare situations that the generated code
should be the guide for a decision like this.
 
> warning if you make a typedef uint32_t mybitmask; or by embedding it into
> a struct with just one member
 
> typedef struct { uint32_t bits; } mybitmask;
 
In both C and C++, you have to embed the item in a struct - "typedef"
itself does not introduce a new type. Yes, this technique can make the
code safer.
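
i.e. something like this (a sketch of what I understand you to be
suggesting; the names are invented):

#include <stdint.h>

typedef struct { uint32_t bits; } phase_bits;  /* distinct type, unlike a bare typedef */

static inline uint8_t integer_high(phase_bits p)
{
    return (uint8_t)(p.bits >> 24);
}

static inline phase_bits set_integer_high(phase_bits p, uint8_t y)
{
    p.bits = (p.bits & 0x00ffffffu) | ((uint32_t)y << 24);
    return p;
}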
 
But you are going round in circles, writing lots of extra source code,
writing access functions with potentially error-prone masks (imagine a
case where the bitfields were not as simple, or where they were changed
later), with duplicate information in multiple places, and finally
resulting in some unnecessarily verbose and ugly usage.
 
In the end, all you have done is duplicate a feature that C and C++
support as part of the language. It is a pointless exercise.
David Brown <david.brown@hesbynett.no>: Feb 24 10:41AM +0100

On 23/02/16 20:17, bitrex wrote:
> frac_low : 8;
 
> };
> } phase_accumulator;
 
I should have mentioned this before, but in the case where your bitfield
sizes match the type, you don't need the bitfield specifiers:
 
typedef union {
uint32_t phase_accumulator_full;
struct {
uint8_t integer_high;
uint8_t integer_low;
uint8_t frac_high;
uint8_t frac_low;
};
} phase_accumulator;
 
(Of course, the byte ordering is still wrong - I just wrote it that way
to keep it consistent.)
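
If you want the field order to follow the target's byte order, you
could select it at compile time - a sketch, assuming GCC/clang's
predefined __BYTE_ORDER__ macros (other compilers need their own test;
the typedef name is changed only to keep it distinct from the one above):

#include <stdint.h>

typedef union {
    uint32_t phase_accumulator_full;
    struct {
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
        uint8_t frac_low;      /* least significant byte first in memory */
        uint8_t frac_high;
        uint8_t integer_low;
        uint8_t integer_high;
#else
        uint8_t integer_high;  /* most significant byte first in memory */
        uint8_t integer_low;
        uint8_t frac_high;
        uint8_t frac_low;
#endif
    };
} phase_accumulator_u;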
"Alf P. Steinbach" <alf.p.steinbach+usenet@gmail.com>: Feb 23 10:26PM +0100

On 23.02.2016 20:41, bartekltg wrote:
 
> Does that mean complex.h and ccomplex (I have pasted fragments)
> from new gcc are broken according to the standard?
 
No, it means I posted some rubbish, sorry. Once I realized it, after a
second or two, I posted a follow-up to clarify that, but Usenet is a bit
asynchronous.
 
It would have been a good posting about C++03, and about the change of
the general rules in C++11, except that C++03 didn't have any <ccomplex>
or <complex.h> headers, only the C++ <complex> header.
 
C++03 provided the following C compatibility headers, based on C89:
 
<assert.h> <iso646.h> <setjmp.h> <stdio.h> <wchar.h>
<ctype.h> <limits.h> <signal.h> <stdlib.h> <wctype.h>
<errno.h> <locale.h> <stdarg.h> <string.h>
<float.h> <math.h> <stddef.h> <time.h>
 
*But*, with C++11 the C library compatibility was upgraded to track the
C evolution, so it was now specified in terms of C99 instead of C89/90.
 
And in C99 a new keyword was introduced, `_Complex`, used to form type
specifications that are just syntactically invalid when interpreted as
C++. Also a new header <complex.h> was introduced in C99, using those
C99 complex number types. With declarations like
 
double complex cacos(double complex);
 
where C99 `complex` is a macro required to be defined as `_Complex`, and
where this macro would wreak havoc with use of C++03's `complex`.
 
So, C++11 instead offers binary level compatibility for its `complex`
class template, in its new §26.4/4 which you quoted.
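
Concretely, that paragraph guarantees that a `std::complex<T>` can be
viewed as an array of two `T` - a minimal sketch:

#include <complex>
#include <iostream>

int main()
{
    std::complex<double> z( 3.0, 4.0 );

    // Guaranteed by C++11 §26.4/4: reinterpret as T[2], where [0] is
    // the real part and [1] the imaginary part.
    double (&parts)[2] = reinterpret_cast<double(&)[2]>( z );

    std::cout << parts[0] << " + " << parts[1] << "i\n";
}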
 
Anyway, coffee helps a little, and thanks for pointing out the new C++11
headers <ccomplex> and <complex.h>!
 
Since I learned something today I can pride myself on still not being
entirely senile, yay! :)
 
 
Cheers, & thanks,
 
- Alf
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
