Tuesday, June 5, 2018

Digest for comp.lang.c++@googlegroups.com - 22 updates in 2 topics

legalize+jeeves@mail.xmission.com (Richard): Jun 04 07:45PM

[Please do not mail me a copy of your followup]
 
cross@spitfire.i.gajendra.net (Dan Cross) spake the secret code
 
>The bottom line: unit testing seems to sit at a local maxima on
>the cost/benefit curve [...]
 
Uh.... citation needed.
 
Seriously.
 
There are actual academic studies among programmers doing comparable
work with and without TDD. The ones using TDD reached completion
faster on average than those that didn't use TDD.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
scott@slp53.sl.home (Scott Lurndal): Jun 04 07:50PM


>There are actual academic studies among programmers doing comparable
>work with and without TDD. The ones using TDD reached completion
>faster on average than those that didn't use TDD.
 
Uh..... citation needed.
 
Seriously.
Juha Nieminen <nospam@thanks.invalid>: Jun 05 05:46AM

> Reducing encapsulation in the process. I have said before that TDD
> is the enemy of encapsulation and therefore by extension an enemy of
> good design as encapsulation is an important design principle.
 
Not really. Sometimes it can increase encapsulation and abstraction, in a
potentially positive way.
 
For example, you might have a class that takes a std::ostream or FILE*
directly. However, for TDD you usually need to replace that with a more
abstract custom wrapper class. Which in the end increases abstraction
and modularity, and might be beneficial if you ever need the class to
write its result to something other than a stream.
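 
A minimal sketch of what I mean (Writer, StreamWriter, RecordingWriter
and ReportGenerator are all made-up names, not from any particular
library):
 
#include <cassert>
#include <ostream>
#include <string>
#include <vector>
 
// Abstraction over "somewhere bytes can be written".
class Writer
{
 public:
    virtual ~Writer() = default;
    virtual void write(const std::string& data) = 0;
};
 
// Production implementation backed by a std::ostream.
class StreamWriter: public Writer
{
 public:
    explicit StreamWriter(std::ostream& os): mOs(os) {}
    void write(const std::string& data) override { mOs << data; }
 
 private:
    std::ostream& mOs;
};
 
// Test double that simply records what was written.
class RecordingWriter: public Writer
{
 public:
    void write(const std::string& data) override { mWritten.push_back(data); }
    const std::vector<std::string>& written() const { return mWritten; }
 
 private:
    std::vector<std::string> mWritten;
};
 
// The class under test depends on the abstraction, not on a stream.
class ReportGenerator
{
 public:
    explicit ReportGenerator(Writer& out): mOut(out) {}
    void generate() { mOut.write("report body\n"); }
 
 private:
    Writer& mOut;
};
 
int main()
{
    // A unit test constructs the recording double and inspects it;
    // production code would pass a StreamWriter wrapping std::cout
    // or a file stream instead.
    RecordingWriter recorder;
    ReportGenerator generator(recorder);
    generator.generate();
    assert(recorder.written().size() == 1);
}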
Juha Nieminen <nospam@thanks.invalid>: Jun 05 05:52AM

> required result occurred. If execution of a private member function has
> no testable consequences, then that private member function isn't
> actually needed.
 
The private implementation of a class may be quite extensive and complex,
and the whole idea of TDD is to write unit tests for every single small
component and function. If you have a test for something that involves
dozens of functions and data structures, it becomes much harder to
pinpoint exactly where the problem is when that test fails.
 
The whole idea of TDD is to make it more obvious where the bug might be,
which is why unit tests are written for every single function and
component.
 
As a hypothetical example, suppose you have a class that can write and
read JPEG files. It has only a minimal public interface to do that, and
pretty much nothing else, but the implementation inside is really huge.
So you write a unit test that writes a trivial image, and then tries to
read it again and compare the result, and it turns out it's different.
Great, you caught a bug. But where is it? The class might consist of
thousands of lines of code, and hundreds of functions and data
structures. None of which ought to be exposed to the outside.
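 
To make the situation concrete, here is a toy sketch (Image and
JpegCodec are invented names, and the encode/decode bodies are trivial
stand-ins so the example actually compiles; a real codec would hide
thousands of lines of private machinery behind the same interface):
 
#include <cassert>
#include <cstdint>
#include <vector>
 
struct Image
{
    int width;
    int height;
    std::vector<std::uint8_t> pixels; // grayscale, width*height bytes
};
 
class JpegCodec
{
 public:
    std::vector<std::uint8_t> encode(const Image& img)
    {
        // Stand-in for DCT, quantization, Huffman coding, ...
        std::vector<std::uint8_t> out
            { static_cast<std::uint8_t>(img.width),
              static_cast<std::uint8_t>(img.height) };
        out.insert(out.end(), img.pixels.begin(), img.pixels.end());
        return out;
    }
 
    Image decode(const std::vector<std::uint8_t>& bytes)
    {
        // Stand-in for the matching inverse pipeline.
        return Image{ bytes[0], bytes[1],
                      { bytes.begin() + 2, bytes.end() } };
    }
};
 
int main()
{
    // The only test the public interface permits: a round trip.
    // If it fails, it says nothing about which internal stage broke.
    JpegCodec codec;
    Image original{ 2, 2, { 0, 64, 128, 255 } };
    Image decoded = codec.decode(codec.encode(original));
    assert(decoded.width == original.width &&
           decoded.height == original.height &&
           decoded.pixels == original.pixels);
}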
Ian Collins <ian-news@hotmail.com>: Jun 05 06:44PM +1200

On 05/06/18 06:01, Dan Cross wrote:
> long-term maintenance). Tests can certainly perform some
> documentary function, but they fall far short of the sort of
> design that may be required for non-trivial software.
 
Agreed, non-trivial software tends to have layers of design, just like
it has layers of testing.
 
> poor way to communicate a description of the protocol sufficient
> for an implementer; I would argue it would be so opaque as to be
> insufficient.
 
Agreed.
 
> without special training; testing provides a nice balance as a
> platform for shaking a lot of bugs out of the tree. This is
> presumably why it is so popular.
 
Not just special training, but also more time. The sooner something is
tested after it is written, the sooner a defect can be rectified.
 
> of a unit test? Or perhaps I need some guarantee of time
> complexity for an algorithm as applied to some user-provided
> data. Unit testing is awkward at best for this purpose.
 
Agreed, unit tests test the functionality of an algorithm, not its
performance. They do aid the task of performance-driven internal
refactoring: you have the confidence to tune, having minimised the risk
of breaking the functionality.
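 
A trivial sketch of the idea (count_distinct() is a made-up example;
the point is that the tests pin the behaviour down while the body stays
free to change):
 
#include <algorithm>
#include <cassert>
#include <vector>
 
// Implementation we may later tune, e.g. swapping the sort for a
// hash-based approach, without touching the tests below.
int count_distinct(std::vector<int> values)
{
    std::sort(values.begin(), values.end());
    return static_cast<int>(
        std::unique(values.begin(), values.end()) - values.begin());
}
 
int main()
{
    // Functional tests: they say nothing about speed, but they let us
    // rewrite count_distinct() aggressively without fear of breaking it.
    assert(count_distinct({}) == 0);
    assert(count_distinct({1, 1, 1}) == 1);
    assert(count_distinct({3, 1, 2, 3, 1}) == 3);
}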
 
> the practice of TDD. The former is undeniably useful; but the
> latter is only useful as a means to the former and it is
> trivially not the only such means.
 
It is a means that makes unit tests more enjoyable to write. Most
programmers I know find writing tests after the fact tedious in the
extreme, especially so if the code has to be modified to facilitate
testing.
 
 
> Proponents of TDD in the Martin, Beck, Jeffries sense really do
> try and push it too far. I wrote about this some time ago:
> http://pub.gajendra.net/2012/09/stone_age
 
One thing I have learned since picking up TDD from Beck in the early
2000s is to be pragmatic! Pushing too hard in a new environment is a
shortcut to failure.
 
> that they do not believe it themselves, and are only saying so
> because they are (most commonly) consultants and authors who are
> trying to sell engagements and books.
 
I'll have to take your word on that; the agile process I most closely
follow, XP, certainly does include acceptance testing. I've never
released a product without it.
 
> this that's unsettling: it's the sort of fervent devotion of the
> converted, who claim that, "if you just genuflected THIS way..."
> then all would be well.
 
I'm averse to cults...
 
> quality results. However, unit testing is insufficient to
> ensure that software is correct and responsive in all respects
> and TDD, in particular, seems like something of an oversold fad.
 
That part I strongly dispute.
 
Unit testing offers significant cost benefits, mainly from the time saved in
identifying and fixing defects. Fads don't last 15+ years...
 
--
Ian.
cross@spitfire.i.gajendra.net (Dan Cross): Jun 05 02:19PM

In article <fnmpqhF80svU5@mid.individual.net>,
 
>That part I strongly dispute.
 
>Unit testing offers significant cost benefits, mainly from the time saved in
>identifying and fixing defects. Fads don't last 15+ years...
 
Hmm, this is the second time that someone's commented on this
part of my post in a way that makes it sound as if I'm saying
that *unit testing* is not worthwhile.
 
On re-reading my post, it seems what I wrote about *unit
testing* is backwards of what I meant: in particular, when I
wrote that "unit testing seems to sit at a local maxima on the
cost/benefit curve", what I probably should have written,
"benefit/cost curve." That is, unit testing seems to give you
the largest benefit for the least cost in some local part of
the curve that sits very near where most programmers work.
 
My apologies if that wasn't clear.
 
That said, I have yet to see a strong argument that TDD is the
most efficient way to get a robust body of tests.
 
On a side note, I'm curious what your thoughts are on this:
https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf
 
- Dan C.
cross@spitfire.i.gajendra.net (Dan Cross): Jun 05 02:23PM

In article <b1b6d2f3-f4cd-4a66-89c8-a183e2067e02@googlegroups.com>,
>of communicating with him on various usenet groups since his C++ Report
>days, I don't believe that he would express an opinion about any matter that
>he didn't actually hold.
 
I don't know him, save for having read several of his books. I
trust your judgement of the man, though I disagree with him on
many technical points. The few times I've sent comments his way
(all of technical things) he's not responded.
 
- Dan C.
legalize+jeeves@mail.xmission.com (Richard): Jun 05 04:25PM

[Please do not mail me a copy of your followup]
 
Ian Collins <ian-news@hotmail.com> spake the secret code
 
>That part I strongly dispute.
 
>Unit testing offers significant cost benefits, mainly from the time saved in
>identifying and fixing defects. Fads don't last 15+ years...
 
Indeed.
 
I don't know if Uncle Bob's strident position on TDD brings over
more people than it alienates. In my personal observation, it does
both, and which outcome you get seems to depend more on the person
reacting to Uncle Bob's approach than on anything else. Speaking
for myself, I simply state the benefits that I personally have
obtained from TDD and recommend it for those reasons. For many years
(20+) I programmed without TDD and was good at it. In the years since
adopting TDD (10+), I've found myself more productive than I was
without TDD.
 
In the end, it's simply a recommendation. Any reader is free to
ignore it if they wish. If they want to learn how to do TDD
effectively, I'm happy to show them in workshops, etc.
 
What has consistently surprised me over the years since adopting TDD
is the number of people who feel compelled to erect straw man
arguments against TDD in order to "prove" that it is a waste of time.
 
In reading through some of the links on this thread, I did find it
incredibly amusing to read how Ron Jeffries blundered through a Sudoku
solver and managed to get sidetracked on a tangent without ever
producing a finished solver. I've bumped into him once or twice on
some mailing lists and found that I didn't really agree with him on
things and so I wasn't surprised he floundered. Maybe that is just my
own confirmation bias, or a little schadenfreude on my part.
 
I don't take this example as a refutation of TDD, simply as more data
that Ron Jeffries can be an idiot sometimes.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
legalize+jeeves@mail.xmission.com (Richard): Jun 05 04:27PM

[Please do not mail me a copy of your followup]
 
cross@spitfire.i.gajendra.net (Dan Cross) spake the secret code
>wrote that "unit testing seems to sit at a local maxima on the
>cost/benefit curve", what I probably should have written,
>"benefit/cost curve."
 
You do realize that if you invert the ratio, it completely reverses
the original point?
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Rosario19 <Ros@invalid.invalid>: Jun 05 08:18PM +0200

On Sun, 3 Jun 2018 08:22:31 +1200, Ian Collins wrote:
 
 
>True.
 
>https://www.thinksys.com/development/tdd-myths-misconceptions/
 
>Point 3.
 
I'm nobody and I haven't written anything great, etc., but I think
defensive programming is the way to go: write every function full of
checks, even for things that should never happen.
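 
For example, a small sketch of that style (the average() function and
all of its checks are invented purely for illustration):
 
#include <cstdio>
#include <stdexcept>
#include <vector>
 
// Defensive style: check every argument and every intermediate result,
// even conditions that "can never happen".
double average(const std::vector<double>& values)
{
    if (values.empty())
        throw std::invalid_argument("average: empty input");
 
    double sum = 0.0;
    for (double v : values)
    {
        if (v != v) // NaN check, "should never happen"
            throw std::invalid_argument("average: NaN in input");
        sum += v;
    }
 
    double result = sum / values.size();
    if (result != result) // paranoid post-condition check
        throw std::logic_error("average: result is NaN");
    return result;
}
 
int main()
{
    std::printf("%f\n", average({1.0, 2.0, 3.0}));
}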
cross@spitfire.i.gajendra.net (Dan Cross): Jun 05 06:14PM

>>"benefit/cost curve."
 
>You do realize that if you invert the ratio, it completely reverses
>the original point?
 
Yes, hence my clarifying response. People make such mistakes in
this kind of informal communication all the time, and given
the context of the rest of my post (and even that paragraph),
I'm a bit surprised that you and Ian didn't pick up on the
intent; others seemed to grok it and how one interprets it is,
after all, a sign convention.
 
But that's neither here nor there.
 
To put it more succinctly: unit testing is good, as it's a low
cost way to get relatively high quality results. But it has its
limitations and shouldn't be regarded as more than one tool out
of a box of many. The value of TDD is far more subjective and
largely unrelated to the value of unit testing, though the two
are often incorrectly conflated. Many of the most strident
proponents of TDD overstate its usefulness and attempt to jam it
into situations it is not well suited for: it is no substitute
for design or algorithm analysis, both of which it is (in my
opinion) poorly suited for.
 
- Dan C.
Rosario19 <Ros@invalid.invalid>: Jun 05 08:22PM +0200

On Tue, 05 Jun 2018 20:18:41 +0200, Rosario19 wrote:
 
>defensive programming is the way to go: write every function full of
>checks, even for things that should never happen.
 
What I do now amounts to two-phase programming:
 
1) write everything as I think it should be, with no testing beyond
compiling the code (the code checks itself to see whether its arguments
are all OK, plus some internal checks)
 
2) debug the code with a debugger and adjust everything until it seems
to run OK
James Kuyper <jameskuyper@alumni.caltech.edu>: Jun 05 02:36PM -0400

On 06/05/2018 01:52 AM, Juha Nieminen wrote:
> component and function. If you have a test for something that involves
> dozens of functions and data structures, it makes it more difficult to
> find where exactly the problem might be, in the case of a failed test.
 
Tests are for determining whether requirements are being met. Use a
debugger when the goal is to determine why a requirement is not being
met, and private member functions are no barrier to using a debugger for
that purpose.
 
> The whole idea of TDD is to make it more obvious where the bug might be,
> which is why unit tests are written for every single function and
> component.
 
I don't know enough about TDD to address that claim; I was only
addressing the claim that you have to violate the privacy of private
member functions to test whether code violates its requirements.
legalize+jeeves@mail.xmission.com (Richard): Jun 05 07:47PM

[Please do not mail me a copy of your followup]
 
cross@spitfire.i.gajendra.net (Dan Cross) spake the secret code
>cost way to get relatively high quality results. But it has its
>limitations and shouldn't be regarded as more than one tool out
>of a box of many.
 
I can sign up to that; we're in agreement here.
 
>The value of TDD is far more subjective and
>largely unrelated to the value of unit testing, though the two
>are often incorrectly conflated.
 
I'd agree with that too. I would add that unit testing falls out
naturally from TDD, while my other attempts to write unit tests
resulted in really fragile tests and other things that left me feeling
like the unit test was just a "tax" on development without providing
me any benefit.
 
>into situations it is not well suited for: it is no substitute
>for design or algorithm analysis, both of which it is (in my
>opinion) poorly suited for.
 
Can you show me a TDD proponent that *is* saying it is a substitute
for design or algorithm analysis?
 
I've been around many TDD proponents and none of them have ever said
this.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
legalize+jeeves@mail.xmission.com (Richard): Jun 05 07:49PM

[Please do not mail me a copy of your followup]
 
James Kuyper <jameskuyper@alumni.caltech.edu> spake the secret code
 
>I don't know enough about TDD to address that claim, I was only
>addressing the claim that you have to violate the privacy of private
>member functions to test whether code violates it's requirements.
 
The general idea around TDD making it more obvious where a bug is
located is that in each cycle you start from a known working
implementation -- to the extent that it is responding to existing
tests. When you write a new test and make that pass, the amount of
code that has changed in your implementation is generally very small,
so pinpointing the location of any newly introduced mistake is fairly
trivial.
 
This is the main benefit I get from TDD; the time between making the
bug and finding/fixing the bug is reduced to seconds instead of days,
weeks, months, years, or possibly never.
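 
A toy illustration of those cycles (is_palindrome() and the hand-rolled
CHECK macro are just placeholders for whatever code and test framework
you actually use):
 
#include <cstddef>
#include <cstdio>
#include <string>
 
// Tiny stand-in for a real unit test framework.
#define CHECK(expr) \
    std::printf("%s: %s\n", (expr) ? "PASS" : "FAIL", #expr)
 
// Code under test, grown one small step per red/green cycle.
bool is_palindrome(const std::string& s)
{
    for (std::size_t i = 0, j = s.size(); i < j; ++i, --j)
        if (s[i] != s[j - 1])
            return false;
    return true;
}
 
int main()
{
    CHECK(is_palindrome(""));     // earlier cycles, already green
    CHECK(is_palindrome("a"));
    CHECK(is_palindrome("abba"));
    CHECK(!is_palindrome("abc")); // newest test: if it fails, the
                                  // culprit is the last small change
}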
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Vir Campestris <vir.campestris@invalid.invalid>: Jun 05 11:02PM +0100

On 05/06/2018 20:49, Richard wrote:
> This is the main benefit I get from TDD; the time between making the
> bug and finding/fixing the bug is reduced to seconds instead of days,
> weeks, months, years, or possibly never.
 
I appreciate the value of unit tests. Where I disagree with Ron
Jeffries et al. is the turn-it-up-to-11 Extreme Programming thing -
where you have to have absolute faith that your units tests will catch
every bug.
 
Which I do not believe is possible, especially for timing problems (HW
interfaces and multi-threading).
 
I'm looking at a bug at the moment that fails one time in three.
 
I once had a bug that hung around for 18 months. Once we understood it
we could set up the correct environment, and it would fail most days.
Not every day though. No reasonable unit test can be run that long.
 
Andy
--
If you're interested: Code was a disc interface. It went:
 
Main code:
Start transfer;
Start timer;
Wait for completion.
 
Disc interrupt routine:
stop timer;
wake main code.
 
Timer expiry routine:
Set error flag.
 
The error path was to interrupt the main code on a scheduler interrupt
after starting the transfer, so that it did not get control back until
the IO had completed. The timer was then started after it should have
been stopped, and left running. If there were no other IOs to reset the
timer before it expired, the error flag was set - and the _next_ transfer
failed. And the fix was to start the timer before the transfer. Darn,
that was 30 years ago and I still recall it clearly!
legalize+jeeves@mail.xmission.com (Richard): Jun 05 10:24PM

[Please do not mail me a copy of your followup]
 
Vir Campestris <vir.campestris@invalid.invalid> spake the secret code
>Jeffries et al. is the turn-it-up-to-11 Extreme Programming thing -
>where you have to have absolute faith that your units tests will catch
>every bug.
 
In every incarnation of XP that I've worked in/discussed with colleagues,
unit testing was not the only level of automated testing. There were
also acceptance tests at a minimum and usually some sort of performance
or resource consumption oriented test. Human-driven exploratory testing
would also be used for things like usability. Leave the repetitive
stuff to the computer and allow your human testers to test the stuff
that computers aren't good at.
 
I've never seen a presentation by an agile advocate that asserted that
unit tests alone were enough. "Growing Object-Oriented Software,
Guided by Tests" by Steve Freeman and Nat Price <https://amzn.to/2JnRruf>
makes it quite clear through all the worked examples that you need
acceptance tests to push "from above" and unit tests to push "from below",
to squish bugs out in the middle. This is one problem with the small
examples of TDD that are used to demonstrate the practice -- they tend to
be so small that only unit tests are shown and no discussion of acceptance
tests is brought into the picture, leaving one with the false impression
that TDD is only about unit tests. If you haven't read the above book,
I would recommend it.
 
>Which I do not believe is possible, especially for timing problems (HW
>interfaces and multi-threading).
 
Timing and synchronization stuff is notoriously hard to test; it
exhibits the "heisenbug" quality in automated test suites and it too
tedious and boring to probe manually.
 
>I once had a bug that hung around for 18 months. Once we understood it
>we could set up the correct environment, and it would fail most days.
>Not every day though. No reasonable unit test can be run that long.
 
Agreed. This would fall under the category of "system test" at places
where I've worked.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
Lynn McGuire <lynnmcguire5@gmail.com>: Jun 04 04:13PM -0500

On 6/1/2018 8:11 AM, Scott Lurndal wrote:
...
>> Only few rocket scientists had enough storage for comments so most
>> code was write-only.
 
> Complete nonsense.
 
We used the UCS Univac 1108 for software development until 1978 when we
got a Prime 450. We could barely afford to keep the code online on
those old drum drives, much less the comments. So, no comments in our
source code until 1978. And those comments sucked.
 
I would love for our code from back then to have four letter variable
names. We generally have:
T = temperature
P = pressure
V = vapor
L = liquid
B = hydrocarbon liquid
I = integer iterator
 
Lynn
David Brown <david.brown@hesbynett.no>: Jun 05 09:35AM +0200

On 04/06/18 22:49, Scott Lurndal wrote:
> avoid function calls in the 70's due to the perceived expense. Nor
> were there pipelines or, in general, software-visible caches in the
> 70's.
 
I had very limited computing experience in the 70's, certainly not of
big computers.
 
 
> Functions were not at all uncommon in Fortran, COBOL or the
> many algol and algol-like languages in that era, even BASIC
> heavily used functions (GOSUB).
 
Oh, certainly functions are heavily used, and have been since the
invention of the callable subroutine. Just because they can, in some
circumstances, be costly does not mean that they are not worth the cost.
(And they are not always slower than inlining - the benefits of cache,
pre-fetches, branch caches, etc., combined with hardware scheduling can
mean that even having lots of small functions can be faster in some cases.)
Jorgen Grahn <grahn+nntp@snipabacken.se>: Jun 05 08:18AM

On Mon, 2018-06-04, Scott Lurndal wrote:
 
>>The branch instruction is a small part of the cost.
 
> Aside from any cache related costs (not a problem in the 70's), what
> additional costs are there in your concept of 70's functions?
 
I was thinking about saving and restoring registers, and inability to
optimize across function boundaries.
 
But to be honest I thought more of the 1980s and the Motorola 68000.
I never meant to claim to know anything about what programmers did in
the 1970s.
 
/Jorgen
 
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
scott@slp53.sl.home (Scott Lurndal): Jun 05 02:00PM

>> additional costs are there in your concept of 70's functions?
 
>I was thinking about saving and restoring registers, and inability to
>optimize across function boundaries.
 
Many of the machines of that era were memory-to-memory architectures
with only an accumulator and a couple of index registers or auto-increment
locations in low memory (e.g. the PDP-8 and Burroughs mainframes), or
were stack-based (Burroughs large systems, HP-3000).
 
In many cases, functions weren't re-entrant (e.g. the PDP-8 again, because
the return address is stored in the first word of the function, or the
Burroughs systems where system call parameters were in the code stream
following the system call instruction).
Vir Campestris <vir.campestris@invalid.invalid>: Jun 05 09:41PM +0100

On 04/06/2018 21:46, Scott Lurndal wrote:
>> twice as fast as a single CPU machine.
 
> A rare machine indeed, with both cache and SMP. The B2900 (Burroughs)
> had neither.
 
Ah, I meant an ICL 2900. I didn't know about the Burroughs one.
 
Andy
You received this digest because you're subscribed to updates for this group. You can change your settings on the group membership page.
To unsubscribe from this group and stop receiving emails from it send an email to comp.lang.c+++unsubscribe@googlegroups.com.
