- Unit testing (Re: Putting project code in many different files) - 12 Updates
- TDD considered harmful - 9 Updates
- Name for a kind of function - 2 Updates
- 'int' considered dangerous (reprise) - 2 Updates
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 10:39AM -0500 On 1/31/2016 11:38 PM, Ian Collins wrote: > possible, if they take too long to run, you will back off running them. I > develop with "make" = "make test", so my unit tests are run for every > code change. I realize the difference, all right. What you don't recognize is the importance of independent testing. No, you don't want unit tests to stress test your development system. That's why you have a test group and test system(s). Additionally, the test group can test under many more different conditions. Your example of an out of memory exception is good. The developer is responsible for making something work and may not even consider that possibility, not code for it and not test it. But the test group is responsible for trying to break things, and will look for ways to make it fail. They will often test for things the developer never considered. >> systems. Nothing about embedded systems has been mentioned until you >> brought it up. But I've seen you're good at trying to change the topic. > That's a poor strawman, there was no change in topic. I'm not the one trying to change the topic. No one mentioned embedded systems until you did. > "especially if you are unit testing embedded code" not "only if you are > unit testing embedded code". Embedded code is simply an extreme example > of something you wouldn't unit test on target. Embedded code is something I definitely WOULD test on target. And it would be the responsibility of the test group to do the testing. > From the perspective of software other than its driver, a device is a > class to other code. Device driver tests use mock hardware, the code that > uses the driver class mocks the driver class and so on up the stack. A hardware device is nothing like a class. The device has specific responses for specific commands, including error conditions. These are well defined by the manufacturer. I write device drivers (working on some code which includes one right now). Device driver tests use real hardware. I do mock the hardware driver when working on code using the device; for instance, the current code I am working on is ARM based and fetches two bytes from a specified sensor on request. This is easy to mock because the interface is quite simple, and allows me to do some development on my laptop. But testing is still done on the device, by the test group. And the real device driver is written and tested on the device. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
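[Editor's illustration: a minimal sketch of the kind of mock described above - a two-byte sensor read stubbed out so the calling code can be developed on a laptop. The interface, class names and sensor address are invented for the example, not taken from anyone's actual code.

#include <cstdint>
#include <utility>

// Abstract interface for the sensor access; hypothetical names, for illustration only.
struct ISensor {
    virtual ~ISensor() = default;
    // Fetch two bytes from the sensor at the given (made-up) address.
    virtual std::pair<std::uint8_t, std::uint8_t> readTwoBytes(std::uint8_t address) = 0;
};

// Host-side mock: returns canned values so higher-level code can be developed and
// exercised on a laptop. The real driver would talk to the hardware on target.
class MockSensor : public ISensor {
public:
    std::pair<std::uint8_t, std::uint8_t> readTwoBytes(std::uint8_t /*address*/) override {
        return { 0x12, 0x34 };   // canned reading
    }
};

// Code under development depends only on the interface, so it runs unchanged
// against MockSensor on the host and against the real driver on the device.
std::uint16_t readSensorRaw(ISensor& sensor) {
    auto [msb, lsb] = sensor.readTwoBytes(0x48);   // 0x48: hypothetical sensor address
    return static_cast<std::uint16_t>((msb << 8) | lsb);
}

As noted in the post, the real driver itself still has to be written and tested against the actual device.]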
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 10:50AM -0500 On 2/1/2016 2:52 AM, Paavo Helde wrote: > example, if after some unrelated change 10 tests start to fail, among > them the tests for A and B, then it makes sense to start with B and its > test as B is the simplest class and likely the core of the issue. Having to trace through multiple classes is a waste of good programmer (or test group) time. The faster the problem can be identified and fixed, the more productive the programmers are. But unrelated changes should not affect other units. If they do, the changes are not unrelated. You have to determine what the relationship is. It might have nothing to do with B. > The code is changing every day because of the changing requirements, > changing priorities, changing environment, etc. You cannot design > everything up-front. Yes, you can. And in fact, there is a huge advantage to doing it. When the requirements change (and yes, they will), you can look at the design documentation and know exactly what will be affected. You can then concentrate on those areas. It means fewer bugs because you don't miss something, and less wasted time because you aren't looking at things which don't need to be changed. > change, adapt and refactor the code base without the fear of introducing > new bugs. Ideally, the codebase would become wax instead of layers of > petrified sediments. It is very easy to reshape wax. But every time you change code, you introduce the possibility of one or more errors. Additionally, the design documentation provides the information for the test group to create the tests, and when the documentation changes, the test group knows what changes need to be made to the tests. > Agreed - if you can get the design 100% right before starting the coding > it would be perfect. However, in my experience this rarely happens, and > even then there would be later changes because of unforeseeable factors. You can get the design right, but it takes practice. You have to know the right questions to ask to get the detailed specifications necessary to create the design. And more often than not, you have to train the customer in how to define the specifications. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 10:59AM -0500 On 1/31/2016 11:48 PM, Ian Collins wrote: > You can (and thousands of us do) because it *pretends* to implement the > specification. I suggest you do some research on mock objects and how > they are used. I don't need to do any research on it. I've seen it tried. All it has done is waste time and unnecessarily create bugs because the mock classes cannot, by definition, completely follow the specification. Your claim there are thousands of you doing it wrong is immaterial. >> unless you process that file. > Quite - because you aren't testing it! You are testing the code that > processes the resulting object. And you can't test the resulting object because a mock class can never duplicate the behavior of the real class. It can only do some of it. And that means potential problems are missed. >> possible results. Such a process is always prone to errors. > You've lost the plot, *we are not testing the file processing*. We are > testing the processing of the result of the file processing. And you can't be sure the output you get from the mock class is the same as what real files will provide in all cases. Only in some cases. And since other cases can't be tested, you leave the possibility of bugs (unnecessarily). Let's go back to your memory problem. What if processing the input file causes an out of memory condition? Does your mock class emulate that? Or what if data is missing, out of order, or not what was expected? Or the file is truncated? Or any of a number of things a proper test suite would check? > Indeed it does, but not unit testing something that has no interest in > how those files are processed. > I've said it before but it bears repeating: layered testing. Yes, you have. Now if you'd only learn how to do it correctly. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 11:02AM -0500 On 2/1/2016 5:31 AM, David Brown wrote: > practically possible (using TDD, unit tests, automated test benches, > simulations, whatever) to ensure there are no known bugs and their tests > are as complete as possible. Too bad. The systems I've worked on pass code onto the test group quite early and regularly. Developers' time is too valuable to be spent testing code. That's not to say the developers never do any tests on their code; just minimal - enough to see that it does what they expected, anyway. Then the code goes to the test group, whose responsibility it is to make it fail. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Paavo Helde <myfirstname@osa.pri.ee>: Feb 01 06:14PM +0200 On 1.02.2016 17:50, Jerry Stuckle wrote: > But every time you change code, you introduce the possibility of one or > more errors. If you have a proper test suite it will catch all such errors. Note that if you are afraid of changing code because it might break something, then you cannot do code refactoring and code redesign. This means that given enough time the codebase will turn unmanageable, regardless of how good the design and specifications are. |
Geoff <geoff@invalid.invalid>: Feb 01 10:03AM -0800 On Sun, 31 Jan 2016 23:16:32 -0500, Jerry Stuckle >recalls, either - and are quite stringent in their testing. >Neither are any of the other companies I have consulted for - both as a >developer and a project manager. That must be why IBM software is so reliable and never needs bug fixes. :) |
"Öö Tiib" <ootiib@hot.ee>: Feb 01 10:53AM -0800 On Monday, 1 February 2016 20:03:54 UTC+2, Geoff wrote: > >developer and a project manager. > That must be why IBM software is so reliable and never needs bug > fixes. :) Somehow when IBM and software are in same sentence then only horror comes to mind ... Rational Rose, ClearQuest, Lotus Notes, WebSphere ... crazy stuff done with apparently zero self-awareness whatsoever. |
Ian Collins <ian-news@hotmail.com>: Feb 02 08:30AM +1300 Jerry Stuckle wrote: >> code change. > I realize the difference, all right. What you don't recognize is the > importance of independent testing. So you claim, but you continue (or is it choose?) to display a basic ignorance of the practice. > No, you don't want unit tests to stress test your development system. > That's why you have a test group and test system(s). Additionally, the > test group can test under many more different conditions. Where did I say otherwise? Exact quote please. > possibility, not code for it and not test it. But the test group is > responsible for trying to break things, and will look for ways to make > it fail. They will often test for things the developer never considered. Where did I say otherwise? Exact quote please. >> That's a poor strawman, there was no change in topic. > I'm not the one trying to change the topic. No one mentioned embedded > systems until you did. Please explain how mentioning embedded systems changed the topic. >> of something you wouldn't unit test on target. > Embedded code is something I definitely WOULD test on target. And it > would be the responsibility of the test group to do the testing. Here again you choose to ignore the fact that there is more than one level of testing. It's clear that you didn't do the homework. >> class to other code. Device driver tests use mock hardware, the code that >> uses the driver class mocks the driver class and so on up the stack. > A hardware device is nothing like a class. You appear to have trouble with your reading comprehension, so I'll say it again: "From the perspective of software other than its driver, a device is a class to other code". There, is that clear now? > The device has specific > responses for specific commands, including error conditions. Do you write code that has indeterminate responses for specific requests? I hope not. > These are > well defined by the manufacturer. Do you write code without a specification? No, you have requirements that are well defined by the customer. > bytes from a specified sensor on request. This is easy to mock because > the interface is quite simple, and allows me to do some development on > my laptop. Ah, so it is simply a class to other code. You appear to be contradicting yourself.... > But testing is still done on the device, by the test group. > And the real device driver is written and tested on the device. ...as well as continuing the 80s practice of lobbing code over the wall to someone else to test. -- Ian Collins |
Ian Collins <ian-news@hotmail.com>: Feb 02 08:44AM +1300 Jerry Stuckle wrote: > I don't need to do any research on it. I've seen it tried. All it has > done is waste time and unnecessarily create bugs because the mock > classes cannot, by definition, completely follow the specification. So you've tried it, done it wrong and given up. Too bad. > And you can't test the resulting object because a mock class never can > duplicate the behavior of the real class. It can only do some of it. > And that means potential problems are missed. I can't? Bugger, what have I been doing all these years? Of course I can. > as what real files will provide in all cases. Only in some cases. And > since other cases can't be tested, you leave the possibility of bugs > (unnecessarily). Can't I? The transformation will be well defined and will be tested in the real class's tests. Ever heard of a contract? > Let's go back to your memory problem. What if processing the input file > causes an out of memory condition? Does your mock class emulate that? Yes, I would simply write something like: test::FOO_decode::willThrow( std::bad_alloc ); before calling the code that will call the decoder. > Or what if data is missing, out of order, or not what was expected? Or > the file is truncated? Or any of a number of things a proper test suite > would check? It will - in the tests for the real object. >> how those files are processed. >> I've said it before but it bears repeating: layered testing. > Yes, you have. Now if you'd only learn how to do it correctly. I'm lucky in that I already do. You don't appear to have got beyond understanding what it is. -- Ian Collins |
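[Editor's illustration: a minimal hand-rolled equivalent of the willThrow() idea mentioned above - a mock that can be armed to throw std::bad_alloc on its next call, so the caller's out-of-memory handling can be unit tested without exhausting real memory. The interface and names are invented for the example; the actual mocking framework's syntax will differ.

#include <cassert>
#include <new>
#include <string>

// Hypothetical decoder interface used by the code under test.
struct IDecoder {
    virtual ~IDecoder() = default;
    virtual std::string decode(const std::string& raw) = 0;
};

// Hand-rolled mock: can be told to throw on the next call.
class MockDecoder : public IDecoder {
public:
    void willThrowBadAlloc() { throwNext_ = true; }
    std::string decode(const std::string& raw) override {
        if (throwNext_) { throwNext_ = false; throw std::bad_alloc{}; }
        return "decoded:" + raw;
    }
private:
    bool throwNext_ = false;
};

// Code under test: must turn allocation failure into a clean error result.
bool processRecord(IDecoder& decoder, const std::string& raw) {
    try {
        decoder.decode(raw);
        return true;
    } catch (const std::bad_alloc&) {
        return false;   // degrade gracefully instead of crashing
    }
}

int main() {
    MockDecoder mock;
    assert(processRecord(mock, "ok"));      // normal path
    mock.willThrowBadAlloc();
    assert(!processRecord(mock, "boom"));   // out-of-memory path exercised deterministically
}
]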
Ian Collins <ian-news@hotmail.com>: Feb 02 08:45AM +1300 Jerry Stuckle wrote: > recalls, either - and are quite stringent in their testing. > Neither are any of the other companies I have consulted for - both as a > developer and a project manager. Ah-ha, that explains it: in this part of the world, we would never let a project manager anywhere near our code! -- Ian Collins |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 02:53PM -0500 On 2/1/2016 2:30 PM, Ian Collins wrote: >> importance of independent testing. > So you claim, but you continue (or is it choose?) to display a basic > ignorance of the practice. Not at all. >> That's why you have a test group and test system(s). Additionally, the >> test group can test under many more different conditions. > Where did I say otherwise? Exact quote please. You said you wouldn't stress test code on a development system. Look back and see. >> responsible for trying to break things, and will look for ways to make >> it fail. They will often test for things the developer never considered. > Where did I say otherwise? Exact quote please. You said you can't test for out of memory on a development system. Look back and see. >> I'm not the one trying to change the topic. No one mentioned embedded >> systems until you did. > Please explain how mentioning embedded systems changed the topic. No one was talking about embedded systems before you brought it up. Look back and see. >> would be the responsibility of the test group to do the testing. > Here again you choose to ignore the fact that there is more than one > level of testing. It's clear that you didn't do the homework. Oh, yes, I've done the homework. And I've seen the results of your type of testing. A couple of times I've taken over failing projects because the developers were doing the testing instead of writing the code. Projects that were late and over budget. > You appear to have trouble with your reading comprehension, so I'll > say it again: "From the perspective of software other than its driver, a > device is a class to other code". There, is that clear now? Yes, it is perfectly clear you have no idea what you're talking about. >> responses for specific commands, including error conditions. > Do you write code that has indeterminate responses for specific > requests? I hope not. No, I don't. I write code that works. >> well defined by the manufacturer. > Do you write code without a specification? No, you have requirements > that are well defined by the customer. Nope. But I don't try to mock classes, either. It just leads to more work and more bugs. >> my laptop. > Ah, so it is simply a class to other code. You appear to be contradicting > yourself.... Nope. Not at all. But you obviously don't understand the difference between hardware and software. >> And the real device driver is written and tested on the device. > ...as well as continuing the 80s practice of lobbing code over the wall > to someone else to test. Yep, sending the code to the people who are experts in testing and leaving the development to those who are experts in developing. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 02:55PM -0500 On 2/1/2016 11:14 AM, Paavo Helde wrote: > something, then you cannot do code refactoring and code redesign. This > means that given enough time the codebase will turn unmanageable, > regardless of how good the design and specifications. If you do, yes. However, that's more work to correct the error and test again. Wasted time for developers and testers. I'm not saying you don't change or refactor code. I'm saying you shouldn't do it needlessly because you didn't test independent classes before dependent ones. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
"Öö Tiib" <ootiib@hot.ee>: Feb 01 07:14AM -0800 On Sunday, 31 January 2016 23:23:37 UTC+2, Mr Flibble wrote: > Python and JavaScript are both toy. Nah ... asm.js is rather impressive already ... even in its current alpha stage. |
Paavo Helde <myfirstname@osa.pri.ee>: Feb 01 05:24PM +0200 On 1.02.2016 16:22, Cholo Lennon wrote: > The iteration process was long because the pattern was changing. It was > your first time, the teacher said; future problems will be solved more > quickly. Seems like that was a wrong task for TDD. If the task is simple and you can foresee all the nuances beforehand, then there is no need to write any tests first. Obviously this depends both on the task and on the person. TDD works best when you need to develop a complex feature or to add a new feature to an existing complex system where you are not able to see all the hairy details in one glance. Working in gradual steps (first get the simplest test to pass, then the next, then the next) provides an approach to tackle such tasks. Cheers Paavo |
Paavo Helde <myfirstname@osa.pri.ee>: Feb 01 05:30PM +0200 > Design first, implement second and unit test third. I fully agree. However, this has nothing to do with the presence or absence of TDD. TDD is used mostly for the implementation step and as a bonus it will often produce initial material for unit tests as well. |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 11:06AM -0500 > Designing software through trial and error by fixing failing "tests" rather than by applying some intelligence and thinking about abstractions, interfaces, class hierarchies and object hierarchies is, quite frankly, both absurd and possibly even harmful and I am amazed that you cannot see this. > TDD is the totally wrong approach to software development. Design first, implement second and unit test third. +1 -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 01 11:10AM -0500 On 2/1/2016 9:22 AM, Cholo Lennon wrote: > IMO most of the time thinking in advance, solves the problem more > quickly and with a better design. > Regards Yup, some temporary agencies love this approach. They can milk the contract for more money. -- ================== Remove the "x" from my email address Jerry Stuckle jstucklex@attglobal.net ================== |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 01 04:58PM On 01/02/2016 15:24, Paavo Helde wrote: > all the hairy details in one glance. Working in gradual steps (first get > the simplest test to pass, then the next, then the next) provides an > approach to tackle such tasks. Did you even bother reading his second paragraph (snipped)? You've been drinking too much TDD koolaid mate. /Flibble |
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 01 05:08PM On 01/02/2016 07:01, Robert Wessel wrote: > tests for private functions. Ugly? A bit, sure. And yes, it > requires the maintenance of enough discipline so that people don't > write TestAbc classes just to poke around where they oughtn't. Sounds horrid. Private methods are NOT supposed to be directly testable as they can break class invariants: any unit test should only affect a class in a way that doesn't break the class invariant which means only testing public methods but this goes back to my original point: TDD only considers the creation of a public method to pass/fail a test. TDD is at odds with proper thinking of how to design and implement with a high degree of quality which should involve creating directly untestable private methods and KEEPING PUBLIC METHODS TO A MINIMUM to increase encapsulation. I repeat my original point: TDD is NOT about keeping public methods to a minimum; it is about creating lots of individually testable public methods hence it is the enemy of encapsulation and good design. /Flibble |
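[Editor's illustration: a small, invented example of the point about invariants - the invariant is maintained entirely by a private helper, and the checks in main() go only through the public interface, so no test can put the object into a state that violates the invariant. Names are made up for the sketch, not taken from anyone's code.

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Invariant: values_ is always sorted ascending. Kept by a private helper.
class SortedBag {
public:
    void insert(int v) {
        values_.insert(values_.begin() + position(v), v);   // helper preserves the invariant
    }
    bool contains(int v) const {
        std::size_t i = position(v);
        return i < values_.size() && values_[i] == v;
    }
private:
    // Private helper: index of the first element not less than v.
    // Never tested directly; its correctness shows up through insert()/contains().
    std::size_t position(int v) const {
        return static_cast<std::size_t>(
            std::lower_bound(values_.begin(), values_.end(), v) - values_.begin());
    }
    std::vector<int> values_;   // sorted at all times
};

int main() {
    SortedBag bag;
    bag.insert(3); bag.insert(1); bag.insert(2);
    assert(bag.contains(2));
    assert(!bag.contains(4));   // behaviour verified only via the public methods
}
]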
Wouter van Ooijen <wouter@voti.nl>: Feb 01 06:11PM +0100 > Designing software through trial and error by fixing failing "tests" rather than by applying some intelligence and thinking about abstractions, interfaces, class hierarchies and object hierarchies is, quite frankly, both absurd and possibly even harmful and I am amazed that you cannot see this. > TDD is the totally wrong approach to software development. Design first, implement second and unit test third. I see some sense in Test-Driven-Bug-Fixing, but I agree that Test-Driven-Design is madness. Maybe people actually use Test-Driven-Coding, which might work, but IME the design is the real work, coding is trivial, so I don't care much for any formalised way of coding. Wouter van Ooijen |
Ian Collins <ian-news@hotmail.com>: Feb 02 08:11AM +1300 > amazed that you cannot see this. > TDD is the totally wrong approach to software development. Design > first, implement second and unit test third. I see you have chosen to lose the context so that you can conveniently ignore the fact that the source of your beloved SOLID practices is also a strong advocate of TDD! There's an admission of defeat if ever I saw one... TDD clearly isn't anathema to SOLID practices. -- Ian Collins |
Wouter van Ooijen <wouter@voti.nl>: Feb 01 06:02PM +0100 On 01-Feb-16 at 3:22 PM, Victor Bazarov wrote: > call it _at all_ in some cases), no special term is in common use, I > believe. I have heard a "callee", but not often enough to make it > customary. I often hear and use the term "callee". I think it is widely accepted. Wouter |
4ndre4 <4ndre4@4ndre4.com.invalid>: Feb 01 06:59PM On 01/02/2016 11:22, Stefan Ram wrote: [...] > When a function »m« calls another function »f«, we call »m« a > »client« of »f« Caller, not client. >but how to we call »f« then? Callee. -- 4ndre4 "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense." (E. Dijkstra) |
Wouter van Ooijen <wouter@voti.nl>: Feb 01 06:07PM +0100 On 01-Feb-16 at 10:20 AM, Juha Nieminen wrote: > always the most efficient integral type of the system, even in future > versions of the platform. You'll never know if the std type you chose > will be the most efficient in a future version. That's why you should use the intN_t types only for layout purposes or to save memory. For general work, use the int_fastN_t types: the fast choice on your platform that holds at least N bits. An additional advantage over using plain int (or short, or whatever) is that you make clear to your reader how many bits you need, even when the reader does not have the context knowledge of the particular system you wrote this code for. Wouter van Ooijen |
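[Editor's illustration: a short sketch of that convention - exact-width types for layout and storage, int_fastN_t for working values. The struct and function are made up for the example.

#include <cstddef>
#include <cstdint>

// Exact-width types where the bit layout or memory footprint matters.
struct SensorRecord {
    std::int16_t raw_value;   // exactly 16 bits: wire/file layout, compact storage
    std::uint8_t channel;     // exactly 8 bits
};

// For computation, ask only for "at least 32 bits, whatever is fastest here".
std::int_fast32_t sumOfSquares(const SensorRecord* records, std::size_t count) {
    std::int_fast32_t total = 0;
    for (std::size_t i = 0; i < count; ++i) {
        std::int_fast32_t v = records[i].raw_value;   // widen once, compute in the fast type
        total += v * v;
    }
    return total;
}

The width requirement is visible to the reader in the type name, while the compiler is still free to pick whatever register-sized type is fastest on the target.]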
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 01 05:45PM On 01/02/2016 09:20, Juha Nieminen wrote: > integers whose maximum size is irrelevant. On that particular platform you > can be sure that int will have a minimum size and will never get smaller > even in future versions of that platform, so you'll be A-ok. You'll want 'int_fast32_t' instead of 'int' then mate. /Flibble |