Vir Campestris <vir.campestris@invalid.invalid>: Feb 02 09:16PM

On 02/02/2016 00:56, Jerry Stuckle wrote:
>> Andy
> Tonto was an American Indian name (I forget which tribe), not Spanish.
> Tonto was the name of the Lone Ranger's partner.

I thought you might be on to something when I found this
<https://en.wikipedia.org/wiki/Tonto_Apache>

Then I read on...

"The Tonto Apache (Dilzhę́'é, also Dilzhe'e, Dilzhe'eh Apache) is one of
the groups of Western Apache people. The term is also used for their
dialect, one of the three dialects of the Western Apache language (a
Southern Athabaskan language). The Chiricahua living to the south called
them Ben-et-dine or biniiʔe'dine' ("brainless people", "people without
minds", i.e. "wild", "crazy", "Those who you don't understand").[1] The
neighboring Western Apache ethnonym for them was Koun'nde ("wild rough
People"), from which the Spanish derived their use of Tonto ("loose",
"foolish") for the group. The kindred but enemy Navajo to the north
called both, the Tonto Apache and their allies, the Yavapai, Dilzhʼíʼ
dinéʼiʼ - "People with high-pitched voices"."

Andy
Paavo Helde <myfirstname@osa.pri.ee>: Feb 02 07:29PM +0200

On 2.02.2016 19:06, Mr Flibble wrote:
> Wrong. TDD is about creating the smallest possible unit, initially
> failing, that must then be fixed during that iteration and this unit may
> or may not be an interface.

Using a piece of code without an interface is not possible, so a unit
without an interface would not be the "smallest possible unit".
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 12:31PM -0500

On 2/2/2016 11:58 AM, Jorgen Grahn wrote:
<snip>
> It's true that when you sketch on some unit tests, you may discover
> that your interface or implementation doesn't have to be that
> grandiose, after all ...

True, but analysis/design is also a skill, one that takes a long time
and a lot of practice to learn to do properly. And even then, not
everyone is good at it.

I've been fortunate to have known some great analysts/designers over
the years (and, unfortunately, some not so great ones). I've learned a
lot from them. But no way would I ever try to compare my skills to
theirs.

TDD can avoid analysis paralysis - but like any method, it has its own
weaknesses. No method is perfect. Some people like it and can make it
work well. I haven't found it to be very useful.

> implementation itself right. Then after a while I discover that I
> cannot tell if it's correct or not, and having the unit tests doesn't
> help because I would need to find out how complete /they/ are ...

Which is another reason for having a different group perform the tests.
Developers should be concentrating on writing good code, not worrying
about how to test the code. Testing is a major distraction.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Feb 02 09:54AM -0800 On Tuesday, 2 February 2016 19:08:30 UTC+2, Zaphod Beeblebrox wrote: > [...] > > "Agile" basically means "using two-week waterfalls". > No, it doesn't. Why don't you all grab a book and start studying what you don't know, rather than rambling idiocies in a ng? What book? The manifesto isn't book. Agile favor's working software over comprehensive documentation and so there is no point to document much more than what is planned to get really to be done, working, reviewed, tested and demonstrated during the iteration. Agile favors responding to change over following a plan and so the grand plan is reviewed and if needed adjusted after each short iteration. Agile favors customer collaboration over contract negotiation and so that means that the customer will have good overview and control of direction of project and changes. However for that there are demos, backlog and iteration planning, for not to turn a project into madness of monkey business of daily changes. Agile favors individuals and interactions over processes and tools, however for not to turn software development into constantly interacting individuals instead of getting something done there must be clear time when to work with work items and when that interaction takes place. Therefore if the iteration is two weeks then result is planning, documentation, development, testing and demonstration all done with two weeks. IOW full waterfall. |
Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 02 06:51PM

On Tue, 2016-02-02, Zaphod Beeblebrox wrote:
>> say what they want; that doesn't mean private: is just a convention.
> That's a straw man, dude. I never said that "private" does not mean
> private. I said quite the opposite:
[...]

Since you snipped the text we were discussing, I'll refrain from
commenting.

...

> You have to trust the user of your C++ code too. If you provide them
> the source code for your class, as I said, anyone is technically
> able to change your "private" to "public" and use the method anyway.

We must be talking about different things. When I say "the users of my
class" I'm not talking about people who have downloaded it, or
something. I'm talking about other code in the same project, which is
under my control (at least the version I'm looking at at that moment).
If someone modifies the code, it becomes her problem, not mine.

/Jorgen

--
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/   snipabacken.se>   O  o   .
Ian Collins <ian-news@hotmail.com>: Feb 03 07:58AM +1300

Mr Flibble wrote:
>> https://skillsmatter.com/courses/418-uncle-bobs-test-driven-development-and-refactoring
> Nevertheless TDD is anathema to best (SOLID) practices related to
> the design of non-trivial software systems.

The sky is green. The sky is green. The sky is green. The sky is green.
The sky is green. The sky is green. The sky is green. The sky is green.

You see, repeating bollocks doesn't make it true. If you disagree with
Uncle Bob, present your arguments. Put up or shut up time.

--
Ian Collins
Cholo Lennon <chololennon@hotmail.com>: Feb 02 04:03PM -0300

On 02/02/2016 02:02 PM, Zaphod Beeblebrox wrote:
> so, but maybe you're right.
> TDD is an iterative process where you must find the solution
> through that iteration.

Defining testable interfaces is just a consequence.

> HAVE to define testable interfaces. The iteration you are talking
> about is just the red-green-refactor process, which is at the
> foundation of TDD.

AFAIK (in my own experience) the interface is constantly mutating due
to the nature of the (TDD) process, so its definition at the beginning
is not important. The interface will eventually converge to its final
form after several iterations.

> been tested individually. The way you assemble them forms the specific
> "behaviour" you want to implement. You're not breaking the encapsulation
> of your own circuit, assembling those components in a specific way.

The analogy with electronic circuits is not always true: the problem
here is that sometimes (more often than you might expect) you don't
have the "internal" components of a class: i.e. a real database, a
timer, etc. You have to mock them in order to simulate the real use of
your class. And if you need to change the internal components, then
you have to break the encapsulation, using for example dependency
injection (see the sketch below).

Just to clarify my position: I am not defending TDD, I am just trying
to explain what it really is (in another post I wrote my ideas about
TDD).

Regards

--
Cholo Lennon
Bs.As. ARG
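For illustration, a minimal sketch of the dependency injection
described above, with an invented Clock interface standing in for the
timer; SessionTimeout and the other names are hypothetical, not from
any real codebase:

    // Sketch only: Clock is the seam for the "internal component".
    #include <cassert>
    #include <ctime>

    struct Clock {
        virtual ~Clock() = default;
        virtual std::time_t now() const = 0;
    };

    struct SystemClock : Clock {            // real component in production
        std::time_t now() const override { return std::time(nullptr); }
    };

    struct FixedClock : Clock {             // mock used by the unit tests
        std::time_t t;
        explicit FixedClock(std::time_t start) : t(start) {}
        std::time_t now() const override { return t; }
        void advance(long s) { t += s; }    // step time deterministically
    };

    class SessionTimeout {                  // class under test
        const Clock& clk;
        std::time_t started;
    public:
        explicit SessionTimeout(const Clock& c)  // the injected dependency
            : clk(c), started(c.now()) {}
        bool expired(long limit_seconds) const {
            return clk.now() - started > limit_seconds;
        }
    };

    int main() {
        FixedClock clk(0);
        SessionTimeout session(clk);        // inject the mock, not SystemClock
        clk.advance(400);
        assert(!session.expired(600));
        clk.advance(400);
        assert(session.expired(600));       // timeout path reached on demand
    }

The cost of the seam is exactly the encapsulation point made above:
the constructor now exposes the dependency, so part of what used to be
private wiring becomes visible interface.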
Mr Flibble <flibbleREMOVETHISBIT@i42.co.uk>: Feb 02 08:44PM

On 02/02/2016 18:58, Ian Collins wrote:
> The sky is green.
> You see, repeating bollocks doesn't make it true. If you disagree with
> Uncle Bob, present your arguments. Put up or shut up time.

It's quite simple really: SOLID is all about good DESIGN practices, but
when you are using TDD you are stumbling along fixing failing test
cases without designing anything. TDD results in an incoherent mess
that cannot be called "design" by any stretch of the imagination.

/Flibble
Ian Collins <ian-news@hotmail.com>: Feb 03 10:05AM +1300

Mr Flibble wrote:
> when you are using TDD you are stumbling along fixing failing test cases
> without designing anything. TDD results in an incoherent mess that
> cannot be called "design" by any stretch of the imagination.

Have you read
https://blog.8thlight.com/uncle-bob/2014/05/02/ProfessionalismAndTDD.html
and the paper it links?

I've yet to see any of my colleagues "stumbling along fixing failing
test cases"; they all know how to use TDD well. Have you ever worked on
a team that uses it? Given what you have written thus far, that
question is probably rhetorical.

--
Ian Collins
Vir Campestris <vir.campestris@invalid.invalid>: Feb 02 09:11PM

On 02/02/2016 16:58, Jorgen Grahn wrote:
> Then after a while I discover that I
> cannot tell if it's correct or not, and having the unit tests doesn't
> help because I would need to find out how complete /they/ are ...

They're not complete. In non-trivial systems they can't be complete -
there are race conditions you can't pick up with simple tests. I know;
we once had one that took 18 months to find. Once we knew the trigger
we could make it fail most days. Not every day.

That doesn't mean you shouldn't have tests, of course.

Right now I'm working on Android systems, and I have the reverse of the
waterfall principle; I have all the source, but very little
documentation. I really don't want to reverse engineer ReiserFS to find
out how to open a file, and there are occasions when I've been doing
something equivalent.

Andy
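The kind of race being described is easy to sketch but hard to test.
The following is an invented, minimal example, not the actual 18-month
bug: the single-threaded assertion always passes, while the concurrent
use is a data race that may or may not show up on any given run:

    // Sketch only: a race a simple single-threaded unit test cannot catch.
    #include <cassert>
    #include <thread>

    struct Counter {
        int n = 0;
        void inc() { ++n; }  // read-modify-write, not atomic: a data race
    };

    int main() {
        Counter c;
        c.inc();
        assert(c.n == 1);    // the "simple test": always passes

        Counter d;
        auto hammer = [&d] { for (int i = 0; i < 100000; ++i) d.inc(); };
        std::thread t1(hammer), t2(hammer);
        t1.join();
        t2.join();
        // d.n is usually less than 200000, but it can be exactly 200000
        // on runs that happen to interleave kindly - which is why such
        // bugs can hide for months and fail "most days, not every day".
        return 0;
    }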
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 10:47AM -0500

On 2/2/2016 10:40 AM, David Brown wrote:
> have a total inability to learn from discussions like this. If I
> thought anyone else was taking your ideas seriously then I might
> elaborate for their benefit, but I doubt if it is necessary in this thread.

I have the ability to learn from over 30 years of professional
programming experience, both as an employee and as a consultant. I've
seen your way of doing things, and I've seen other ways which work
better.

> actually mean. It will probably also classify me as a "troll", which
> does not mean what you think it means. But I have been called worse on
> Usenet, and I will not lose sleep over it.

No, you've just always done it one way, and are blind to the
possibility that there are better ways. I used to feel like you, until
my eyes were opened. As a result, I've been hired over the years to
bail out more than one project that was using your methods and running
late and over budget.

You say I'm unwilling to learn. I could say the same about you. The
difference is, I've seen both ways. You've only seen one.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
"Öö Tiib" <ootiib@hot.ee>: Feb 02 08:28AM -0800 On Tuesday, 2 February 2016 17:40:42 UTC+2, David Brown wrote: > have a total inability to learn from discussions like this. If I > thought anyone else was taking your ideas seriously then I might > elaborate for their benefit, but I doubt if it is necessary in this thread. Engineers whom I've worked with keep the shared workplace tidy and in good order. On case of software the workplace is shared repo. Making it dirty with bad code would potentially pointlessly waste time of whole team. So they do not push code into common repo that does not compile or fails unit tests. Continuous integration server is typically set to lament on non-compiling code or failing unit tests and code with missing or disabled unit tests will be marked dirty by review. Lot of the code (typically majority) that I observe in product repo is not code of end product but unit-tests and product-specific scripts and tools. > actually mean. It will probably also classify me as a "troll", which > does not mean what you think it means. But I have been called worse on > Usenet, and I will not loss sleep over it. Who cares? http://xkcd.com/386/ |
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 12:20PM -0500

On 2/2/2016 11:28 AM, Öö Tiib wrote:
> it dirty with bad code would potentially pointlessly waste time of whole
> team. So they do not push code into common repo that does not compile
> or fails unit tests.

Who said anything about pushing code into the common repo? That happens
after tests have been completed.

> be marked dirty by review. Lot of the code (typically majority) that I
> observe in product repo is not code of end product but unit-tests and
> product-specific scripts and tools.

Then you are pushing to the repo too quickly. The product repo should
only have product-ready code. Who can write to that repo should be
quite limited, but reading should be open to anyone who requires it.

There are other systems also, though. The test systems should be
separate from the development systems. That's where the tests are run
and the results collected.

Additionally, you generally have a working repo for code that's in the
process of being developed, or needed for further development. For
instance, you may need a base class so you can build a derived class;
you can use a simplified base class with the appropriate public
interface to satisfy the compile requirements (see the sketch below),
but you won't be able to test the derived class. This allows, for
instance, one developer to be working on the base class while another
is working on the derived class. But this should not be confused with
Ian's "mock class". It is there solely to allow the derived class to
compile and link. Testing will be completed when the base class has
been developed and tested.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
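A minimal sketch of such a compile-only stand-in; the names (Transport,
LoggingTransport) are invented, and the only assumption is that the
public interface has already been agreed in the design:

    // Sketch only: a simplified base class so the derived class can
    // compile and link while the real base is still being written.
    #include <cstddef>
    #include <string>

    class Transport {                    // agreed public interface, stubbed
    public:
        virtual ~Transport() = default;
        virtual bool open(const std::string&) { return true; }           // stub
        virtual std::size_t send(const void*, std::size_t n) { return n; } // stub
    };

    // The second developer builds against the agreed interface in parallel.
    class LoggingTransport : public Transport {
    public:
        std::size_t send(const void* buf, std::size_t n) override {
            // ... log the payload here, then delegate to the base ...
            return Transport::send(buf, n);
        }
    };

    int main() {
        LoggingTransport t;
        char msg[] = "hello";
        return t.open("dev:null") && t.send(msg, sizeof msg) == sizeof msg
                   ? 0 : 1;
    }

Note the stub carries no test expectations, which is the distinction
drawn above from a mock: it exists only to satisfy the compiler and
linker until the real base class is developed and tested.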
Paavo Helde <myfirstname@osa.pri.ee>: Feb 02 08:06PM +0200

On 2.02.2016 16:48, Jerry Stuckle wrote:
> But the design also ensures you change *only* what needs to be changed.
> The same is true with programming. A good design gets you going. But
> it also shows you what needs to be changed when scope creep occurs.

I am starting to have a feeling that much of the activity I call
refactoring, you call design review. I guess things can be done this
way as well; I have just never seen such detailed design
specifications.

In software, having the sauna with certain parameters like temperature,
humidity, etc. is a feature visible to the end user. How the pipes and
wires are connected is an implementation detail and does not concern
the end user, as long as a couple of rules are followed. If you specify
too many implementation details in the blueprint, then it becomes
tedious to follow and update.

Amusingly, this is exactly how blueprints are often made nowadays.
Blueprints for electrical installations, for example, specify things
like: 2 ordinary wall sockets in this wall, 1 3-phase socket in the
garage, etc. It's up to the electrician to install the wires as he sees
fit, following some general rules. When he is ready, a photo is taken
of each wall, which documents the location of the wires, just in case,
before they are covered up.
"Öö Tiib" <ootiib@hot.ee>: Feb 02 10:27AM -0800 On Tuesday, 2 February 2016 19:20:03 UTC+2, Jerry Stuckle wrote: > > or fails unit tests. > Who said anything about pushing code into the common repo? That happens > after tests have been completed. After all automatic tests have been completed our continuous integration simply tags it and makes a (nightly) build for manual testers. If something fails then there is no point to waste time of manual testers with it. > and results collected. Additionally, you generally have a working repo, > for code that's in the process of being developed, or needed to further > develop. I haven't seen separate repos for that. Typically there is all in one. Released main product and its possible service packs and brand label branches and working branches for next versions all in same common repo. Sometimes decades of work, typically migrated to git or mercurial by now but all changesets ever made are still there. If someone wants to have some stripped down slice of it in their local copy then it is up to them of course. |
Gareth Owen <gwowen@gmail.com>: Feb 02 06:55PM

> You're welcome, trolling noob.

Sick burn, Donald.
Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 02 07:32PM

On Sun, 2016-01-31, Ian Collins wrote:
> Jorgen Grahn wrote:
>> On Fri, 2016-01-29, Ian Collins wrote:
...
>> as a starting point.
> Doesn't anyone have colleagues any more? Sorry, I just hate the term
> "coworkers"!

Fine, it's not my native language so I may easily have picked up a word
which would have annoyed me, too! I think I got it off Usenet.

But you didn't answer the question. Does that mean your choices re: how
much to mock are based on your experience, rather than what some group
of people says is best practice? Or perhaps it just means you didn't
answer. Sorry to be annoying about it, but the answer would be useful
to me.

> testing. Any project team will have to find their own happy place and
> where that place lives between the two extremes will depend on the
> domain and the personalities of the team members.

That's fair, but those things may never settle, so that there's no
consensus in the project, just frustration and worse tests than /any/
member really wants. (Imagine the comp.lang.c++ regulars forming a
team. That would be an interesting nightmare!)

...

> when the method under test is called. If for example Bar implements a
> state machine you may have to jump through hoops to get the real Bar
> into the correct state.

That sounds close to white-box testing Foo ... but yes, I can imagine
how a certain Foo::bar state is valid (from Foo's point of view) but
hard to reach just by manipulating Foo. Come to think of it, resource
allocation failure in Bar is really just a special case of that. I
agree; it's a valid case.

[snip]

/Jorgen

--
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/   snipabacken.se>   O  o   .
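A hypothetical sketch of that Foo/Bar situation: Bar is a state
machine, and rather than driving a real Bar into a hard-to-reach state,
the test injects a mock that reports the state directly. Every name
here is invented for illustration:

    // Sketch only: reaching a hard-to-reach dependency state via a mock.
    #include <cassert>
    #include <string>

    struct Bar {                           // dependency with internal states
        enum class State { Idle, Connected, Disconnected };
        virtual ~Bar() = default;
        virtual State state() const = 0;
    };

    class Foo {                            // code under test
        Bar& bar;
    public:
        explicit Foo(Bar& b) : bar(b) {}
        std::string status() const {
            return bar.state() == Bar::State::Disconnected ? "offline"
                                                           : "online";
        }
    };

    struct DisconnectedBar : Bar {         // mock: no hoops to jump through
        State state() const override { return State::Disconnected; }
    };

    int main() {
        DisconnectedBar bar;
        Foo foo(bar);
        assert(foo.status() == "offline"); // exercises the awkward path
    }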
Jorgen Grahn <grahn+nntp@snipabacken.se>: Feb 02 07:56PM

On Sun, 2016-01-31, Jerry Stuckle wrote:
>> automated tasks I can think of, provided you have half-decent tools.
> It depends. I've seen a lot of small projects where the effort of
> automating the test exceeds the manual effort required to do the testing.

Bad tools, perhaps? I use make, and the overhead for automating unit
tests is ten lines of boilerplate Makefile, plus one line for every
test suite that's added. That's about ten seconds of effort, once.

>> to run on the host. But then doing it manually would be even harder.)
> Maybe, maybe not. The tests would be the same, even when
> cross-compiling.

Yes, but unit tests (and other tests, perhaps) would be the only reason
to build for the host system rather than the target. Another compiler,
other libraries to link with, #ifdefs in the code if you're unlucky
and/or stupid ...

> be manually tested because the tools to do it automatically may not be
> available on the target system, system constraints may not allow for the
> tests to be run, or any of a number of other reasons.

The way I define unit tests, they are run in your host environment
rather than on the target. There are very good reasons to test on
different levels on the target, but I'd call that something else.

/Jorgen

--
  // Jorgen Grahn <grahn@  Oo  o.   .  .
\X/   snipabacken.se>   O  o   .
Ian Collins <ian-news@hotmail.com>: Feb 03 08:57AM +1300

Jorgen Grahn wrote:
> group of people says is best practice? Or perhaps it just means you
> didn't answer.
> Sorry to be annoying about it, but the answer would be useful to me.

This was summed up pretty well else-thread: if it's a value type (such
as std::string), just use it. If it provides application logic, mock
it.

> That's fair, but those things may never settle, so that there's no
> consensus in the project, just frustration and worse tests than /any/
> member really wants.

There will always be some tension in a team; managing that tension and
negotiating a consensus is one of the more interesting aspects of a
team lead's job! Some teams I have worked with would have given Henry
Kissinger a tough time, but we managed.

> (Imagine the comp.lang.c++ regulars forming a team. That would be
> an interesting nightmare!)

It would certainly fall into that category!

> hard to reach just by manipulating Foo. Come to think of it, resource
> allocation failure in Bar is really just a special case of that.
> I agree; it's a valid case.

I would probably expand the earlier summary to include mocking of value
types if they can fail (say a numeric type that throws on divide by
zero) when testing code that handles the failure.

--
Ian Collins
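A hypothetical sketch of that last point: a value type that throws on
divide by zero, substituted through a template parameter so the
failure-handling path can be exercised deterministically. ExplodingInt
and safe_ratio are invented names, not from any library:

    // Sketch only: mocking a value type that can fail.
    #include <cassert>
    #include <stdexcept>
    #include <string>

    struct ExplodingInt {                   // mock value type
        int v;
        ExplodingInt(int x = 0) : v(x) {}
        ExplodingInt operator/(ExplodingInt d) const {
            if (d.v == 0) throw std::domain_error("divide by zero");
            return ExplodingInt(v / d.v);
        }
    };

    template <typename Num>                 // Num would be int in production
    std::string safe_ratio(Num a, Num b) {
        try {
            Num r = a / b;
            (void)r;
            return "ok";
        } catch (const std::domain_error&) { // the error path under test
            return "division error";
        }
    }

    int main() {
        assert(safe_ratio(ExplodingInt(4), ExplodingInt(2)) == "ok");
        assert(safe_ratio(ExplodingInt(4), ExplodingInt(0))
               == "division error");
    }

A plain int cannot throw here (integer division by zero is undefined
behaviour), which is exactly why the failing value type has to be
mocked to test the handler.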
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 03:14PM -0500

On 2/2/2016 1:06 PM, Paavo Helde wrote:
> refactoring, you are calling design review. I guess the things can be
> done this way as well, I just have never seen so detailed design
> specifications.

Yes and no. Design review means changing the design (which is on paper)
to meet the new requirements. The result may be refactoring of
already-written code, and/or redesign of code which hasn't been written
yet.

> end user, as long as a couple of rules are followed. If you specify too
> much implementation details in the blueprint, then it becomes tedious to
> follow and update.

Again, yes and no. For instance, for a class you need to specify things
like the external interface (public parts), because that is what other
code depends on. This includes the actions taken by each public method,
as well as resources external to the class (i.e. database connection,
etc.). You must also specify the data which must be stored in the
object. What you do not do is specify how things are going to be done.

For instance, if you were going to specify the requirements for
malloc(), you would specify something like "Allocate memory of the size
passed and return a pointer to it". You would not specify how to get a
block of memory from the OS, how to suballocate that memory, or any of
the other operations. That's up to the programmer doing the work. (A
sketch of such a specification follows below.)

> following some general rules. When he is ready, a photo is taken of each
> wall which documents the location of wires for any case before they are
> covered up.

Which is how it should be - both for the electrician and for the
programmer. The only time he should be instructed on how to run the
wires is if there is some interference that he doesn't know about.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
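As an illustration of that style of specification, here is what it
might look like as a C++ header. It is deliberately interface-only,
with no implementation, and the Pool name and members are invented for
the example:

    // Sketch only: the header fixes the contract ("what"), while the
    // "how" is left entirely to the programmer doing the work.
    #include <cstddef>

    class Pool {
    public:
        explicit Pool(std::size_t capacity); // reserve capacity bytes total

        // Allocate memory of the size passed and return a pointer to it,
        // or nullptr if the pool is exhausted.  How the pool obtains and
        // suballocates memory is deliberately left unspecified.
        void* allocate(std::size_t size);

        // Return a block previously obtained from allocate() to the pool.
        void deallocate(void* p);

    private:
        std::size_t capacity_;               // data the object must store
    };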
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 03:14PM -0500

On 2/2/2016 1:55 PM, Gareth Owen wrote:
> Jerry Stuckle <jstucklex@attglobal.net> writes:
>> You're welcome, trolling noob.
> Sick burn, Donald.

Thank you again for equating me to a very successful businessman. It's
an honor.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Gareth Owen <gwowen@gmail.com>: Feb 02 08:20PM

>>> You're welcome, trolling noob.
>> Sick burn, Donald.
> Thank you again for equating me to a very successful businessman.

Chapter 11 bankruptcy is the gift that keeps giving.
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 03:27PM -0500

On 2/2/2016 1:27 PM, Öö Tiib wrote:
> simply tags it and makes a (nightly) build for manual testers. If
> something fails then there is no point to waste time of manual testers
> with it.

True. But the code should not go into a common repo until it has been
tested. You should have a testing system (actually, often more than
one) with code that has not been thoroughly tested. This system would
also have the tests and test results. Not the production repo.

> but all changesets ever made are still there. If someone wants to have
> some stripped down slice of it in their local copy then it is up to
> them of course.

I have - it's quite common on large projects, and not that rare in
smaller projects. In fact, it's quite advisable to keep separate repos.
Otherwise things can easily be missed. Even service packs should be
kept in separate repos to prevent accidental contamination of the
original code.

A typical system would have a release repo. When starting a service
pack, this repo would be copied to a new repo on the development
system. As changes are identified, the necessary files are moved from
the new repo to the development system, then to test, and when they
pass the tests, to a final service pack repo. When you're done, the
service pack repo contains all the files for that service pack - and
only the files for that service pack.

Having separate repos like this allows for backup/archiving of code as
it was shipped as well as code currently being worked on. It sounds
complicated, but really isn't. You have a start repo, a development
repo, a test repo and a production repo. Code is only changed on the
development repo and tested on the test repo.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 03:44PM -0500

On 2/2/2016 2:56 PM, Jorgen Grahn wrote:
> Bad tools, perhaps? I use make, and the overhead for automating unit
> tests is ten lines of boilerplate Makefile, plus one line for every
> test suite that's added. That's about ten seconds of effort, once.

Testing is more than just a make file. You have to define the tests,
create the code for testing, ensure good input is processed properly
and invalid input rejected. You need to duplicate error conditions,
some of which can terminate the test - and finally, evaluate the
results. Thorough testing is more than adding one line to a make file.

> reason to build for the host system rather than the target. Another
> compiler, other libraries to link with, #ifdefs in the code if you're
> unlucky and/or stupid ...

Yes, but those tests would be the same, even when cross-compiling.
Right now I'm building some ARM-based code. I'm doing it on a PC,
though, and testing on the PC. When I'm happy with the way it works, I
can cross-compile to ARM and run it in an emulator. I recompile the
tests and they show the same results. And when I finally upload to the
hardware, I again compile the code and the tests. There are no #ifdefs
or any other hardware-related code involved. And if you follow the
standards and have problems, either the compiler or the library you're
using is wrong.

> rather than on the target. There are very good reasons to test on
> different levels on the target, but I'd call that something else.
> /Jorgen

A bit different from most people, then. Most people I've run into
define unit tests as code which can be tested by itself. A class that
parses date fields would be a good example - it takes a string
representation of a date and produces month, day and year members. The
class can be tested in the host environment, but it also needs to be
tested on the target environment before something depending on it is
tested.

For instance, you may have 32-bit integers on your host system, but the
target may have 16-bit integers. That may or may not cause problems -
but you won't know until you test on the target system.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================
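A hypothetical sketch of that date-parsing example. Using the
fixed-width types from <cstdint> is an assumption made here for
illustration (not something stated above); it is one way to keep host
and target widths identical so the same test source can be recompiled
and rerun on both:

    // Sketch only: a unit testable by itself on host, emulator and target.
    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    struct Date {
        std::int16_t year;  // fixed widths avoid 16- vs 32-bit int surprises
        std::int8_t  month;
        std::int8_t  day;
    };

    // Parse "YYYY-MM-DD"; returns false and leaves out untouched on bad input.
    bool parse_date(const char* s, Date& out) {
        int y = 0, m = 0, d = 0;
        if (std::sscanf(s, "%4d-%2d-%2d", &y, &m, &d) != 3) return false;
        if (m < 1 || m > 12 || d < 1 || d > 31) return false;
        out.year  = static_cast<std::int16_t>(y);
        out.month = static_cast<std::int8_t>(m);
        out.day   = static_cast<std::int8_t>(d);
        return true;
    }

    int main() {            // the same test runs in both environments
        Date dt;
        assert(parse_date("2016-02-02", dt) && dt.year == 2016
               && dt.month == 2 && dt.day == 2);
        assert(!parse_date("2016-13-02", dt)); // invalid input rejected
        return 0;
    }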
Jerry Stuckle <jstucklex@attglobal.net>: Feb 02 03:48PM -0500

On 2/2/2016 3:20 PM, Gareth Owen wrote:
>>> Sick burn, Donald.
>> Thank you again for equating me to a very successful businessman.
> Chapter 11 bankruptcy is the gift that keeps giving.

I don't see YOU being worth a few billion dollars. But then trolls,
when they can't refute facts, have to resort to ad hominem attacks.
Except it didn't work in this case.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstucklex@attglobal.net
==================