Test-Driven Development? Give me a break…

Update: At the bottom of this post, I’ve linked to two large and quite different discussions of this post, both of which are worth reading…

Update 2: If the contents of this post make you angry, okay. It was written somewhat brashly. But, if the title alone makes you angry, and you decide this is an article about “Why Testing Code Sucks” without having read it, you’ve missed the point. Or I explained it badly 🙂

Some things programmers say can be massive red flags. When I hear someone start advocating Test-Driven Development as the One True Programming Methodology, that’s a red flag, and I start to assume you’re either a shitty (or inexperienced) programmer, or some kind of Agile Testing Consultant (which normally implies the former). Testing is a tool for helping you, not a licence for “more pious than thou”, dick-swinging, my-Cucumber-is-bigger-than-yours idiocy. Testing is about giving you, the developer, useful and quick feedback about whether you’re on the right path and whether you’ve broken something, and about warning the people who come after you if they’ve broken something. It’s not an arcane methodology that somehow has some magical “making your code better” side-effect…

The whole concept of Test-Driven Development is hocus, and embracing it as your philosophy, criminal. Instead: Developer-Driven Testing. Give yourself and your coworkers useful tools for solving problems and supporting yourselves, rather than disappearing into some testing hell where you’re doing it a certain way because you’re supposed to.

Have I had experience of (and gotten much value from) sometimes writing tests for certain classes of problem before writing any code? Yes. Changes to existing functionality are often a good candidate. Small and well-defined pieces of work, or little add-ons to already-tested code, are another.

But the demand that you should always write your tests first? Give me a break.

This is idiocy during a design or hacking or greenfield phase of development. Allowing your tests to dictate your code (rather than influence the design of modular code), and allowing over-invasive tests to dictate your design, is a massive fail.

Writing tests before code works pretty well in some situations. Test Driven Development, as handed down to us mortals by Agile Testing Experts and other assorted shills, is hocus.

Labouring under the idea that Tests Must Come First (and everything I’ve seen, and everything I see now, suggests that this is the central idea in TDD – you write a test, then you write the code to pass it) without pivoting to see that testing is a useful practice insofar as it helps developers is the wrong approach.

Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down into tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The former is the right approach in a small number of situations – tests around bugs, or small, very well-defined pieces of functionality.
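To make the “tests around bugs” case concrete, here’s a minimal sketch in Python (the `slugify` helper and its trailing-hyphen bug are hypothetical): the regression test is written first, reproducing the bug, and the fix then makes it pass.

```python
import re

def slugify(title):
    # Hypothetical helper. The original (buggy) version skipped the final
    # strip("-"), so "Hello World!" came out as "hello-world-".
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_strips_trailing_punctuation():
    # Written *before* the fix: it failed against the old code, and now
    # warns whoever comes after if they break it again.
    assert slugify("Hello World!") == "hello-world"

test_slugify_strips_trailing_punctuation()
```

The test is small, targets a known defect, and needed no speculation about the design – which is exactly why this class of problem suits test-first.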

Making tests a central part of the process because they’re useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.

Testing is about helping developers, and recognizing that automated testing is about benefit to developers, rather than cargo-culting a workflow and decreeing that one size fits all.

Writing tests first as a tool to be deployed where it works is “Developer Driven Testing” – focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn’t – that’s not right.

Discussion and thoughts (posted a few hours later)…

I wrote this a few short hours ago, and it’s already generated quite the discussion.

On Hacker News, there’s a discussion that I think asks a lot of good questions, and there’s a real set of well-reasoned opinions. I have been responding on there quite a bit with the username peteretep.

On Reddit, the debate is a little more … uh … robust. There are a lot of people defending writing automated tests. As this blog is largely meant to move forward as being a testing advocacy and practical advice resource, I’ve clearly miscommunicated my thoughts, and not made it clear enough that I think software testing is pretty darn awesome, but I’m put off by slavish adherence to a particular methodology!

If you’ve posted a comment on the blog and it’s not there yet, sorry. Some are getting caught in the spam folder. I’m not censoring anyone, and I’m not planning to, so please be patient!

Anyway, the whole thing serves me right for putting together my first blog post by copy-pasting from a bunch of HN comments I’d made. The next article is a walk-through of retro-fitting functional testing to large web-apps that don’t already have it, and in such a way as the whole dev team starts using it.

By Peter

I'm a CTO, Consultant, Founder, Student, Open-Source Developer, and soon-to-be husband. Somewhere I find time to write this blog.

139 comments

  1. Nicely said. Clearly the voice of experience.

    Tests are only valuable if they're well thought out, and actually give more benefit than their cost.

    I've seen way too many examples of automated tests that cause more problems than they solve.

  2. You seem to live in a world of hacking together toy software. One day, when you become a software ENGINEER who has to build complex, long-lived software as part of a team of people, following modern engineering practices (model-driven development, design-by-contract), upon which people's well-being depends, you will change your tune, I suspect. As soon as you have DESIGNED a component (at any level of granularity) you can derive a good set of test cases for it using established techniques developed by the testing community over decades. But you don't design, do you? Your strongly-worded hissy-fit of a blog post suggests that you are far from being a professional, and I hope you're not writing any important software (for the good of mankind).

    If you really wanted to attack test-driven development, you could at least have taken the time to learn what it's all about, and could have tried to construct halfway-decent logical arguments against the (alleged, according to you) benefits. Instead, it seems you had a really bad day trying to write some complex code, and instead of kicking your dog, decided to add this giant fit of misunderstanding to the world.

    Perhaps you should focus on the skills that good software engineers really are made of. (Hint: it's not programming.)

        1. I also think Mr. Loubser’s comments are uncalled for, and show him in a poor light … NOT the writer of the original article, with whom I completely agree. Because our industry is being taken over by Agile and TDD, and because they are either irrelevant or a hindrance to most of the software development that I do (NOT “toy software”, Mr. Loubser), I am thinking of getting out of programming altogether. They are taking all the fun out of it, and preventing good programmers from exercising their skills honestly.

          1. I hope you will stop programming altogether, the world would be a better one without the ones like you.

          2. Anon knows nothing about the software I have developed over 30 years in the industry, in which I wrote APPROPRIATE tests and was not hamstrung by some semi-religious requirement to always write tests first. Applications included flight control systems, oxygen lance steelmaking, factory automation, applications running on tens of thousands of ATMs, operating software enhancements, etc. Not exactly applications where the occasional glitch can be tolerated. As he also hides behind “Anon”, I will treat his pathetic comment with the contempt that it deserves.

      1. But it’s true… I guess the author’s biggest software was not more than 10,000 lines of code, and he has never worked with more than 10 people on the same application.

        1. Not worked with more than ten people? Who the hell wants a team that even APPROACHES this size. I agree with the original author. I’ve been a developer since 1983 and I’ve seen TDD and XP completely destroy entire development teams because it becomes a complete cult religion rather than a respectful new use of tools to get a job done. You know what your mommy said. Take the middle way. Extreme anything is just asking for trouble. Personally I hate TDD, and my software seems to run just fine without it. I don’t mind unit tests and code reviews, but I don’t need your friggin TDD diapers, thank you very much. Besides, XP came out of a FAILED payroll software project. Guess they were so busy “refactoring” and building tests that they couldn’t get the project finished.

          1. “A survey of 1,027, mainly private sector, IT projects published in the 2001 British Computer Society Review showed that only 130 (12.7 per cent) succeeded. ”
            http://tinyurl.com/brmx3xg

            Maybe you have always been in that 12.7%, but many across the industry notice that software projects are often failing. TDD was always really an attempt to avoid this. Perhaps the TDD you have seen was not practiced well? That is the main reason I have seen for it not working on projects. When practiced well, I have found it very (not ‘extremely’!) useful.

          2. Adrian O’sullivan,

            The survey mentioned doesn’t show any correlation between project failure and (not) practicing XP or TDD, so we can assume that TDD, which “was always really an attempt to avoid this”, is still nothing more than an attempt.
            Projects are failing indeed, and no doubt many of them are failing because of poorly defined requirements. Working with well-defined requirements is a pleasure – whether you use TDD or not.
            How does TDD turn poor, ambiguous requirements into consistent, clear ones? It doesn’t.

    1. Well, I clicked the link above on Dawid Loubser and it came up with a section entitled “About me”… which contained… nothing… QED, as they say.

  3. Great article. One of the greatest programmers I've known is an advocate of TDD, but is also smart and balances reality with best practices. I really enjoyed the honesty in your article. Thanks! My colleague, a strong advocate, has made me consider this paradigm more seriously, and it's great to know when and how to apply certain approaches.

  4. I think at some point you're right, but you can write TDD as feature lists; this, I think, is a good thing, and I use it often on web projects because I first think about what I want to do before I do it. Often programmers write shitty code and then are too lazy to fix it… this is not a good programming style.

    sry for my bad english

  5. Dawid Loubser,

    Using TDD does not instantly make you a superior engineer and everybody else a toy software maker.

    You are just backing up the author's point by being full of yourself simply because you use a specific development approach that works for you.

  6. I agree with the idea of the article. We don't need TDD Paladins. It's usually a bad idea to start from a test case when you develop a new piece of functionality and have little idea of what it will look like when it's done. On the other hand, I prefer one unit test which is well thought out and tests actual functionality over 10 tests for getters and setters (which I've seen in some code). TDD, if it goes mad, is a monster.

    1. Rafat,

      I am not a TDD expert, but I am working to become a ninja :). I really believe in the ideas and have seen it solve many problems. I found your comment interesting because TDD has helped me in the exact situation you describe as being “a bad idea”. When you have new functionality and little idea of how it should work is exactly when TDD is great. It stops “what if” questions by focusing on what the goal of the functionality is, as well as fleshing out an API without aimlessly coding features that end up not being needed.

      It focuses work by picking off specific pieces of functionality that you KNOW you need, and leaves any of the ambiguity to be solved later.

      1. The only reason to create a bunch of code you never end up needing is because you didn’t take the time to think of what you only need now, and then limit your API to that. TDD is not a good replacement for lazy thinking or not taking the time to DESIGN something. Developers need to put themselves in the consumers’ shoes, take some time to DESIGN a system, and stop looking to magical pop cult fiction to do their design work for them. One of the reasons XP and TDD are often lumped together is that XP is just sloppy rushed “constant refactoring” programming that requires the testing to make sure you’re not checking in junk code. It’s like saying I want to provide the baby (XP) and the babysitter (TDD) instead of just being an adult developer, taking the time to THINK about design and consequences, and following it up with code reviews and unit tests. I despise TDD.

      2. There is a question: TDD makes us focus on the feature, so we just finish the code work for today. If tomorrow some new feature is needed, we need lots of time for refactoring.

      3. “When you have new functionality and little idea of how it should work is exactly when TDD is great… flushing out an API without aimlessly coding features that end up not being needed”

        When addressing vague requirements without upfront design, you’re indeed likely to end up with a bulk of useless code. How does doing the same with TDD make your code useful? The answer is simple: it won’t. You will simply end up with the same useless code + tests for the useless code, i.e. even more useless code.

        TDD is a fraud.

    2. Rusin is right. We do not need TDD Paladins. In fact we do not need any of the evangelists that think the latest fad, which they have adopted enthusiastically, is the ONE TRUE WAY. All of these ideas have some value, but to force them onto everyone, and to look down on anyone that does not share your commitment and enthusiasm to whatever is “flavour of the month” is ridiculous. It is like throwing out all the tools in your home maintenance toolkit except one, and whether you keep the hammer, the wrench, the screwdriver, the craft knife or the saw, it will be just right for some tasks, tolerable for others, and downright useless for most.

      As a software engineer I want a selection of specialised development tools, from which I can choose the most appropriate. Sometimes that will be TDD and/or Agile. Often it will not.

      It is a delusion of management that if only they could enforce the “correct” ways of working they would produce great software from mediocre programmers. They will not. The most important things you need to create good software reasonably quickly are:

      An understanding of the problem domain
      Expertise in the programming languages, other tools and supporting environment required
      Design expertise (including the ability to anticipate unintended consequences)

      and last but not least:

      Caring enough about what you are doing to make whatever effort it takes to get it right

      These are what really matter. Everything else is much less important.

    3. I believe what the article was trying to say is: DO NOT TAKE TDD AS A RELIGION! I’ve been doing some research around TDD recently – watching lots of tutorials, reading articles, googling… To me it seems lots of people present TDD as a must. The reality is, nothing is a must; everything changes. The only rule that doesn’t change is “everything changes”. So if you think your program should be really bug-free, and you have the time for it, you can try TDD – or even use Ada. Once I tried to write code in Ada because I was dealing with an RTOS running on hardware and didn’t want to go through the debugging process there, since it wasn’t much of a pleasure (at least to me). And if you want to get things done faster, and in a more pleasant way, you can try something else, or even test last. IT DEPENDS. Please don’t tell me I have to live with test code for the rest of my life; I don’t want to.
      I do think it’s awesome sometimes, especially for continuous deployment, but whether it’s suitable for the project – you have to look at the situation and decide.
      I used to deal with embedded systems, and watched guys try to use Linux for every kind of situation. You don’t need an operating system for a toy car; you do want RTEMS or vxWorks for missile systems, and probably Ada for the code; and you don’t want RTEMS or Ada for a desktop application. Yes, of course you can – but it doesn’t feel pleasant that way; believe me, I tried to use Ada for everything once. The debate is not about whether TDD is good or evil; it’s about pros and cons, and with that knowledge of the pros and cons you can decide whether it fits the project.

  7. TDD rocks, it's just a matter of how it's implemented and of using it when it fits.

    Small-team projects doing RAD normally benefit greatly from TDD.

    1. The acronyms irritate me the most, and my first code came out of a 4K RAM TRS-80. I’ve dealt with them all of my life, and if there’s one thing I hate, it’s “uber kewl” (read: stupid-ass pompous) buzzwords and acronyms that nobody really gives a shit about, and that even fewer people want to discover later could have been said and understood in plain English.

      Like “getter/setter” and XP shit. Getter/setter is fine, but it’s annoying to have someone in an interview or in casual conversation conclude that I don’t have any experience writing code in teams via brainstorming sessions – which is all I consider these new methodologies to be – just because I look at them funny when they use terms like Agile, Scrum, and XP.

      Programming people today swear up and down they invented shit because they slapped a bitchen name on it. Gathering in a conference room with a whiteboard and discussing code sections then going out and building working sections without refactoring until the agreed upon section is up and running, and then not agreeing to add new features if they bust the timeline? Sorry but that’s not “Agile” or “Scrum”. That’s common sense, and both teams and individuals have been doing it for decades.

      The only difference is the people standing when everyone else is sitting don’t go by goofy names like “Scrum Master” and think they’re anything different than a “Project Lead”. These dudes should just grab up a copy of Deities and Demigods, roll back to being called “Dungeon Master” and be done with it.

      And by the way you’re not immune. “RAD”. So then is there a “SAD”, as in Slow Application Development? Then what is the RAD acronym for besides looking kewl when you talk up a client? That’s the very shit everyone hates about IT and App development, and the people who handle both.

      1. I did pair programming and pair design when it was, for the most part, still unknown. I didn’t read about it; there weren’t papers or books on the subject; it was something natural.
        William Pena was the first to talk about the XP “stories”, in “Problem Seeking: An Architectural Programming Primer”, 1977. XP stories are, in fact, snow cards.
        All this XP and Agile bullshit is just propaganda to sell books, courses and tools. A mixture of old, common-sense, well-known practices.

  8. Dawid Loubser: a man of zero substance for an argument.

    Dawid – experienced teams often don't write tests because we have to make money. We hate it, but we need to get a check. TDD is a fun exercise but isn't really practiced often. Yet when it is attempted, you end up with a lot of tests that are good to go.

    Don't be such a douche; chill out and just read what everyone else is saying. And when you attempt a counterpoint, back it up with real facts, and don't sound like the Agile consultant the author was ripping on.

      1. That means that both could be wrong and you’d never know it.

        Part of the trouble is that half the people that are Agile and TDD consultants could not write a working 10-line shell script. They spend time insulting us with elementary stuff that you can grasp in minutes, but never deal with the real-life problem of testing large and complex systems.

        Next there is the problem that a test can end up being just as complex as the code being tested. Zhangyang’s idea is not a solution.

        And there is the idiocy of employing smart people, that have a deep interest in software and programming, and the unusual minds that can deal with that stuff, then ignoring their judgement and expertise, and micro-managing HOW they do what they are being asked to do.

        Agile and TDD (as currently preached) are SOMETIMES good tools for organising work. To think that they are always the only sensible methods is ridiculous. They should be added to our tools and used when appropriate. Not foisted and forced on us by non-technical management types.

        1. The chances of them both being wrong are far less than of either being wrong individually.
          I think you’re misunderstanding. Think of it like double-entry bookkeeping in accounting. Mistakes can’t be afforded in accountancy, so they do it twice, because the chances of making the same mistake twice are extremely small.
          Say the chance of writing a bug in a function is 1 in 100; now you write a test that should verify the same thing, and that also has a 1 in 100 chance of being wrong. But both being wrong together is 1 in 100×100 = 10,000. Then also take into account that they have to be wrong in the same way, and it’s even less likely.
          This is the power TDD offers.

          1. That’s a bad analogy really, double-entry bookkeeping doesn’t mean what you think it does. The two entries are made in two separate “ledgers” and are balancing factors. They facilitate the duality of all financial transactions. They do offer an overall check when a balance sheet is created but by that point it is like finding a needle in a haystack.

            Also, your probability doesn’t take into account convergence, and it ignores systemic coding failures. Most bugs are due to a logical misunderstanding; in TDD this appears in the test first and is then quickly incorporated into the code. If you want positive outcomes from your tests you should get a different dev to write them (or better still, a dedicated tester). Additionally, tests can themselves cause errors: a mistake in a test-first scenario can be the constraint that ensures the code is also produced wrong (I call it green tick fever – it’s green = it’s right).

            I have quite a bit of experience in test first teams now and the defect count hasn’t really changed, but the code quality and software architecture has nose dived. Failure to see the overall design because of a narrow view seems to be the cause, or it could just be the general decline in developer capabilities, who knows.

            TDD is very optional, but automated testing is really not (unless it is a tactical app). First or after makes no odds, whichever way you work best. Like everything else, if you let your process replace thinking then you are going to make a mess whatever that process is.
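The arithmetic in the thread above can be sanity-checked with a quick simulation (the 1-in-100 bug rate is the commenter’s figure, and independence of the two mistakes is exactly the assumption the reply challenges):

```python
import random

def simulate(trials=100_000, p_bug=0.01, seed=42):
    """Estimate how often the code *and* its test are wrong together,
    assuming the two mistakes are independent."""
    rng = random.Random(seed)
    both_wrong = sum(
        1 for _ in range(trials)
        if rng.random() < p_bug and rng.random() < p_bug
    )
    return both_wrong / trials

# Analytically: 0.01 * 0.01 = 1 in 10,000; the simulation lands near that.
rate = simulate()
```

If the two mistakes share a cause – the same logical misunderstanding driving both the test and the code – the independence assumption, and with it the 1-in-10,000 figure, collapses, which is the reply’s point.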

  9. Aleksey,

    Hating TDD with a passion doesn't make you one either, and advocating TDD doesn't make you a bad one. Tools and methodologies are nothing more than tools and methodologies. You can have a major preference for one, but if you're a good developer/engineer, you can adapt to whatever is being used.

    Before I jump in here, I'll mention; my group doesn't advocate TDD, but it does require unit testing in some form. But let's see here:

    Peter,

    > Allowing your tests to dictate your code (rather than influence the design of modular code) and to dictate your design because you wrote over-invasive test is a massive fail.

    Yep. Doing TDD also means that you actually have to be good about writing testable code, and writing -good- tests. You're applying TDD over the top of other software engineering best practices. If you don't, you're just going to wind up shooting yourself in the foot. Not using TDD but writing over-invasive tests is also a massive fail – it has nothing to do with TDD.

    Let's say I'm writing a server which reads data from two sources, performs some complicated data munging, and returns some answer. Simple tests for your DAOs, write the DAOs. Nothing too invasive so far. Write tests for your data munging, and implement the munging algorithm. No over invasive tests, so far, and nothing has dictated my design. Each piece is logically going to do what it's going to do. Finally, the overall server tests, and the server itself.

    If you take the other directional approach, you write your tests for the server, mocking out the algorithm (meaning you don't have to write the rest yet, so long as your mocks obey the contracts of the algorithm class), etc.

    > Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down in to tiny bits of functionality

    That's not really a bad thing. It's sort of the point of unit testing in general – you don't know that your higher level components are working unless you know the lower level ones are.

    > write a test that requires most of the software to be finished, or you cheat and fudge it.
    If you have software with well defined APIs, "fudging it" is fine. I can drop out my ExpensiveBullshitAlgorithm with a mock which returns 3 when the inputs are 1 and 2. Those are the expectations of the system, and we'll prove that ExpensiveBullshitAlgorithm actually returns 3 for inputs of 1 and 2 when we write the tests there. This is not "cheating", this is "mocking", and it's only something you can get away with if you're actually writing solid tests for your components, and they obey defined APIs.

    > Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't – that's not right.
    Consistency is important in large group projects. If half the team is working in one way, and half the team is working in another, you ARE going to clash. It's not necessarily the One True Way – nothing is. But in the real world there is quite often the One Way We Decided On For Consistency and You're a Big Boy/Girl So You Can Adapt, Right? And once you get used to it, you might even like it.
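The mocking described in the comment above – dropping out `ExpensiveBullshitAlgorithm` with a mock that returns 3 for inputs 1 and 2 – can be sketched with Python’s standard-library `unittest.mock` (the `Server` class and its `compute(a, b)` contract are hypothetical stand-ins):

```python
from unittest.mock import Mock

class Server:
    """Hypothetical server: reads inputs, delegates the hard part to an
    algorithm object obeying a simple contract: compute(a, b) -> answer."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def handle(self, a, b):
        return {"answer": self.algorithm.compute(a, b)}

# Mock out ExpensiveBullshitAlgorithm: it returns 3 when the inputs are
# 1 and 2 -- the contract we'd separately prove with that class's own tests.
algo = Mock()
algo.compute.return_value = 3

server = Server(algo)
assert server.handle(1, 2) == {"answer": 3}
algo.compute.assert_called_once_with(1, 2)
```

This only works, as the comment says, if the mock obeys the real class’s contract – the server tests prove nothing about the algorithm itself.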

  10. Everything has a balance, the problem I see in some shops is they tend to take the chosen practice to one extreme or the other without balancing actual need or value.

    Testing helps. Writing tests first or last, I feel, is inconsequential to the ultimate goal of the tests if you really are doing true unit testing (some shops only think they are unit testing). Writing do-nothing tests that just improve your coverage ratio is not worth the time. Spending an extra hour on that method with the complex object as an input parameter, to cover more negative cases, would be more valuable.

  11. Use the right tool for the job! If you have a large, complex codebase, TDD is the ONLY sensible way to keep things manageable while you change fundamentals. TDD makes little sense for a 200 line project knocked off in an afternoon, or anybody using the waterfall method.

  12. Great article. I don't care whether you like TDD or not, just don't be one of those douches who regurgitates all kinds of bullshit about how great some method is.

    I, personally, like to test after I'm fairly sure of the design. When I've devised my overall algorithm and determined the interfaces and implementations, then I'll write tests as I go so I can safely refactor as I work.

    Also, TDD for interfaces like a web page is just plain bullshit. When you do TDD for integration tests on a web application, you're just wasting your time. Unit-test the hell out of your code, write functional tests as you understand the problem more and become more sure of the solution, and write interface tests once you're done, if you're concerned about them breaking during deployment.

  13. Every methodology can be taken to an extreme – RUP, Waterfall – and I think TDD is especially prone to it.

    It reminds me of the relational database modeling vs. OOP debates during RUP's heyday. What do you do first, the data model or the class model? I would argue it doesn't matter as long as you get the same result. There are rules that ORMs have figured out for reverse- and forward-engineering between the two. If those rules exist, then why can't we model either way first?

    Similarly, whether we write tests first or code first should be the purview of the person writing the program. Do I really want to write a test first for every stored procedure I write? Of course I do not. I am sick of seeing articles on how to do TDD in the database layer. I am equally sick of having to create my own mocks to mimic a layer I can get to in milliseconds.

    I think it has its place, but it does promote code bloat. RUP, for its part, promotes documentation bloat. I truly believe that quality software engineers (aka master craftsmen) can choose which methodologies to draw from as needed. To say "I have the way, and it's the only way" tells me that you are afraid to think outside your own box.

  14. "writing tests first or last I feel is inconsequential"

    The point of TDD is that writing your tests first forces you to use your code before writing it, which in theory leads to better designed, simpler interfaces. The tests then serve as the formal specification for your interface, which often leads to easier and quicker implementation of your interface. Since your code's specification is now being tested, it is very easy to prove to stakeholders that your code works as intended, and is often easier to change when stakeholders change their minds. If you write your implementation first, you may not realize until later down the road that your interface is awkward or difficult to use, and by then it takes more time to fix it.

    TDD is not always necessary or even the best way to do things. TDD is probably overkill if you're working on a simple CRUD form with no logic outside of validation and persistence. TDD's advantages show themselves quickly when you're working with a technology or business domain you're not experienced with, when you're working with complex systems, and when you're creating public APIs. In these cases, TDD helps get your design correct on the first try, and saves a lot of time. In addition, TDD has many advantages when working with a large team. Any time 'wasted' writing tests is more than made up for by the elimination of technical debt and of time spent refactoring or fixing bugs.
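A minimal illustration of that “use your code before writing it” idea (the `parse_version` function is hypothetical): the test is written first, which fixes the interface – a string in, a tuple of ints out – before any implementation exists.

```python
def test_parse_version():
    # Written first: this forces the interface decision -- parse_version
    # takes a string and returns a (major, minor, patch) tuple of ints.
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0.1") == (10, 0, 1)

# The implementation comes second, written to satisfy the test above.
def parse_version(version):
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

test_parse_version()
```

If the tuple turns out awkward for callers, the test shows it before any implementation effort has been sunk into the wrong shape.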

  15. Something that should be kept in mind about TDD is that nobody expects you to do it all the time — even its most staunch promoters. Full test-first code is an ideal, as something to be worked for. Whether you will reach 100% is dependent on a lot of factors, like skill, time, understanding, tools, your framework and language, coworkers, etc.

    But if you don't hit 100%, you don't throw up your hands, curse the method and write an angry blog post about how TDD upsets you.

    Instead, you say: "Next time, I'll do better." And you do.

    That's the difference between test driven development and your "developer driven development." TDD is a method of producing tested, working code, it takes a long time to master (I'm not even there yet), and it's an ideal that its practitioners work towards. Your DDD is a method that says that whatever "works" today is fine, whoever you are and whatever you do today, and testing is nice so long as it's in some form before or after the code is written. Kinda vague…

    But since you mock TDD as the "one true way", I have a question for you: if you're not able to write simple test cases for all of the code you write, even before that code, how can you be satisfied with yourself?

  16. Let's say you make a statement about your component or system, such as "under circumstances X, given input Y, it will produce Z". One of only two truths applies:

    Option A: You make the statement based on the belief that your code (which other members of your team have perhaps modified) is sound, or based on experience. Let's call this "faith".

    Option B: You make the statement because there is a unit test that proves it ("proof").

    In other fields of engineering, things are not built based on faith.

    Unit tests, at every level of granularity, are the only way to prove that your system works. Anything less fosters a self-important, "code ownership", hacking culture, and virtually proves that you are coding without having performed any real design.

    Anybody is free to follow this style of work, but in the 21st century, this is thoroughly amateur, in my opinion, and suited only to toy software. Are you really willing to bet your job, and the experience of your clients, on faith?

    1. Ok. I have a sensor. I want to write a parser that parses the data from the sensor. TDD would say, write a test that mimics a message described in the protocol manual, and test that the parser would parse the message correctly.

      So I write the test. I write the parser. The parser passes the test. And now I can merrily hand that code off, and the world is right as rain.

      But there’s a typo in the manual, and the firmware in the sensor kicks out an extra tab character at the start of the message. How did I discover this? By hooking the sensor up to the parser and doing a live test with the real hardware.

      So what did I gain? The code never really “worked” until I tested it against the actual sensor. So then I modify the parser and the test to deal with the extra tab character… but… since I’ve already got the parser working with the real sensor, the unit test is redundant, and possibly a source for more failures to be introduced, since more code is being maintained.

      Now, if I had tested the parser against the live hardware to start with… at least I wouldn’t have had to fix the redundant unit test.

      So you say your unit test is “proof.” I say a unit test is where you represent your “faith” in code.
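      As a sketch of the anecdote above (the message format and field names are invented for illustration), the gap between the manual-based test and the real device might look like:

```python
# Hypothetical sketch of the sensor anecdote: the first assertion encodes
# the message format from the protocol manual; the second encodes what the
# firmware actually sends (a stray leading tab the manual never mentioned).
def parse_reading(message):
    # After the live-hardware test revealed the quirk, the parser has to
    # strip the leading tab before splitting the message.
    sensor_id, value = message.lstrip("\t").split(":")
    return sensor_id, float(value)

# Test built from the manual -- proves only the manual's format:
assert parse_reading("TEMP01:23.5") == ("TEMP01", 23.5)
# What the real sensor sends -- only a live test exposed this:
assert parse_reading("\tTEMP01:23.5") == ("TEMP01", 23.5)
```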

      1. “TDD would say, write a test that mimics a message described in the protocol manual, and test that the parser would parse the message correctly.”

        The first step of TDD is not to mock out objects, APIs, or devices. The first step is to create a failing test. Then make it pass with messy production code. Then refactor, while checking that the test still passes.

        Your case is a bit special because you’re testing 3rd party code. TDD is not really relevant here, because the code you’re using is not covered by your own tests.

        Uncle Bob (known for his love of TDD), in his book “Clean Code: A Handbook of Agile Software Craftsmanship”, talks about how to integrate 3rd party libraries (or anything that’s not coming from you) using what he calls “learning tests”.
        The idea: start writing tests that check your understanding of the API. Focus on what you want to use, of course.
        In a few minutes, you’ll know precisely what the code really does. In the end, it cost you nothing, because you had to learn the API anyway. And when the library changes, you’ll have a complete set of tests making sure your code won’t be broken by the change.

        This is actually completely off topic, because, as I said, this is not about TDD. It’s about 3rd party libraries and boundaries. I just wanted to make sure anyone reading the comment above would not leave spreading wrong views about TDD.
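        For readers unfamiliar with the idea, a learning test might look like the following sketch, here probing a standard-library API rather than a commercial one (the behaviours asserted are real Python `json` behaviour):

```python
import json

# "Learning tests" in the Clean Code sense: they exercise someone else's
# API to pin down our understanding of it. If a future version changes a
# behaviour we rely on, these fail before our production code does.
def test_dumps_escapes_non_ascii_by_default():
    assert json.dumps("café") == '"caf\\u00e9"'
    assert json.loads(json.dumps("café")) == "café"

def test_integer_dict_keys_are_stringified():
    # Learning check: integer keys are silently turned into strings.
    assert json.loads(json.dumps({1: "one"})) == {"1": "one"}

test_dumps_escapes_non_ascii_by_default()
test_integer_dict_keys_are_stringified()
```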

  17. Dawid,

    Your comment is on the money when you point out that testing as a developer is hugely important. That's really what this blog is/will be about.

    I'm not sure what your comment has to do with the methodology of Test-Driven Development, which is the specific idea that you must write a test for the piece of code you're working on BEFORE you do anything else.

  18. I think you make a mistake by thinking that prototyping and testing are mutually exclusive. Of course it's useless to write tests when you don't know the specs of your software and what it should do. But then again, why should you write any production software without knowing this?

    Prototyping is a[nother] tool to help find out the specs of the programs you end up writing. Tests are a way to write those specifications; they force you to think as the user of the code rather than its writer. This is then turned into runnable code to be used as a guideline. That you end up with tests is more or less accidental in the process.

    To me, the best argument about writing tests first is that writing tests last is absolutely boring. Most of the time, it's a half-assed, useless job. Writing tests first is the only way to make it somewhat fun.

    1. I thought that’s what the whiteboard, dry erase marker, pseudo code and flowchart were for? Why does hammering out the client’s business expectation require coding at all?

      If anything I would be determining what the app is supposed to do and resemble according to the client without even having a computer in the room. Same way I storyboard. When I get to the portion of the job that requires an IDE I’m there to write the app, and then I’m testing what I write. If I’ve done my job my testing actually exposes the results I want, based on what is already known about the functionality agreed upon before a line of code was even written.

      I recently quit a job where, as the lone programmer for the past 8 years, I averaged 500-1500 lines of code per file across 400-500 files. And that was just on the front end, not including the COM+ on the back, the standalone .exe programs running off of multiple servers all over the system on schedulers, or the SSRS installation, and I never had a problem with my methods. The only time I did any getter/setter stuff was in VB6, and most of the testing for that was done in standalone .exe programs created to test that specific functionality, after I already had in the app the actual verified questions I needed the tests to answer.

  19. I agree with your first statement: no matter how good a practice, process, or technology is, it's not the ultimate solution. The test-driven approach has its own advantages, but it's not perfect for every scenario. My experience says it's flexibility and a hybrid approach that give you the option to use agile, waterfall, or test-driven development based on the needs and suitability of the situation, resources, and environment.

  20. Earlier, I presented the argument that we require unit tests at each level of granularity in our system, to ensure quality and consistency (which is, after all, what we strive for, right?). Nobody has presented a counter-argument, so let's assume this for the moment, and discuss Test-Driven Development (upfront unit tests):

    First of all, I don't view TDD as a development methodology in itself, but rather a "technique" (not unlike, say, design-by-contract) which can be used in many development process methodologies, together with other techniques.

    There are several overwhelmingly compelling reasons to write one's tests first:

    – It enforces a deep understanding, by the developer, of the contract ("requirements") of the component to be written, which in itself enforces that requirements analysis / design actually be done. How many teams jump right ahead and start coding, only to have to refactor later? Put the overhead where it belongs – requirements analysis.

    – It greatly speeds up and simplifies the development process – for the developer now knows precisely when he is "done" (there is no uncertainty, no unnecessary work is done, but nothing is left out). Of course, this depends on having "good" tests (sufficient coverage) – which is a separate and complex topic itself.

    – In technology-neutral metalanguages like URDAD or UML, we can express the "dynamics" of requirements sufficiently, but few programming languages (other than WS-BPEL Abstract, which is a bit dead in the water) have artifacts that can express such requirements. Take, for example, a Java interface or WSDL contract: They express only a small part of the requirements. The test suite becomes an essential artifact to express the dynamics (interactions) of the requirements. We should express the requirements *before* implementing them, surely?

    – One's framework is then already in place for test-driven bug fixing. Got a bug? Prove it with a unit test. Once proven, fix it (which you know, once your test passes), with the assurance that the other 20 unit tests prove you haven't broken anything else. Nobody is going to start putting those 20 tests in place when under pressure to fix a bug. Luckily they can already be there, and your world does not spiral out of control in a frantic mess of complexity.
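    A minimal sketch of that test-driven bug-fixing loop (the `median` function and its bug are invented for illustration):

```python
# Invented example of test-driven bug fixing: a failing test reproduces
# the reported bug first; after the fix it stays on as a regression guard.
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    # The bug: even-length inputs used to return ordered[mid] instead of
    # the average of the two middle values. This line is the fix.
    return (ordered[mid - 1] + ordered[mid]) / 2

# The reproducing test, written before the fix was attempted:
assert median([1, 2, 3, 4]) == 2.5
# The pre-existing tests prove the fix broke nothing else:
assert median([3, 1, 2]) == 2
assert median([7]) == 7
```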

    Test-driven development introduces a degree of precision, control, and simplicity to the development team that is profound. Of course it's more work, and requires much more insight from the developer.

    Two things have held true in the decade or so that I have been teaching this to developers though:

    Firstly, absolutely everybody is opposed to the supposed "overheads" of this process: "But we have deadlines!" "We don't have time!"

    Secondly, every last developer who adopts the test-driven development process (not because they "have to", but in their hearts) rises to a new plateau of understanding – they speak a different language when it comes to coding, and approach problem solving differently. All of a sudden, it's obvious why other engineering disciplines do things this way as a matter of course. And we software engineers have infinitely better tools than our distant relations in mechanical, chemical, and civil engineering – we can automatically build and break components at zero cost!

    They never go back to this uncontrolled hacking that most people call "software development".

  21. TDD is not "The One True Programming Methodology", but clearly one of the best if you are doing OO design.

    At my previous workplace we were doing a lot of TDD, and at first I didn't like it at all. Later I recognized that we had completely misused the whole methodology. The problem was that we didn't allow the tests to influence our code, which resulted in both unmaintainable code and unmaintainable tests (lots of hacks in the tests, because the code was not unit-testable, and not reusable or flexible). Insisting on unit testing while not allowing the code to change because of the tests is two incompatible mindsets, and it results in a lot of stress in the tests.

    Later I started to get into TDD more deeply, and I learned how I can "listen to the tests" and alter my design because of them. Overall, it helped me to understand OO in a better way. (At that time I did not consider myself an inexperienced programmer, and I thought I already knew everything about OO, but that was not the case.)
    So, now I consider TDD a design process which helps me to develop good OO designs.
    (I emphasise the OOP intentionally, because I started studying functional programming recently, and I'm still not sure whether TDD offers the same benefits in that world.)

    Regarding TDD, I recommend the book "Growing Object-Oriented Software, Guided by Tests".
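    A small sketch of what "listening to the tests" can mean in practice; the `ReportService` example is invented, but it shows the typical move from a hard-wired dependency to an injected one:

```python
# Invented sketch: imagine the original ReportService built its own
# database connection internally, so the tests were full of hacks.
# "Listening to the tests" pushes the dependency outward, where a test
# can replace it without any hacking.
class ReportService:
    def __init__(self, fetch_rows):
        # Injected collaborator: production passes a real query function,
        # a test passes a plain stub.
        self._fetch_rows = fetch_rows

    def total_sales(self):
        return sum(row["amount"] for row in self._fetch_rows())

# The test needs no database, no mocks, no hacks:
stub = lambda: [{"amount": 10}, {"amount": 32}]
assert ReportService(stub).total_sales() == 42
```

    The test was hard to write precisely because the design was rigid; letting the test win improves the production design too.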

  22. This article isn't written "brashly"; rather, it's written with little merit.

    First three paragraphs the author is just trolling.

    Fourth paragraph, you admit that doing TDD is sometimes good.

    Fifth paragraph, and you are just taking things too literally. Not all development should be TDD. API discovery, testing the waters, is okay without TDD. There are also other things that cannot be TDD, like GUI-related work. But this is not a huge portion of coding, so don't hold on to the 20% of development and edge cases and claim TDD sucks based on those.

    Sixth paragraph, you again say TDD is good sometimes, then you troll again.

    Seventh paragraph, you don't back this claim up.

    Eighth paragraph…. oh fcuk it, I give up… this is useless.

  23. Great post. For those that are caught up in the hype and want to imply that you somehow are not a good coder or have poor design skills because you don't strictly follow TDD, you're delusional and you're in severe need of help. Building software is all about trade-offs. There is no ONE AND ONLY way to program that beats all. It's all about what you think is best for you and/or your team. For example, you can justify all your decisions by starting each sentence off with "But Martin Fowler said____"….or you can grow some balls and make a decision for yourself. Stop letting people control your lives as developers and go against the grain sometimes!! You guys crack me up. Calling this man out because he has the balls to admit that sometimes he doesn't think adhering to the process to the T is worth all that it's portrayed to be. What a buncha chumps!! Even Jeffrey Palermo says he and his crew at Headspring don't always write their tests first. They just make sure their tests are committed at the same time their code is. It's good to do initially, but after a while you just know what the hell you're doing. I'm not saying don't write tests…they're helpful as hell…no doubt. But writing them FIRST EACH AND EVERY TIME…naaaaaah!! And that word "trolling" cracks me up. It's become synonymous with…"this guy has the balls to provide a counter argument for a widely supported practice". Get real, people!!

  24. Antwan, the original author does NOT present a counter-argument – that is the point! If anybody can present an argument (i.e. a conclusion based on logical premises) that test-driven development *reduces* quality or productivity under *any* circumstances, let him come forward.

    As it stands, it reads a little like "sometimes I just don't feel like producing good quality work, because it's too much effort. I think striving for good quality sucks."

    Do you also sometimes say: "Designing things before I build them EACH AND EVERY TIME… naaaahhhh!!" ? Where does it stop? "Satisfying the customer's requirements EVERY TIME? Naaahhhh!!. Writing code that compiles EVERY TIME? Naaahhhh!!"

    Good heavens, man, what kind of software do you build? You are the one that has to "get real".

    Did you even read my previous post? Can you present a counter-argument to any of my statements?

    1. Here’s a counter argument that shows how TDD is not better (in fact, is worse in terms of the quality of code produced and of productivity) than Test-Last:
      http://theruntime.com/blogs/jacob/archive/2008/01/22/tdd-proven-effective-or-is-it.aspx

      – The control group (non-TDD or “Test Last”) had higher quality in every dimension—they had higher floor, ceiling, mean, and median quality.
      – The control group produced higher quality with consistently fewer tests.
      – Quality was better correlated to number of tests for the TDD group (an interesting point of differentiation that I’m not sure the authors caught).
      – The control group’s productivity was highly predictable as a function of number of tests and had a stronger correlation than the TDD group.

    2. As it stands, it reads a little like “sometimes I just don’t feel like producing good quality work, because it’s too much effort. I think striving for good quality sucks.”

      This is called begging the question. It’s a severe failure in debate strategy because it assumes facts not in evidence and pretends they’re verified true then bases conclusions about their truth on themselves.

      You’ve not only got no proof his work isn’t quality because of an absence of TDD, you’ve got no proof his work isn’t quality at all.

      bool poppycock = true;

  25. I have studied, practiced, and researched TDD in academia for a long time. I can say that I am a TDD evangelist. I do believe that TDD makes a real difference in my development environment.

    However, you can't say "Always" or "Never" in any software engineering context. If someone still thinks that TDD, or any other practice, should be done 100% of the time, s/he is wrong.

    TDD is a tool, like many others that we have. You should use it whenever you need it. It is up to the developer to identify the moments when he needs to use TDD and the moments when he does not. This is what I expect from an experienced developer.

  26. Quote (Mauricio Aniche):
    > However, you can't say "Always" or "Never" in
    > any software engineering context. If someone
    > still thinks that TDD, or any other practice,
    > should be done 100% of the time, s/he is wrong.

    Of course you can! One can logically argue that Test-Driven Development will *always* produce higher-quality output. Of course, higher-quality means slower, more expensive, and requires a stronger class of developers.

    One should thus not say "one must always follow test-driven development" just like one should not say "one must always pursue the highest quality".

    But if the decision is indeed made to pursue quality – and with very complex software projects and small teams, this is a good idea – nobody has yet presented an argument that test-driven development will not always result in higher quality.

    1. I’m late to the game here but wanted to comment on this post, having read your previous post as well.

      I emphatically disagree with your statement:

      “Of course, higher-quality means slower, more expensive, and requires a stronger class of developers.”

      Higher quality code means exactly the opposite of this statement. Sloppy code is easy to hack at but hard to get working with any degree of certainty. Quality code is so easily understood, as to clarity of purpose and fitness for release, that extending it is quick and cheap. Quality code tends to lend itself to new and exciting feature development in ways that poor code makes impossible.

      My mantra to software teams is that “you learn what you do”. What a team chooses to do can either serve as a building block to higher performance (better code faster) or lead to stagnation or worse… toiling. The insidious thing is that from many (ignorant) perspectives high performance, stagnation, and toiling can all look and feel the same. They all require the same amount of effort. The difference is in the return for the effort that each provides. Until a team has experienced high performance, they are unable to assess their current state accurately. But once things “click” there is no going back without a fight.

      A team that is producing poor quality code will likely feel overworked, unclear as to requirements and exhibit cynical and defensive behavior.

      Note to Dawid Loubser, I’m being facetious. I agree with your perspective on testing. 🙂 Though I do disagree with quality = slow.

  27. Great article! The article's point is quite clear IMHO, but the way it points out what we are doing wrong is a bit harsh.

    I can't agree more with the purpose of automated testing: helping you to develop good software. It is of no avail to follow a cookbook recipe in the wrong context.

    Also, as Hanley, chadastrophic and Magly say, you should always strike a balance between the benefits it provides and its costs; doing a webapp, for example, is not the same as doing software for a pacemaker.

  28. Dawid Loubser,

    There are plenty of counterarguments possible (and necessary).

    First, there are many ways of writing correct code and of ensuring that already-written code is correct. Only a fraction of those involve unit tests. Claiming that TDD is the best practice without comparing it to (or even knowing about) other practices is pure arrogance.

    Second, it is perfectly possible to write horrible code while having 100% unit test coverage. Your code can be unreadable, hard to navigate and overly complex, for example. Another common illness of TDD practitioners is the tendency to sweep bugs under the carpet, moving them to config files and the database. Yes, yes, your code is perfect and super-configurable, but if your application fails to reliably work in real life, I don't really care.

    Needless to say, these are not theoretical issues. I'm speaking from experience.

    Finally, it's absurd to pretend that "real" engineers always deliver or even want to deliver 100% correct code. If your hardware fails in 1% of all transactions, is it reasonable to attempt to fix some software issue that affects 0.01% of all transactions? What's the cost of fixing hardware? What's the costs of fixing software? What's the cost of failures? That's the kind of reasoning I would expect from a real engineer.

  29. Frankly, most programmers are scared of writing tests, and unfortunately TDD many times has been presented as a rigorous discipline or worse, dogma. This just scares them off even more. While I'm actually an advocate of pragmatic TDD (or even TFD where it makes sense), the real value in TDD is the rapid feedback you get. It reduces the mental load on a developer (and in my professional opinion can produce better designs). For a moment, don't think of the unit test as a test. Instead, think of it as a "mini prototyping environment" where you can quickly write a piece of code, immediately execute it, and see the results. If viewed this way, pragmatic TDD can become a very liberating exercise as you feel less constrained and more willing to explore.
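    Viewed that way, a "test" can be nothing more than a scratchpad that runs instantly and keeps running; this sketch explores real `str.split` behaviour the way one might poke at it in a REPL:

```python
# A test used as a mini prototyping environment: each assertion records
# the answer to a "what does this actually do?" question about str.split.
def test_split_exploration():
    assert "a  b".split() == ["a", "b"]         # bare split collapses runs
    assert "a  b".split(" ") == ["a", "", "b"]  # explicit separator doesn't
    assert "  a".split() == ["a"]               # bare split strips the edges

test_split_exploration()
```

    Unlike a throwaway REPL session, the findings stay executable, so the rapid feedback keeps paying off.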

  30. Eternally lost said it well when he/she said to think of it like a mini prototyping environment. I remember back when I and other devs would write up little console apps to try something out, but writing it as a test is the way to go.

    Where it gets hard is when you have to mock! It can get very painful, and I have seen first hand from a top TDD guy that it may show green in the test, but once it's in production you can still get bad results!
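    A sketch of that green-in-test, broken-in-production trap (the price-lookup service here is hypothetical): the mock encodes an assumption about the collaborator, and the test can only ever verify the code against that assumption:

```python
from unittest import mock

# Hypothetical price-lookup service: the mock encodes what we *believe*
# the collaborator returns, so the test can pass while production fails.
def get_price_in_cents(client, sku):
    return round(client.lookup(sku) * 100)

def test_with_mock():
    client = mock.Mock()
    client.lookup.return_value = 19.99   # our assumption about the service
    assert get_price_in_cents(client, "SKU-1") == 1999

test_with_mock()
# Green! But if the real lookup() already returns cents, or a string,
# this stays green while production breaks: the mock proves the code
# matches our assumptions, not the real integration.
```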

  31. I don’t think this is an argument against TDD at all, nor is it specific to TDD. Cargo-culting is bad. When using any tool/methodology, understand why you are using it first. Never believe anyone who speaks of the “one true way” in any context. Problem solved.

    Now, use TDD (judiciously) to write better code faster.

  32. I’m wondering if you ever wrote that article about testing large web apps, because I’ve been thinking about how to do that myself. I can understand how TDD, or unit testing in general, works well with modules and functions, but I’m having a hard time seeing how to do it with a web site. When a programming change breaks a web page, it’s often not that any function failed, but that a tag change screwed up the way the divs lay out or something like that. It seems like it would take awfully complex automated tests to ensure that a web page “works” with enough confidence to make manual checking unnecessary. Or is that not the kind of functionality you’re talking about testing? Thanks.

  33. Through experience I have found that TDD seems to have turned people’s heads off to the wider design scope, and code normalization has pretty much evaporated. The devs practising it religiously and waving it like a flag to “prove” they are the best engineers also seem to produce massively less software, and the quality seems to be no better (if anything, bug lists seem to be getting longer, as devs rely on green lights to tell them their software is working rather than running it). Production bugs are becoming more common as people rely on tests to tell them everything is ok and are being given false positives.

    I believe it has become a crutch for weak developers, who cannot be blamed for their poor code as long as all the lights are green. Using it as a mark of the quality of a developer is ridiculous in and of itself; I have met both good and bad developers working under TDD, and they were just as good or bad before using it. I think there seems to be a miscomprehension in the latest batch of developers about what acceptable quality is. Some of us have been building complex enterprise systems for over 13 years and have never had a bug list that stretched to more than 20 or so items, and the software has done precisely what it was created to do. Developers used to be more like surgeons: you didn’t make mistakes, you didn’t need 1000s of tests, because the only faults were usually anomalies (probably the same kind of problems automated tests seem to miss). Now it seems you just keep bumbling forward randomly until you get the right results.

    Sorry for all those people who think TDD makes them elite, but it certainly gives the rest of us a good laugh along the way. Trends suggest software costs and failure rates have risen since Agile, Scrum, and TDD became more prevalent, which doesn’t particularly surprise me. Try getting it right the first time a bit more often and you won’t have to rely on gimmicks to keep your egos inflated.

    1. Agreed. The reason I don’t agree with TDD is because apps don’t get broken by devs; they get broken by users. This is an age-old Waterloo of programmers, and TDD is to me just another attempt to get around it which, if diagnosed with a harsh eye, smacks a bit of disdain for thorough error trapping and debugging.

      But those things are inevitable. Developers, and most anyone else building something, fail the second they start thinking users are going to fail in a nice, well ordered, methodology-conscious way. They never do. They do the one thing your team sat there laughingly claiming they’d never do. But one way they won’t break it is certain, and that’s before it is sitting in front of them.

      The whole thing is just seriously counter-intuitive to me as a person who realizes I can’t think of every way my functionality can be reamed by an end user. It takes arrogance to think I can vet functionality through some clean room theoretical test. And I don’t need such tests to determine what the app is supposed to do. I get that from whoever is commissioning it telling the team what they want, and the team mapping it out graphically either through pseudo code or through flowcharting. Business rules are not code based – they are theory based and expressed in English on the client side first and then the bridge for them is developed with the goal already in mind.

      I don’t see how TDD eliminates any code rewrites or saves any time when the second the user does something or asks for something the preliminary code didn’t consider it’s time to write or rewrite more code.

    2. “I believe it has become a crutch for weak developers where they can not be blamed for their poor code as long as all the lights are green.”

      All methodologies are an attempt by managers to get usable work out of mediocre people. If managers waited until they could find enough first-rate people, the job wouldn’t get done.

      TDD is targeted at programmers who don’t think to test their code at all. So is Unit testing.
      Vocal evangelists are demonstrating the Dunning-Kruger effect.

  34. I think the man practicing the methodology is much more important than the methodology itself.

  35. I still don’t get how TDD can work… how do you develop your tests before you develop your tests? it is a lame concept for lame programmers who want to pull the wool over the eyes of non-technical types.

    1. Julian,

      I’m assuming you meant to say “how do you develop your tests before you develop your _code_.” Your current statement doesn’t make any sense.

      But the way you develop a test before you write code is to know what it is you want the code to do and then test for that condition. You then satisfy the condition so that the test passes. If you have ever written unit tests _after coding_ it should be a short leap to understanding how TDD works.

      Without TDD you write your expectations after you code. If you think about it this is quite backwards. It is subject to all the follies of the human tendency to rationalize decisions and behavior. With TDD you write your expectations first. This focuses development on solving an incremental set of problems while shielding the developer against regressions (as you have a battery of tests relevant to your current efforts).

  36. Nope, I meant tests, not code – of course you can change “develop” to “code” twice if you like… Assuming you are going to code your tests (i.e. develop them), how do you test before you develop? Of course, I am assuming, as in all the examples, that the tests were coded/developed, as opposed to using a generic, 100% code-free testing tool that is very restrictive and dedicated to a specific language/environment.

  37. All of this debate about TDD (and about Agile, OO vs Procedural code etc. etc.) diverts time and attention from the three things that really matter, and make the difference between a skillful developer and a so-so developer:

    1. Understanding the problem domain
    2. Understanding your programming tools
    3. Caring about doing a good job

    It is lunacy to imagine that if the “methodology” and development process are right they will somehow guarantee that mediocre developers produce quality software.

  38. “Making tests a central part of the process because they’re useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.”

    I think the point of Test Driven Development is entirely that it’s useful to developers. If you’re arguing about whether writing tests after coding is just as good as writing before, then I would agree, that I wouldn’t really care, *as long as the tests get written*.

    In practice, what I have seen with people who write the tests after, is that they often don’t do it, and after means never. And frequently they check in code with flawed code paths (that were never tested), and then throw it over the wall to QA and hope that someone doing manual testing will catch problems with the code.

    Who determines whether tests help the developer or not? Developers who have difficulty writing tests (or getting into the Test First mindset) will argue that the tests don’t help. And when they check in code with errors that a unit test would have caught, they treat it as an unavoidable error. How many times have you seen someone accidentally switch the meaning of a boolean flag, and get the exact opposite result they wanted? I’ve seen it dozens of times. A unit test written first would have caught that problem.

    Testing is engineering – creating structures that guarantee the code does specific things. But as the dictum goes, “You only need to test the things you care about working”.
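    The switched-boolean example is easy to make concrete; this sketch (names invented) shows the kind of test that pins the flag’s meaning before anyone can flip it:

```python
# Invented example of the switched-flag bug: a test written first fixes
# the meaning of the boolean, so inverting the branches fails immediately.
def format_name(first, last, last_name_first=False):
    if last_name_first:
        return f"{last}, {first}"
    return f"{first} {last}"

# Swap the two return statements and both assertions fail at once:
assert format_name("Ada", "Lovelace") == "Ada Lovelace"
assert format_name("Ada", "Lovelace", last_name_first=True) == "Lovelace, Ada"
```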

  39. Here is what some of today’s job descriptions really mean:

    – “Agile Project Manager”: “I really suck at planning. I can only plan one week ahead. I also suck at managing my client’s needs and guiding them through their expectations, and most importantly, making sure they stand on the same requirements throughout the duration of the project. So to justify my lack of professionalism, I call myself ‘agile’, and blame any failure on this technique.”

    – “TDD fanatic Developer”: “I suck at analyzing the impact of my code changes. I cannot see the whole picture of the system, nor assess what areas QA should test when I make a code change. I am too narrow-minded to see the system as a general thing. If my code changes end up breaking the system, I just protect myself by saying that ‘All tests passed’. I surely never read General Systems Theory, which states that a system is more than its parts. Hence, a collection of tests will never encompass the whole system.”

  40. I think the negative comments made here that always testing first is lunacy, ridiculous, fanatical etc. must have been made by people who have not really understood test first.

    Until you execute your code, large measures of foresight, reasoning, knowledge, luck, and guesswork will be needed to determine how it will behave. Test first eliminates the need to predict anything by making the development process 100% empirical, because you are *constantly* running the code.

    1. Test First is very helpful for those who randomly try different things until something works, for whatever unknown reason.

      Providing large measures of foresight, reasoning and knowledge is part of the job, and Test First, specifically writing an automated test as a kind of spec, is a pointless distraction.

  41. We have a large code-base from the last 17 years or so. It was never written for unit testing, or TDD. Recently one of the new devs has taken it upon himself to roll out TDD into the code, so any module he touches he is re-factoring to get testing in there.

    The re-writing is costly, and introduces bugs, and means other devs have to re-learn the new code base. It is also really slow!

    There is now a lot of extra code to maintain:

    * The test code itself
    * the extra code required to make the real code testable

    The number of extra interfaces, classes and function calls means call stacks are longer. Code is a lot harder to figure out. One single member function of 20 lines got turned into 3 classes spread over as many files. This makes it a lot harder to get a picture of what the code is doing.

    Also since our program is multi-threaded, we are getting more deadlock issues, as separation of key objects out into individually-testable objects has introduced more locking objects, and locking complexity.

    I see all these comments about TDD, and everyone comments about proving the result of some function, but we don’t just write functions that take X and Y and return X+Y. Also, does your function return the correct value when hit 1,000 times per second from 200 threads, whilst some other function is also being hammered?

    So I wonder if it’s the concept, or the TDD tools (CppUnit), that is the problem, since it doesn’t really even attempt to address issues such as multi-threading, deadlocking and heap corruption, which historically have been our biggest issues.

    The dev said it gives peace of mind, but actually I find it gives me the opposite.

    Is this just a bad implementation of TDD? Or a typical story? Any pointers?

    1. Hi Adrien,

      There are a few different issues raised by your post. The first is how to introduce unit tests into a legacy code base. The principle I have always followed (which I got from the original Extreme Programming principles somewhere) is that we only write unit tests when we are fixing bugs in the code. That way, whatever risk is introduced by making code changes is balanced by the fact that there is a business reason to change the code anyway. This also has some additional benefits: it minimizes the extra development cost added by the unit tests, and also minimizes the amount of code which needs to be tested after the fix is deployed. I think I also read (wherever I saw that principle) that it is usually small sections of the code that are frequently fixed and revisited. Other sections which are working fine do not need to be touched. I would not advise modifying a legacy code base *only* for the purpose of adding unit tests.

      I have direct experience taking a big, bad, legacy code base (Java – tens of thousands of lines of code) and rebuilding it with unit tests. As mentioned above, I only changed code when adding new features or fixing bugs, but I made sure that any change I made had unit tests, AND I also wrote unit tests to make sure existing functions of the same code worked (before I added the new features/fixes). I can tell you we had 2 very direct benefits – lots of spaghetti code was gradually refactored into neater, better-designed units, and the quality of releases (as measured by the number of bugs in production) improved greatly. Because we now have a large body of unit tests, we have the courage to make major changes to the code without worrying (nearly as much) about causing major production outages (which happened frequently before).

      So that brings up a second point of your post – the refactoring and coding with tests introducing bugs. Unit tests require just as much thought as the production coding itself. If you write tests for the wrong things, or an incomplete set of tests, then your code will be no better (and maybe worse) than it was before. The only guarantee you get with unit tests is proof that the production code does what the unit test checks for, no more and no less.

      The last point I’ll address is that you said your main problems are multi-threading, deadlocking, heap corruption. If you can replicate any of these failures by writing a test, you can be pretty confident that once you get the test to pass, you’ve fixed that problem (or at a minimum, that instance of the problem). At least one benefit you get by adding a unit test for this, is that if someone else later makes some innocent change to that block of code, and reintroduces the deadlock or synchronization problem, you will have a visible test failure to let you know that. In the Java code base, I have definitely been able to use unit tests to replicate production thread race and deadlock conditions, and use those tests as an indicator to show when threading issues were fixed. (And also I saw those tests fail when some other change was made to the code, which broke the original fix).
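The deadlock/race regression tests the reply describes can be sketched roughly as follows, in Python for brevity (the `SafeCounter` class and numbers are hypothetical, not from the thread). Once a lost-update race in a shared counter has been fixed with a lock, the test hammers it from many threads and asserts no increments are lost; if someone later removes the lock, the test is likely, though not guaranteed, to start failing:

```python
import threading

class SafeCounter:
    """Hypothetical shared counter whose race fix was adding a lock."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # removing this lock reintroduces the original race
            self._value += 1

    @property
    def value(self):
        return self._value

def test_no_lost_updates(threads=20, per_thread=10_000):
    counter = SafeCounter()

    def worker():
        for _ in range(per_thread):
            counter.increment()

    workers = [threading.Thread(target=worker) for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # every increment must be visible; a reintroduced race loses updates
    assert counter.value == threads * per_thread

test_no_lost_updates()
```

As the reply notes, such a test proves only that this instance of the problem stays fixed; race reproduction is inherently probabilistic.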

      To summarize, writing unit tests does not allow you to shut off your business sense, or common sense. But they do give you a safety net for maintaining a code base for a long period of time.

      1. Clarification of post above:
        we only write unit tests FOR LEGACY CODE, when we are fixing bugs in the code

        is what I meant to say. For new code, we always write tests while writing the new code.

  42. Hi

    thanks for your insight. In terms of testing for deadlock, heap corruption things etc, we are often talking about events that can be time-related / race conditions, so typically we’ll set up a bunch of machines to bash the @^$% out of our software for long enough for us to feel comfortable about it. It can take several days before we gain a level of comfort.

    That sort of testing really doesn’t fit into a unit-testing or even CI / automated (in terms of break build if test fails) model.

    Also, how do you unit test a void function? There’s no return. You’d have to somehow inspect what went on inside to see if it’s correct, and I’ve seen code put in that provides dangerous access to things that shouldn’t be accessed, just to enable a test to be written – so the requirement to test it prompted the problem. Those accessors can then get used by others outside of test code and create problems.
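One common answer to the void-function question, sketched here with a hypothetical `apply_discount` example (not from the thread): assert on the observable state the function mutates, rather than adding back-door accessors just for the test:

```python
# Hypothetical sketch: a "void" function (no return value) can still be
# tested by asserting on the public state it mutates, without exposing
# internals just for the test's benefit.

class Order:
    def __init__(self, total):
        self.total = total  # already part of the public interface

def apply_discount(order, percent):
    # Returns nothing; its entire effect is the mutation of order.total.
    order.total -= order.total * percent / 100

def test_apply_discount():
    order = Order(total=200.0)
    apply_discount(order, 10)
    assert order.total == 180.0  # observe the effect, not the internals

test_apply_discount()
```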

    I guess we must always be judicious about what we test, but then you get people saying things like “if it can’t be tested it has no value”. So there’s a lot of pressure to make absolutely _everything_ testable.

    Actually many of our bugs came about due to non-threadsafe reference-counted copying of std::string in msvc 6. Or missing locks around shared containers (which includes strings).

    I’m basically still in a research phase myself about the overall merit of TDD for our code-base: whether we want to proceed with it, and to what level. It does have a cost, and the benefit is difficult to predict; the best guide we have is the historic cost of tracking and dealing with past issues that would have been caught by the unit tests we currently see as practicable. I don’t want to say no to it, because I understand there are benefits. I am concerned, though, about what it seems to do to design (over-abstraction and increased line-count and complexity). This can create problems when maintaining the code if it becomes less readable and understandable.

    A lot of code becomes decoupled where it is logically coupled (e.g. single classes that do several things with the same data get split out), with glue classes or action classes created to bridge the gap we have introduced.

    Surely there’s a way to do it that doesn’t create these other issues?

  43. There are other paradigms that don’t rely on injection and decoupling to achieve good overall automated test coverage. Testing at all the inputs and outputs as a closed system can work in some situations, or breaking all code out into static input/output operations that are intrinsically testable and relying on integration testing to cover the framework for those operations. Let’s not forget the goal is the quality of the software; if this is compromised by making the code testable, it doesn’t matter whether the tests pass or fail – the software is already substandard. Ignore the hype: automated testing is useful, but not at the expense of the software’s design or maintainability. If you can achieve both, perfect, but the reality seems to be that solutions suffer heavily from being designed for test rather than for function.
    TDD test first (the purist’s view) really does not stack up commercially. Even in a single iteration, things can change enough that you end up writing the tests 4-5 times. Despite what some people would suggest, this does eat time, and most departments are running on a budget and deadlines. If you have a perfect specification, with perfect use cases, then maybe it will work, but let’s face it, that doesn’t happen very often.
    The same thing applies now that has always applied: think through what you are doing and pick the best approach. Anyone who starts spouting “you must always do it this way” has flawed judgement. I would also say beware of what you read online; it can give a distorted view of reality. There always seems to be a bunch of people more than willing to regurgitate what they have read elsewhere – fashionistas of programming, all flash and no function. Don’t be surprised to find that most of them are students, hobbyists, or juniors who have never delivered an entire commercial project in their lives.
    So in conclusion, don’t tear your solution to pieces to make it testable; quality will probably suffer rather than improve. Where it is not intrusive, test the high-risk areas and test exposed interfaces for compatibility. Don’t go testing every line of code – you’ll blow your budget and end up rushing and making mistakes elsewhere to make up for writing thousands of tests that basically just prove that if-statements still work. Tests can also be wrong or incomplete; they support manual testing, they don’t replace it.
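The “intrinsically testable” static input/output operations mentioned in the comment above can be sketched with a hypothetical pure function (my example, not the commenter's): no injection, no mocks, just inputs and expected outputs:

```python
# Hypothetical sketch of a pure input/output operation: the output depends
# only on the input, so there are no collaborators to decouple or mock.

def normalise_name(raw):
    # Collapse whitespace and capitalise each word.
    return " ".join(part.capitalize() for part in raw.split())

def test_normalise_name():
    assert normalise_name("  ada   lovelace ") == "Ada Lovelace"
    assert normalise_name("GRACE HOPPER") == "Grace Hopper"

test_normalise_name()
```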

  44. While I find TDD strange and cumbersome compared to how I’m used to developing and testing myself – I’m more of a code cowboy than a team coder – I can see how TDD or BDD can help teams or new developers quickly locate problems, instead of applications just existing as black boxes to everyone except their authors, or those patient enough to study the design patterns, flaws, etc. quickly enough not to cause headaches. I see it this way: now or later, someone (you, or an unfamiliar team member) is going to have a headache interpreting requirements, so why not impose it on the initial developer(s), who will need to re-articulate and debrief the contract their code fulfills, not just silently implement it?

    1. One more benefit to add, which might also be useful to you as a “code cowboy”, which is the regression testing that the test suite gives you when you go back to modify your code in significant ways. Instead of having to retest multiple scenarios by hand after you’ve modified that code which has been in production for a year, you can just hit the “Run” button and if it shows green, be confident that your change didn’t break something that was previously working.

      I’ve coded both ways, TDD and non-TDD. I find that if I don’t use TDD, then I end up manually testing everything anyway (and often in the debugger, so I can make sure I’m hitting the branches of code that I expect). With TDD, I know exactly what code I’m hitting. It is more tedious and slow to develop this way, but the accuracy is very high.

  45. “Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down in to tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The former is the right approach in a small number of situations – tests around bugs, or small, very well-defined pieces of functionality).”

    I think you’ve missed the main point of how to do TDD. Rather than “zooming in” which you don’t want to do since you don’t really know what the requirements are at that level, let your tests drive the interface design for collaborators which at this point will just be mock objects. In this way you can write those top level tests without the need to have most of your software implemented, and then move down a level and write tests for the mocked interfaces etc etc. In this way each interface you design is in direct response to the requests made from the level above.

    I can’t really see any problems with the methodology I just outlined, but I’d be interested to hear what others think.
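The outside-in approach described in the comment above can be roughly sketched with Python's `unittest.mock` (the `place_order` function and the `payment_gateway` interface are hypothetical examples, not from the thread): the top-level test talks to a collaborator that does not exist yet, and the calls the mock records become the interface to implement one level down:

```python
from unittest.mock import Mock

# Hypothetical sketch of mock-driven, top-down TDD: the top-level test is
# written first, and the collaborator it needs is a mock whose expected
# calls define the interface for the next level down.

def place_order(order, payment_gateway):
    # Top-level logic under test; payment_gateway's interface is being
    # designed by the expectations in the test below.
    payment_gateway.charge(order["customer"], order["amount"])
    return "confirmed"

def test_place_order_charges_the_customer():
    gateway = Mock()  # stands in for the not-yet-written level below
    status = place_order({"customer": "c-1", "amount": 42}, gateway)
    gateway.charge.assert_called_once_with("c-1", 42)
    assert status == "confirmed"

test_place_order_charges_the_customer()
```

The next step in this style would be to write tests for a real gateway implementing the `charge` interface the mock just defined.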

    P

  46. On a large project that spans several international teams, we have found TDD to be a solid practice. We approach it not from a “Build a Test Case” perspective, but more from a Requirements Mapping need: take a Requirement/Task from the Backlog and write a test or tests to satisfy that requirement. The code that then passes that test has satisfied that requirement. Our coverage isn’t 100% (closer to 85-90%), but at least we don’t run into a test-later or test-never situation. All our products have to pass FDA scrutiny, and TDD has helped along the road. We cover the code and SQA covers the BDD and Acceptance. None of us are drones or cult members, but many of us see the wisdom in “Test-First” approaches.

  47. I don’t think TDD is good. We are compelled to learn so many things for our development that when we start designing, we should be able to go through the process without the need for unit testing. Let’s remember that when we paint or write as artists (if we are such), we do the right thing as if it were poured from genius, then we adjust – but the flow must not be interrupted by ASSERT IS TRUE… ? No thanks. I like pair programming instead, with the right partner.
    So thanks for your lines, Author!! I don’t feel so alone now.

  48. TDD is something that I recently heard about, touted by the new CTO at place where I used to work. Before I looked up the definition, I thought that it was a good way to develop a software. Later, when I really started looking into it, the following became my thoughts on the matter:

    – unless TDD is just unit testing, the developer should not be writing the test cases
    – from the requirements document, there should be 2 parallel tracks:
    1. developer starts writing the code to address the requirements
    2. QA starts writing the test cases for ALL of the requirements, keeping as much of it automated, as possible
    – each version of the code should be subjected to the entire test suite
    – at first, 99% of the test cases will fail and as the development effort matures, the failed test cases will tend towards zero

    Testing is an offensive operation and coding is a defensive operation. This is the reason why QA did not, traditionally, report to the Project Manager: there is a conflict of interest, i.e. QA’s objective is to break the system as much as possible, and it should come at the software with a sledgehammer. I don’t see how a developer writing his/her own test cases can be that aggressive.

    Probably the mistake that I made was to think that TDD was just developing ALL the test cases and using them to periodically and frequently check the entire software for completeness.

  49. Here because I googled “I hate test driven development”, I wanted to check if I was crazy, not sure yet …

    Thanks for the voice of reason! The only places I see TDD being applicable:

    1. Large teams integrating code of different qualities and trying to make it work (trust is lower)
    2. Product development, where a break fix has implications across the user base (stakes are higher)
    3. Small enhancement to a large messy ball of mud (CYA)

    All other times, let’s try succeeding before we fail, please?

  50. Why does this come up fourth in my Google results….GoogleFail.

    This is nothing more than a subjective, personal, and highly emotive rant, with no examples or logic used to back it up. Using brief and pointless phrases like “fail” shows your unprofessionalism.
    You’ve clearly been lectured by some TDD nut, taken it personally, as though he’s making out you’re doing things “wrong”, and now you have a huge chip on your shoulder.

  51. Madhuri,

    “Seek and ye shall find”. We form our assumptions and biases and then proceed to go forth and “discover” those things that confirm them. To argue against TDD you must either argue against automated tests all together or argue _for_ their proper and most beneficial usage. Anyone who is both for testing and against TDD does our trade a huge disservice when they do not present their own testing practices for edification and review.

  52. TDD is predicated on the notion that consistent adherence to a software’s contract throughout software maintenance and revision will lead to more efficient, higher-quality software. However, what TDD adherents seem to ignore or be unaware of is the fleetingly short lifetime of a typical software application in the industry. One year is ancient, and even if a software application performs perfectly, unless it is overhauled or completely replaced with a brand new implementation, users will simply move on to something else because of this frustrating quality that users have: a never-ending lust for the new and the shiny. Therefore, investing 2 months writing non-value-add tests for a product that has one year to live can hardly be considered efficient or a good investment.

    1. One year? That seems like a very broad generalisation, and seems highly dependent on the nature of the product.
      I don’t know what industry you’ve been in, but the last four companies I worked for had codebases 5-10+ years old. All of which are successful companies, ticking along just fine.

  53. In his book Extreme Programming Refactored: The Case Against XP, the author explains where this whole XP nonsense came from, and TDD was part and parcel of that. The fact is, XP came from a gigantic failure of a payroll project. Period, end of story. So if they failed completely at that, what makes people want to copy their methodology?

    http://www.amazon.com/Extreme-Programming-Refactored-Against-Experts-ebook/dp/B004O6LQ5O/ref=sr_1_1?ie=UTF8&qid=1391725766&sr=8-1&keywords=extreme+programming+refactored

    1. I think your comments need some clarification.

      The project was going for 3 years before Kent Beck joined.
      They almost hit the 1 year deliverable afterwards.
      However, a key staff member quit and could not be replaced due to burnout/stress shortly after.
      After initially banning XP, DaimlerChrysler have since resumed its use.

      http://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compensation_System

      And lastly, XP doesn’t guarantee success, just increases the odds..

  54. Most of the people arguing here forget that the poster claims test-driven development (tests driving the design) is bad, not testing itself. I agree with him that tests cannot produce optimal architectural designs; however, they may create classes that are clean and concise. You can’t expect designs driven by tests to create software architectures that are easy to maintain against changing requirements over time. That’s why software architects exist and have a different skillset from developers regarding how to create good architectures. Maintainable software is the product of collaboration between good software architects and developers, and it requires effort for each project, because each project can be very different in terms of requirements.

    So don’t expect to get great designs just by writing and analyzing tests, because that is just impossible. However, TDD is good for refining the interface of a class, and the tests created form a contract for the class that is always checkable.

  55. Good point. The whole argument about superior engineers and toy software is a bit extreme. Just like people, software products and projects don’t have to have a one-size-fits-all solution.

    Often it helps to quickly come up with something about which my understanding is not yet crystallised (this is the “exploratory” phase) and then iteratively polish it to a point where specifications or requirements can be asserted against (the “crafting” phase).

    Every developer as an individual, and any product start-up, initially faces this “exploratory” phase because of inadequate understanding of the requirement or the market respectively. TDD is a bottleneck at this “exploratory” phase, where creativity has to flourish and uncertainty has to be resolved by research. Once the “craft” phase kicks in, TDD is an excellent fit and can be a life-saver.

    For agencies and service companies at large, there is some certainty that the entire business won’t just pivot. This certainty helps to kick in the “craft” phase early and makes TDD very effective. In startups or product ventures where there is business uncertainty, there is no point in building a Cucumber/RSpec castle. Similarly, think about developing an iOS app for a product startup that takes a snap of a text recipe and figures out its nutrition content. Does TDD make sense here? Nope. We need to kick in the “exploratory” phase first.

    Like most of the things in life, TDD is good or bad depending on a specific scenario.

  56. The issue I have with TDD is the name. If you have to develop the tests, you cannot test the development of the tests before you develop the tests. There is nothing to suggest that development of the tests is not what is meant by the word “development” within the name TDD.

  57. I first learned to program in 1974 and have worked in software development since 1979. There have been many fads and methodologies in that time. Most of them are promoted most vigorously by people who are NOT expert developers! I find it extremely irritating to be told how to structure my work by a manager who loves TDD, or an Agile consultant, neither of whom could write a working 10-line shell script to save their lives.

    Almost all of these great methodologies have faded away or been discredited. TDD (and Agile) are just two of the latest fads, and they will be forgotten in due course. It is naive to think that there is any “silver bullet” and it is significant that it is the youngest members of the profession that are most enthusiastic about the ability of the “right” methodology to fix all their problems.

    New application areas open up all the time. Modern tools and languages (NOT methodologies), along with amazingly powerful hardware, relieve us of some of the more mundane parts of the work, and speed everything up. But fundamentals of good software development do not change.

    Of course testing is necessary. But APPROPRIATE testing. And that depends on the application, the languages, the people, the timescales, the level of reliability required and so on.

    [Do not even get me started on the infinite regress problem. How do you test the tests? And please do not tell me that tests can be generated automatically. Yes they can. But they test only the most trivial aspects of the code, which a good programmer simply gets right. They do not root out real-life bugs, which often arise from unexpected interactions between components, or unintended consequences of the basic design. Those automatically generated tests are the equivalent of a good writer checking the spelling of every word, and the construction of each sentence. Unnecessary and time-wasting.]

    Of course it is not always necessary to have full detailed documentation or a comprehensive design before starting development. Of course the customer should be heavily involved if that is possible and feasible. We do not need a new methodology to tell us to do what we have always done.

    Of course the statements of the Agile manifesto are true. What is more, many organisations were in reality MORE agile, before they were lumbered with all the paraphernalia of SCRUM, Sprints, Stand-ups, and all the rest.

    1. Most bad software “fads” (e.g. RUP, function point metrics) thankfully died out far more quickly. And yet 15 years on, TDD continues to grow slowly but surely. It’s past the fad stage.

      Whether or not you think TDD works, companies are tired of trusting that developers know how to build anything with quality. And judging by the voluminous code I’ve looked at, I’d have to agree with them (never mind that they often create the conditions that foster bad code). So they seek something better.

      I really don’t care if you want to persuade others not to do TDD, but it’s telling that no one has offered anything better in the past ~15 years. I don’t doubt that you’ve found success working the way you have for 36 years, but the typical codebase produced by typical developers with an attitude that they’re “good programmers” is… typically crap.

      I’ve practiced TDD on large and small systems as “just” a full-time developer at small companies and large, and apps small and large, since 2000 (at which point I’d developed software for almost 20 years without TDD). I’ve seen its warts and weaknesses (none of us ever say it’s a silver bullet–it’s a fraction of what’s required to build quality software), yet I’ve also seen people reap its considerable benefits when practiced properly. I’ve also sold it, helping others learn it through writing, hands-on mentoring, and training during that time. (Does that disqualify me from an opinion? Only if you’re trying to convince yourself that it does.)

      I stopped worrying long ago about people who didn’t find the value in it, because I’ve seen how ecstatic folks were that *did* find the value. Vice versa: If you don’t want to do TDD, don’t do it, and stop worrying about people who do. People who need to attack things that others find valuable are bigger religious zealots than those who claim silver bullet capabilities–particularly when they’ve not used those practices in any real sense.

      Ultimately it’s your team that should be deciding what’s important, and there are plenty of teams who won’t ever touch TDD. Fine by me. Tom, you should continue what you’re doing–who would dare suggest you should try anything different after 40 years?

      However, it is odd that you suggest “we were already more agile.” You must have been on the rare team. Once I graduated from one-man shops (up to the mid 80s) into larger dev efforts (late 80s-90s), I saw most others embrace absolute time-wasting nonsense (e.g. spending 3-6 months producing a document with no feedback), abusive practices (e.g. managers imposing estimates), and bad code. I *still* see the bad, sad, costly code every time I go into the average development shop and look at their codebase.

      As far as TDD: the “infinite regress problem” does not exist in real practice. And yes, auto-generated tests are worthless, but we all learned that quickly about 10 years ago. Regarding TDD and bugs, Ward Cunningham put it best–TDD doesn’t “catch bugs” in a deployed system, it prevents you from integrating your (logic) defects into the system in the first place. For me, the biggest asset of the practice, however, is that it supports my interest in IID by allowing me to continually fix small and large design problems (instead of succumbing to the fear-driven adage “it ain’t broke, don’t fix it”).

      As far as Scrum (which I am not a great fan of): I agree that it has its share of potentially abusive practices–but that’s what happens when people misinterpret things and slavishly adhere to the simple trappings that they are capable of quickly understanding. That’s how we got the bad interpretation of waterfall (c.f. Royce) in 1970.

      Most people miss the whole point and think that the practices are the end goal, or even any sort of goal, of agile software development. The goal is to be able to deliver sooner and more often with higher quality, so that you beat out the competition. The extremes between heavyweight processes (RUP and even more onerous predecessors) and hacking (aka “trusting cocksure but misguided developers”) never did that.

      The practices are a prescriptive starting point. If you’re still doing all those same practices the same way (say) six months into an initiative, you’ve failed the spirit of agile.

  58. I am sorry, my English is not very good, so I do not understand you all.

    I think TDD is useful in very small projects. But in large-scale projects, good design is better.

  59. I think software quality would improve if some of these evangelical types would put their energy into learning to design and code well, rather than proselytizing for whatever latest “universal best method” has captured their imaginations.

    TDD is not even possible, if taken to its ultimate. You cannot possibly, ever, test every conceivable condition exhaustively. You have to be selective. You have to make a judgement call about what test cases you design and automate.

    Hey! Surprise, surprise! That is what we have been doing all along!

  60. I think people need to be clear about whether they are talking about people they have met who practice and preach TDD, or TDD as a methodology in itself. 90% of the comments here don’t refer to TDD itself, but to some person’s personal and mistaken interpretation, which you had the misfortune of meeting and hearing.
    TDD does not say “You must use it at all times!”; when and how you apply it is up to the user and their own intelligence.

    http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html

  61. When TDD is done right, it offers a lot of advantages. If it’s done wrong, it harms more than it helps.
    I think it’s important to differentiate between good and misunderstood TDD. It is the same with OOP: I know folks who declare public properties and afterwards add getters and setters for them. They also inherited a class once, half a year ago. And by that they think they code object-oriented.

    Never forget: TDD is a tool. If it’s done right, it tremendously helps to decouple your application’s design. Though it is not necessary to use TDD to achieve a decoupled architecture, especially for the more experienced developers.

    Linux is a tool. Apache is a tool. Vim/Emacs/Eclipse/Notepad++ are tools. Ruby, Rails, PHP, Python are tools.
    Don’t become obsessed with a particular one, use the tool that suits the job.

    In closing, I’d like to address the obsessive mantra-style of the TDD preachers. I think it’s necessary, and actually quite helpful, to ‘put pressure’ on people in public. TDD is not easy when you start – even harder for less experienced developers – and the temptation to step back to writing strongly coupled, untested code is pretty strong.

  62. So… TDD is about writing tests to prove code.
    The tests themselves are code and therefore should there not be tests written to test those tests?

    Let’s be honest – how many people have written tests that don’t work, are just plain wrong or have to jump through so many hoops and mocks to get them working that they themselves become unmaintainable?

    1. “Let’s be honest – how many people have written tests that don’t work, are just plain wrong or have to jump through so many hoops and mocks to get them working that they themselves become unmaintainable?”

      If a developer can’t write a test that works, what makes you think s/he can write application code that works? Are you saying it is easier to write application code than test code, and if so, do you really believe that?

      I’ve seen plenty of code where developers threw in various checks and routines because they “believed” they were necessary. An analysis of the code revealed that the conditions they thought they were handling could actually *never* happen. On top of that, this extraneous code actually caused bugs!

      If a person is writing test code correctly, then they actually write only the code which is needed to pass the tests, and minimize the junk code.

      It takes smart people to write code. A person who doesn’t have the capacity to think like a developer will not be made a developer just by writing tests. On the other hand, I have real doubts that a person who can’t write tests can write anything more than the most trivial code, and furthermore, whatever code they do write will most likely be quite buggy and hard to maintain. (This is based on actual experience looking at the code of developers I have managed.)

      So while I think it is “cute” to say, “should there not be tests written to test those tests?”, it is not any more insightful than saying, in regard to monitoring systems, “should there not be monitoring systems to monitor the monitoring systems?”
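      To make that concrete: here is a minimal sketch (Python, all names hypothetical) of what “write only the code needed to pass the tests” looks like in practice. The test comes first and pins the one behaviour required; the implementation contains nothing the test does not demand – no speculative checks of the kind described above.

```python
import unittest

# Step 1: the test, written first, states the one behaviour we need.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: the simplest implementation that makes the test pass.
# No extra encodings, options, or checks we merely "believe" are
# necessary -- exactly the junk code the comment above warns about.
def slugify(title):
    return "-".join(title.lower().split())
```

      Any further behaviour (say, stripping punctuation) would be driven by writing another failing test first, not by guessing.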

      1. 1) You’re implying that writing tests/following TDD will magically make one understand which functionality has to be written and tested, and which is junk that should never see the light of day.

        How exactly will unit testing in general, and TDD in particular, help you with that? How exactly will they push you towards great design, more than thinking and working hard on the design itself? The answer is simple: they won’t.
        In practice you’re more likely to end up with a half-arsed design resting on TDD crutches, with the same amount of useless code plus extra useless code for testing. All of which will quite likely be dumped anyway.

        2) “If a person is writing test code correctly, then actually, they only write code which is needed to pass the tests.”

        True, if you can prove the minor assumption that your tests are complete, non-redundant, and cover the nastiest corner cases and combinations of inputs. Which is not just hard, but impossible – unless you already have what? Right: a proper, easy-to-reason-about design. And even if you have one, the components should speak for themselves, and their correctness should still be easier (not easy!) to establish by simple reasoning than the assumption above.

        I’ve seen many supposedly correct TDD-ed components in systems with tens of thousands of “green” tests fail after being deployed to production, because whole ranges of (non-obvious) integration use cases were simply never provided for in the design. Apparently the developers were too busy “TDD-ing everything” and never thought about them.

        3) Don’t get me wrong. Testing *is* useful, but mostly integration/end-to-end testing. Low-level tests are good for complex algorithms and data structures, but total obsession with unit testing is a waste of time.

  63. Note to self: never hire this author. TDD works every time, anywhere, any domain, any platform. No facts or data given to counter TDD. Only whining.
    Signed
    Been There Done That
    SW Development Manager

    1. @Agile Rider
      Have you read the comments at all?
      There is at least one link I noticed to a TDD vs. test-last study.

      I hope I never end up in a team with you.
      You seem to be one of those coders who are always too busy to read the docs and quick to show off.

  64. This article pretty much sums up my thoughts on this topic. Personally, I am never afraid to tell rude software engineers my honest opinion when I detect any kind of “holier-coder-than-thou” attitude. Continue to solve problems in a way that is positive for the organization you are working for. Mental agility will save your bosses’ (and your company’s) asses over and over and over. The people who are driven by all-in-one methodologies for success will never understand or approve of that; it is part of life. Being subtly threatened for one’s way of thinking, and learning to manage this on your own without getting defensive, makes you a stronger person, in my opinion.

  65. On a small cowboy project, don’t test at all. At the enterprise level, QA is essential – do BDD + TDD. BDD is essential because it tests the system the way the user interacts with it; TDD helps you design your application, plus it’s a safety net for change.

    So it is not about purists – it is very practical – and I very much doubt you have ever worked on any project that was done the BDD/TDD way; therefore you’ve never learnt the benefits.

  66. Too many people on here are swearing blind by TDD/BDD. I must admit that I recently wrote a pretty rich data library (not just an ORM) that I couldn’t have modelled by writing from scratch with traditional coding – I tried but failed and kept refactoring, and TDD brought out the design. However, I took a similar approach for another engine, and whilst all the unit tests/mocks looked great, when it came to integrating it, bits didn’t work – it would take a lot of dedication to now change the code in multiple locations (the actual code and the tests).

    What is worse, TDD tends to generate so many tests that it is hard to find a clear path through what the overall solution should do. Tests also tend to be nothing more than a means to an end with TDD, rather than a statement of intent that this is almost certainly production-worthy and the test is the nuts. I am actually a Business Intelligence developer, and whilst much of the framework/patterns needs to be plugged in before starting the actual development, nobody would claim to be able to write a major enterprise data warehouse by planning everything upfront.

    For this reason it is worth considering the two sides of the argument: TDD zealots beat people up because they only have one perspective, and they reject the benefits of integration testing because they think they have found their promised land; but at the same time there is nothing worse than starting a new project and being told that the working solution is the solution, backed up by a few integration tests (if you are lucky – and you can’t run anything in case it is linked to production or will break somebody else’s code). The big advantage of unit testing (as opposed to TDD) is that it shows the developer has attempted to provide some kind of basic evidence that the solution works, but the non-TDD brigade will see it as worse than useless, because a poorly written test takes longer to understand than the code itself.

    My opinion is: use TDD if it helps you to complete the work, but don’t expect it to be easy to tie it all together, or for others to be completely on side. You owe it to others to use frameworks such as FakeItEasy to show what the code is intending to do. I also recommend giving test methods categories such as Integration/Unit/Demonstration. Another technique I use is to exclude test classes from a project once I feel they have been superseded or have served their purpose.
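    As a rough illustration of the test-category idea above (using Python’s unittest rather than FakeItEasy, with all names hypothetical), categories can be as simple as one test class per category, so each can be run – or excluded – independently:

```python
import unittest

# Hypothetical data-library function under test.
def merge_rows(a, b):
    """Merge two row dicts; values in b win on key conflicts."""
    merged = dict(a)
    merged.update(b)
    return merged

class UnitTests(unittest.TestCase):
    """Fast, isolated checks of a single function."""
    def test_conflict_resolution(self):
        self.assertEqual(merge_rows({"id": 1}, {"id": 2}), {"id": 2})

class DemonstrationTests(unittest.TestCase):
    """Readable examples that document intended use -- the
    'Demonstration' category suggested in the comment above."""
    def test_typical_merge(self):
        row = merge_rows({"id": 1, "name": "a"}, {"name": "b"})
        self.assertEqual(row, {"id": 1, "name": "b"})

def suite(category):
    """Build a runnable suite for one category, e.g. suite(UnitTests)."""
    return unittest.defaultTestLoader.loadTestsFromTestCase(category)
```

    Retiring a category, as the comment suggests, is then just a matter of no longer loading that class into the suite.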

  67. Every programmer tests. The difference is that many programmers do manual tests, and TDD programmers do unit tests. It seems that the author resents being forced to write failing tests before production code. This may be a nuisance, but trying to wrangle tests around a large untested code base also sucks.

  68. “It’s not an arcane methodology that somehow has some magical “making your code better” side-effect…”

    You’re right. It’s not a magic formula that guarantees the benefits of certainty, courage or design (to write good tests you need to decouple your functions, which forces you to think about good design). You can still write bad tests. But to quote Robert C. Martin on this:

    “How can you consider yourself to be a professional if you do not know that all your code works? How can you know all your code works if you don’t test it every time you make a change? How can you test it every time you make a change if you don’t have automated unit tests with very high coverage? How can you get automated unit tests with very high coverage without practicing TDD?”

  69. What is missing in all the discussions I’ve seen about testing is how little attention is given to architecture and algorithms as they relate to the ability to validate correct program execution. The idea that you don’t architect your software first, or that you simply write mechanical tests that look at a tiny section of the overall program, seems very limiting. It is almost like looking through a keyhole.

    I write real-time firmware, based on OOP principles. Dataflow, architecture and algorithms are the “tools” I use. When I trace the execution of code with breakpoints, single-stepping, watch variables, etc., is that not testing? Or when I use hardware tools to inject signals and then monitor the outputs, is that not testing?

    After reading a number of commentaries on different websites, it seems like there is a noisy crowd of religious-like zealots who just demand that only they know what testing is.

    Were it only so.

  70. I for one agree with the author.
    Arguing over whether writing tests first is better than writing tests afterwards is like arguing over whether it is better to write left-handed or right-handed…
    or whether it is better to write right-to-left or left-to-right…

    It serves no purpose, because great software can be made regardless. There is this annoyingly over-religious thing that I see coming into the craft nowadays that’s just totally unnecessary. Right-handed and left-handed engineers can work together in real time on the same project, and not one clash will occur stemming from this fact.

    If anything, TDD makes it harder to adapt to certain hard realities of software engineering and fast-moving agile work; the reality being that requirements change…a lot. Or they need to be started on before being fully fleshed out, for competitive reasons. In this more realistic environment (in my world, anyway), it is better to finish the code first and write an astute suite of tests afterwards for your components. Otherwise valuable time is spent massaging both tests and code, leading to possible inconsistencies between the two.

  71. It’s always good to read a counter argument!
    I think Kent Beck, in his book “Test Driven Development: By Example”, explains it well. It looks tedious and overly cumbersome at the beginning, but he does step through it with the goal of writing clean code. The good code design does eventually come through. Should you take steps as small as those in the book? Definitely not. It’s a tool to help you learn, that’s all.
    What I take away from TDD is the concept of writing/defining tests first to ensure that you know exactly what a change/feature/defect fix is expected to do. It helped a lot with catching regressions, even before code goes for review.
    Having said that, I also agree that too much of anything can be dangerous. It should ideally be done to the level that allows a team to function most efficiently and productively. How to do that, I guess, either comes with time, or there are some tips/secrets that I need to read up on.
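    The defect-fix workflow described in that comment – write the failing test first so it states exactly what the fix is expected to do – can be sketched like this (Python, with a hypothetical parse_price function): the first test reproduces the reported bug, the second pins existing behaviour so the fix cannot regress it.

```python
import unittest

def parse_price(text):
    """Fixed version. Assume the original (buggy) code did a bare
    float(text), which crashed on inputs like '$1,200'."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

class TestPriceDefect(unittest.TestCase):
    # Step 1: reproduce the reported defect as a (initially failing)
    # test, so the expected behaviour of the fix is stated up front.
    def test_handles_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,200"), 1200.0)

    # Step 2: pin the existing behaviour so the fix cannot regress it.
    def test_plain_number_still_works(self):
        self.assertEqual(parse_price("3.5"), 3.5)
```

    Run before the fix, the first test fails and documents the defect; run after, both pass and the pair becomes the regression net mentioned above.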

Comments are closed.