BDD / TDD criticized


My point was that in TDD tests govern how you move forward and in non-TDD they don’t. There is nothing in non-TDD that says you shouldn’t use tests.

I would argue that there are a lot of very good projects that mainly do regression testing, to make sure that stuff doesn’t break. I personally think that’s where the biggest payoff is with tests.

Though, looking at reality: our ~45 kLoC Erlang codebase at work has very sparse test code (not even regression tests) but does fine. A test suite should probably be made, but we’re all very lazy and would prefer to be doing something else. None of us gets very emotional about documentation and testing, either, so that’s probably a very big factor.


I’m not sure why this is addressed to me, as I didn’t say anything about coverage or following dogma.


The plus sign was replaced by a big black dot by the software. So what I mean is that I agree with what you say and offer this as an addition.


Ah, I see. I agree completely. Mostly, I think it’s about common sense and discipline.

I think lots of people find that it’s much easier to keep their discipline with testing if they think about tests first.

There are also certain processes that I think work better with testing first. Personally, if I’m making something in Racket and I realize that I would like a macro, I’ll usually spin up a new window, write an assertion that some input should produce some output, and then start to work on actually implementing the macro. What this usually means is that I have to think much harder about the API and how it should be used. You end up thinking more about the “usage” side, and in that case it’s worth it.

Though, a lot of the time I’ll accomplish the same thing by simply writing my initial program the way I want it to read (thus defining the API I want) and then implementing the macro.
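A language-agnostic sketch of that assertion-first workflow, in Python (the `slugify` helper here is a hypothetical example, not something from the thread):

```python
# Test-first sketch: state the desired behaviour as assertions,
# then implement until they hold. `slugify` is a made-up example.

def slugify(title):
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# These assertions were "written first": they pin down the API
# (input and output shapes) before any implementation exists.
assert slugify("Hello World") == "hello-world"
assert slugify("TDD in Practice") == "tdd-in-practice"
```

The point is the ordering: the assertions force a decision about how the function will be used before any energy is spent on how it works.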


The creator of Ruby on Rails criticizes TDD also:
Not experienced enough to be taken seriously either, eh? Really? Are you sure?
Another piece from David:


Yes I am sure. You don’t gain proper testing experience with an attitude like DHH’s, and a big name also does very little for you in that respect.


No offence, but I am pretty sure DHH has way more experience with TDD than you, and I would also say that he had the most influence in making TDD widely popular in the first place. I’d still love to see your production stack to substantiate your claim that you will not use non-TDD code in your projects :slight_smile: .


To append to the links @StefanHoutzager shared, here is the YouTube playlist with a very long discussion called “Is TDD dead?”

Participants: DHH, Kent Beck, Martin Fowler.


And here’s a fight between Coplien and Uncle Bob. Enjoy (don’t forget beer & chips)!


Sounds like TDD should be used mainly for the critical parts of the application? What’s the community consensus on this? :slight_smile:


I really like Michael Hartl’s pragmatic advice on “when to test” from the Rails tutorial, which I think also applies to Elixir/Phoenix. I also think that as you get more experience, it becomes a lot clearer in which situations to apply TDD versus testing afterwards.


I have not seen consensus, and I think TDD (test before) is overall not a good idea. Without reasoning that is only an opinion, but others have already argued the case extensively; see the contents of the links shared in this thread.
I wondered where the TDD focus came from in some communities. For Ruby you can read about it here:
For another small but good old rant, read:

TDD - what is this supposed to deliver again? Oh yeah, that's right, code quality....
"This code has been tested...."

The problem is of course that even the most simple program has so many possible states that a computer the size of the universe, composed of gates the size of atoms which in turn switch states at the speed of light, couldn't compute all the possible states of that program so that you can say you've tested it.

So what does TDD do? Why, it tests the subset of data that the programmer determines, through experience and intuition, is likely to cause trouble - corner cases, pathological input, etc.

And this is different from what programmers have always done ... how again? I know I should know the answer to this b/c TDD priests have been preaching for years now, but I just can't remember it.

The fact is, TDD promises something undeliverable - thoroughly tested code. What it relies on is exactly what it denies the sufficiency of - a programmer's analytic understanding of the code and the ability to understand, without testing, what a program will do. That understanding is just how the data that is unit tested is selected from the universe of data which could be tested.

But as I said, this is just what programmers have always done.

Like it or not, the best and ONLY reason programs work as expected is because there's an experienced developer sitting there who understands how it works.

I know there's a level of management that hates to hear that, because it immediately implies a dependency upon individual developers. TDD found its most sympathetic hearing in the corner offices because it promises to increase the interchangeability of developers. The best interpretation is that TDD attempts to capture the best practices of good developers; a more realistic interpretation is that it churns out a deaf mockery of those practices and imposes a leaden, mechanical and pointless exercise of busy work and wasted time.

TDD is a kind of false assurance or hand-holding for people who are afraid of their code base. At some point, corporations will learn that there IS a talent market worth paying a lot of money to participate in, but it's not at the CEO level - it's at the level of the individual developer. Writing code is not flipping burgers, and the interchangeable "labor" model that applies to McDonald's isn't going to fly in IT.


Related, Gary Bernhardt just put up a talk he gave at Strange Loop in 2015 on [possibly dangerous] programmer ideologies, based in large part on arguments made by some TDD advocates working mainly in dynamic languages vs. arguments made by some proponents of [ML] type systems. As with most DAS stuff, it’s a pretty good watch:

Boundaries by Gary Bernhardt - Ruby Conf 2012

I think one issue that isn’t doing TDD any favours is what Kevlin Henney calls “Test First - dumbed down”: training that focuses on Red/Green and totally bypasses “Refactor”!

Make it work, make it right, make it fast.

Red/Green only pays attention to “make it work”. But good software needs to go further, and that requires skillful refactoring (which goes beyond the automated refactoring facilities included in modern IDEs).
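As a minimal illustration of the full cycle, including the often-skipped third step, here is a hypothetical Python example (the `total_price` function is made up):

```python
# Red/Green/Refactor sketch (hypothetical example).
# Red:    the assertions below fail before any implementation exists.
# Green:  a first, naive version that merely makes them pass.
# Refactor: the same behaviour, restated more clearly.

def total_price_naive(items):
    """Green: the simplest thing that passes the test."""
    total = 0
    for item in items:
        total = total + item["price"] * item["qty"]
    return total

def total_price(items):
    """Refactor: identical behaviour, clearer intent."""
    return sum(item["price"] * item["qty"] for item in items)

cart = [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]
# The test written in the Red step guards the refactoring:
assert total_price_naive(cart) == total_price(cart) == 13
```

Stopping at `total_price_naive` is exactly the “Red/Green, dumbed down” failure mode: the test passes, so the structural cleanup never happens.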

He tweets:

The more I teach TDD, the more I see the Red–Green–Refactor mantra as misleading. It obscures intent and puts wrong emphasis on activities.

suggesting alternative “mantras” like this one (the Deming/Shewhart cycle):

  • Plan: Write test for what you want
  • Do: Make it so!
  • Study: Could anything be improved?
  • Act: Make it so!

or (The 4 R’s):

  • wRite: Create or extend a test for new behaviour - as it’s new the test fails.
  • Reify: Implement so that the test passes
  • Reflect: Is there something in the code or tests that could be improved?
  • Refactor: Make it so!



  • C: Codify intent as test :cat:
  • A: Actualise intent in code :cat:
  • T: Try considering alternatives :cat:
  • S: Select action :cat:


He has more great talks at , with a good sense of humour to boot. Thanks!
This I found worth reading also:


Thanks for sharing that. I enjoyed it. I liked the emphasis on value from test results. I also liked that he mentioned Lean. If you’re interested in Lean and software development, check out value stream mapping.
It’s a great tool for tuning your velocity.

I have never been too worried about code coverage. I started programming on the mainframe back in 1990, when we had punch cards. We did not have a test mainframe or even a test partition, so testing meant stubbing out the dangerous bits until you had a good backup plan in place. I have since done a lot of testing and automation. You always have to balance and adapt the amounts and types of testing for each project. Test the most common, the most complex, the most mission-critical, and the most worrisome scenarios, and you will generally get “good coverage”. :slight_smile:


I still think that Type Driven Design is overall superior to TDD and so forth. Thinking about how the data is represented as it passes through an API is far more important to me than the API itself. Say a function in a module called, oh, Html, takes a string and returns a Safe of string (in OCaml parlance) or a struct Safe... (in C parlance); that is far more descriptive to me than whatever the function’s name is, as I can tell pretty well what it will do based on the types it takes and returns. In fact ‘those’ representations end up driving the API as well, as the API becomes transformers from (instanced) types to types, not a set of calls to ‘do stuff’.

Now of course it is still good to test above that, but most tests are not needed anymore as the type system ‘is’ the test for those.
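A rough Python approximation of the Safe-of-string idea, for readers outside OCaml (the names `SafeHtml`, `escape`, and `render` are invented for illustration):

```python
# Sketch of the "Safe of string" idea, approximated with
# typing.NewType. All names here are hypothetical.
from typing import NewType
import html

# A SafeHtml is "just" a str at runtime, but a type checker treats
# it as distinct: a raw str can't flow to where SafeHtml is required
# without going through `escape`, the one sanctioned constructor.
SafeHtml = NewType("SafeHtml", str)

def escape(raw: str) -> SafeHtml:
    """The only sanctioned way to produce SafeHtml."""
    return SafeHtml(html.escape(raw))

def render(body: SafeHtml) -> str:
    """Accepts only already-escaped content, per its signature."""
    return f"<p>{body}</p>"

assert render(escape("<b>hi</b>")) == "<p>&lt;b&gt;hi&lt;/b&gt;</p>"
```

The types carry the information the function name would otherwise have to: `str -> SafeHtml` already tells you what `escape` does, which is the point being made above.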


Especially when done correctly.


@OvermindDL1 You are correct, but you are venturing into static-typing land – Rust, Haskell, OCaml, Go, etc. Elixir / Erlang only have limited support for enforcing types, and a good chunk of it comes in the form of runtime errors rather than compile errors.

What I found to be working very well for me in Elixir though, are roughly these:

  1. Make your function signatures picky. Use `when` guards and pattern matching as much as possible. This will help you catch a good percentage of the most common bugs. Also, absolutely use `@spec` and Dialyzer! It takes extra time but gosh, the headaches it saves you from. The extra time spent is 100x worth it.
  2. Use mocking, but judiciously. José described it perfectly: Mocks and explicit contracts. This way you actually create contracts for your internal APIs, which helps when you misspell a function name or a parameter name in a keyword list or map. Thinking about contracts also forces you to think harder about the dependencies between your modules, which is always good.
  3. Use property-based testing. That way you are not falling into the trap of wishful thinking that classic unit tests lure you into.
  4. Have integration tests – namely, those that simulate full customer workflows: visiting the home page, clicking a product, adding to cart, signing in, going through all checkout steps, placing the order, then checking how your internal systems move the order through its logical states, and so on.
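Point 2 above translates outside Elixir too. Here is a hypothetical Python analogue of an explicit contract plus a test double (loosely modelled on the behaviour-based approach from José Valim’s post; all names are made up):

```python
# "Mocks and explicit contracts", sketched in Python.
# The contract is an abstract base class; the test double
# implements it honestly instead of stubbing arbitrary calls.
from abc import ABC, abstractmethod

class Mailer(ABC):
    """The explicit contract our code depends on."""
    @abstractmethod
    def send(self, to: str, subject: str) -> bool: ...

class FakeMailer(Mailer):
    """Test double: records sends instead of talking to a server."""
    def __init__(self):
        self.sent = []
    def send(self, to, subject):
        self.sent.append((to, subject))
        return True

def notify(mailer: Mailer, user: str) -> bool:
    # Depends only on the contract, so production and test
    # implementations are interchangeable.
    return mailer.send(user, "Your order shipped")

fake = FakeMailer()
assert notify(fake, "ann@example.com") is True
assert fake.sent == [("ann@example.com", "Your order shipped")]
```

Misspell `send` in either implementation and the contract (or the abstract-method check) complains, which is exactly the misspelling protection described above.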
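And a minimal hand-rolled sketch of point 3: real property-based testing tools (StreamData in Elixir, Hypothesis in Python) add proper generators and shrinking, but the core idea is just asserting a property over many random inputs instead of a few hand-picked examples (the run-length codec here is a hypothetical subject under test):

```python
# Minimal hand-rolled property test. The property: decoding an
# encoding always round-trips back to the original input.
import random

def encode(xs):
    """Run-length encode a list, e.g. [1, 1, 2] -> [(1, 2), (2, 1)]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def decode(pairs):
    return [x for x, n in pairs for _ in range(n)]

random.seed(0)  # reproducible runs
for _ in range(1000):
    xs = [random.randint(0, 3) for _ in range(random.randint(0, 20))]
    assert decode(encode(xs)) == xs
```

A classic unit test would only check the examples you thought of; the random inputs here routinely hit the empty list and long runs of repeats, the cases wishful thinking skips.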

I know I am not saying anything revolutionary or new. Strange, though, how often even seasoned programmers overlook these.

TDD is basically evangelism and as such it must be mostly avoided, in my opinion.

We the programmers forget we are paid to bring additional value. We forget that all too often.


The text you pasted had me laughing and agreeing with it the whole time. Thanks for the share!