And here’s a fight between Coplien and Uncle Bob. Enjoy (don’t forget beer & chips)!
Sounds like TDD should be used mainly for the critical parts of the application? What’s the community consensus on this?
I really like Michael Hartl’s pragmatic advice on “when to test” from the Rails tutorial, which I think also applies to Elixir/Phoenix. I also think that as you get more experience it becomes much clearer in which situations to apply TDD versus testing after.
I have not seen consensus, and I think TDD (test before) overall is not a good idea. Without reasoning that is only an opinion, but others have argued already extensively. See the contents of the links sent in this thread.
I wondered where the TDD focus came from in some communities. For ruby you can read about it here: http://codingitwrong.com/2017/01/14/why-rubyists-test.html
For another small but good old rant read http://dlinsin.blogspot.nl/2008/06/nothing-wrong-with-tdd-right.html
TDD: what is this supposed to deliver again? Oh yeah, that’s right, code quality…
“This code has been tested…”
The problem is of course that even the most simple program has so many possible states that a computer the size of the universe, composed of gates the size of atoms which in turn switch states at the speed of light, couldn’t compute all the possible states of that program so that you can say you’ve tested it.
So what does TDD do? Why, it tests the subset of data that the programmer determines, through experience and intuition, is likely to cause trouble: corner cases, pathological input, etc.
And this is different from what programmers have always done… how, again? I know I should know the answer to this because TDD priests have been preaching for years now, but I just can’t see it.
The fact is, TDD promises something undeliverable: thoroughly tested code. What it relies on is exactly what it denies the sufficiency of: a programmer’s analytic understanding of the code and ability to understand, without testing, what a program will do. That understanding is precisely how the data that is unit tested is selected from the universe of data which could be tested.
But as I said, this is just what programmers have always done.
Like it or not, the best and ONLY reason programs work as expected is because there’s an
experienced developer sitting there who understands how it works.
I know there’s a level of management that hates to hear that, because it immediately implies a dependency upon individual developers. TDD found its most sympathetic hearing in the corner offices because it promises to increase the interchangeability of developers. The best interpretation is that TDD attempts to capture the best practices of good developers; a more realistic interpretation is that it churns out a deaf mockery of those practices and imposes a leaden, mechanical and pointless exercise of busywork and wasted time.
TDD is a kind of false assurance or hand-holding for people who are afraid of their code base.
At some point, corporations will learn that there IS a talent market worth paying a lot of money to participate in, but it’s not at the CEO level; it’s at the level of the individual developer.
Writing code is not flipping burgers and the interchangeable “labor” model that applies
to McDonald’s isn’t going to fly in IT.
Related, Gary Bernhardt just put up a talk he gave at StrangeLoop in 2015 on [possibly dangerous] programmer ideologies, based in large part on arguments made by some TDD advocates working mainly in dynamic languages vs. arguments made by some proponents of [ML] type systems. As per most DAS stuff, it’s a pretty good watch: https://www.destroyallsoftware.com/talks/ideology
I think one issue that isn’t doing TDD any favours is what Kevlin Henney calls “Test First - dumbed down”: training that focuses on Red/Green while totally bypassing “Refactor”!
Red/Green only pays attention to making it work. But good software needs to go further, and that requires skillful refactoring (which goes beyond the automated refactoring facilities included in modern IDEs).
The more I teach TDD, the more I see the Red–Green–Refactor mantra as misleading. It obscures intent and puts the wrong emphasis on activities.
- Plan: Write test for what you want
- Do: Make it so!
- Study: Could anything be improved?
- Act: Make it so!
or (The 4 R’s):
- wRite: Create or extend a test for new behaviour - as it’s new the test fails.
- Reify: Implement so that the test passes
- Reflect: Is there something in the code or tests that could be improved?
- Refactor: Make it so!
or (CATS):
- C: Codify intent as test
- A: Actualise intent in code
- T: Try considering alternatives
- S: Select action
He has more great talks at https://www.destroyallsoftware.com/talks , good sense of humour also. Thanks!
This I found worth reading also: http://www.dalkescientific.com/writings/diary/archive/2009/12/29/problems_with_tdd.html
Thanks for sharing that. I enjoyed it. I liked the emphasis on value from test results. I also liked that he mentioned LEAN. If you’re interested in LEAN and software development, check out value stream mapping.
It’s a great tool for tuning your velocity.
I have never been too worried about code coverage. I also started programming on the mainframe back in 1990, and we had punch cards. We did not have a test mainframe or even a test partition, so testing meant stubbing out the dangerous bits until you had a good backup plan in place. I have since then done a lot of testing and automation. You always have to balance and adapt the amounts and types of testing based on each project. Test the most common, the most complex, the most mission critical, and the most worrisome scenarios, and you will generally get “good coverage”.
I still think that Type-Driven Design is overall superior to TDD and so forth. Thinking about how the data is represented as it passes through an API is far more important to me than the API itself. Say a function in a module called, oh, Html, takes a string and returns a Safe of string (in OCaml parlance) or a struct Safe... (in C parlance). That is far more descriptive to me than whatever the function’s name is, as I can tell pretty well what the function will do based on the types it takes and returns. In fact, those representations end up driving the API as well, as the API becomes transformers from (instanced) types to types only, not a set of calls to ‘do stuff’.
Now of course it is still good to test above that, but most tests are not needed anymore as the type system ‘is’ the test for those.
Especially when done correctly.
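Not from the thread itself, but to make the Safe-of-string idea concrete outside OCaml/C parlance, here is a rough Python sketch (all names here are hypothetical): once escaped HTML is a distinct type, any function accepting it can rely on the invariant, and that whole class of tests largely disappears.

```python
import html
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeHtml:
    """A string that has already been HTML-escaped.

    The only way to obtain a SafeHtml is through escape(), so any
    function accepting SafeHtml can rely on that invariant without
    re-testing it.
    """
    value: str

def escape(raw: str) -> SafeHtml:
    """Escape untrusted input and wrap it in the SafeHtml type."""
    return SafeHtml(html.escape(raw))

def render(fragment: SafeHtml) -> str:
    """Accepts only SafeHtml; passing a bare str is flagged by a type checker."""
    return f"<p>{fragment.value}</p>"

print(render(escape("<script>alert(1)</script>")))
```

Under mypy, calling `render("<b>raw</b>")` with a bare string is rejected at check time, which is exactly the “the type system is the test” effect described above.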
@OvermindDL1 You are correct, but you are venturing into static typing land – Rust, Haskell, OCaml, Go etc. Elixir / Erlang only have limited support for enforcing types, and a good chunk of it comes in the form of runtime errors and not compile errors.
What I found to be working very well for me in Elixir though, are roughly these:
- Make your function signatures picky. Use when guards and pattern matching as much as possible. This will help you catch a good percentage of the most common bugs. Also absolutely use @spec and Dialyzer! It takes extra time but gosh, the headaches it saves you from. The extra time spent is 100x worth it.
- Use mocking but judiciously. Jose described it perfectly: Mocks and explicit contracts. This way you actually make contracts for your internal API which helps when you misspell a function name or a parameter name in keyword list or map. Thinking about contracts also forces you to think harder about dependencies between your modules which is always good.
- Use property-based testing. That way you are not falling into the trap of wishful thinking that the classic unit tests are luring you into.
- Have integration tests: namely those that simulate full customer workflows, like visiting the home page, clicking a product, adding to cart, then signing in, then going through all checkout steps, then ordering, then checking how your internal systems move the order through their logical states, and so on.
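To illustrate the property-based bullet above, here is a minimal hand-rolled sketch in Python using only the stdlib (in Elixir you would reach for a library like StreamData instead, and my_sort is just a hypothetical stand-in for real application code). Instead of a few hand-picked examples, it asserts invariants over hundreds of random inputs:

```python
import random

def my_sort(xs):
    # The function under test (a stand-in for real application code).
    return sorted(xs)

def check_sort_properties(trials=200, seed=42):
    """Assert invariants over many random inputs instead of hand-picked cases."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        out = my_sort(xs)
        # Property 1: output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: output is a permutation of the input.
        assert sorted(xs) == sorted(out)
        # Property 3: length is preserved.
        assert len(out) == len(xs)
    return trials

print(check_sort_properties())  # prints 200 when all properties hold
```

The point is that you state what must always be true of the result, and let generated data hunt for the counterexamples your wishful thinking would never write down.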
I know I am not saying anything revolutionary or new. It is strange, though, how often even seasoned programmers overlook these.
TDD is basically evangelism and as such it must be mostly avoided, in my opinion.
We the programmers forget we are paid to bring additional value. We forget that all too often.
The text you pasted had me laughing and agreeing with it the whole time. Thanks for the share!
I would not be so sure about Go, as it does not have generics.
In my opinion Go has a bad type system.
Check this post
DevTernity 2017: Ian Cooper - TDD, Where Did It All Go Wrong
I find this talk isn’t defending TDD/BDD as it is commonly practiced. Instead it goes back to “the sources” (Test Driven Development: By Example (2002), Refactoring: Improving the Design of Existing Code (1999), Refactoring to Patterns (2005)) to discover the actual intent behind the original practices. In my opinion it ends up in a place aligned with Mocks and explicit contracts.
TL;DR: Test the behaviour of the “public API”, not the implementation details.
It concludes with:
- The reason to test is a new behaviour, not a method on a class
- Write dirty code to get green, then refactor
- No new tests for refactored internals and privates (methods, classes)
- Both Develop and Accept against tests written on a port
- Add Integration test for coverage of ports to adapters
- Add system tests for end-to-end confidence
- Don’t mock internals, privates, or adapters
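Not from the talk itself, but here is a small Python sketch of the “public API, not implementation details” point (the Cart class and its methods are hypothetical): the test exercises only observable behaviour, so it survives any refactoring of the internals.

```python
# Hypothetical example: test observable behaviour through the public API,
# not the private helpers and state that implement it.

class Cart:
    def __init__(self):
        self._items = {}  # internal detail: free to change in a refactor

    def add(self, sku, price, qty=1):
        _, old_qty = self._items.get(sku, (price, 0))
        self._items[sku] = (price, old_qty + qty)

    def total(self):
        return sum(price * qty for price, qty in self._items.values())

# Behaviour-level test: still passes if _items becomes a list, a DB row, etc.
def test_total_reflects_added_items():
    cart = Cart()
    cart.add("book", 10, qty=2)
    cart.add("pen", 3)
    assert cart.total() == 23

test_total_reflects_added_items()
```

An implementation-level test like `assert cart._items == {...}` would break on every internal refactor even though the behaviour is unchanged, which is exactly the coupling the talk warns about.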
He does emphasize that when tests are used to discover a suitable implementation, those tests will have to be deleted in the end, as they are coupled to the implementation details.
He uses the term Duct Tape Programmer quite a bit.
He also references the Fowler, Beck, DHH conversations on Is TDD Dead?.
That would be me in a job interview where I was asked to do TDD in a team of “software craftsmen” (credits:
Yes. I find that even in C++, with care, one can use the type system to eliminate most of the trivial bugs. And of course languages like Scala & Rust (Haskell, OCaml etc) are designed to force that rather than merely allow it.
Coplien’s points are interesting.
#1 I think is the strongest point and is definitely what bothers me the most: TDD absolutely focuses on minutiae, from beginning to end. I remember having the corporate “agile coach” visit our team for a day of training. We were presented a problem, toy in scope but with enough subtleties to make it interesting, and of course I immediately started thinking about data representation & API. After we worked on it a bit, TDD was introduced, and we were instructed to throw out our work so far, start writing tests, then make them pass. At the end of this the instructor crowed, “now, see how different your data structure is”. Well, no, actually, mine wasn’t: a good data design was still good.
If you have a framework designed for a specific domain, and you have a problem in that domain, and the framework is good and fits your problem well, then all you have left to worry about is minutiae. I think that describes a lot of classic RoR applications back in the days before SPA and real-time distributed updates were things. So I think that specific applicability contributed a whole lot to the popularity of TDD.
The problem is, training people in that focus leaves them unable to recognize where the big picture of the framework no longer fits well with the big-picture needs of an app. This is a huge blindspot for developers, leading them to struggle against the design of their tools.
Having said all that, I did find that some training on TDD gave me a new perspective that is useful: thinking about testability when I’m thinking about data types and APIs. It’s not really a different set of criteria, but thinking about it sometimes helps me find an opportunity for decoupling or decomposition which I had missed.
#2 is right, and design by contract is something that really should be taught as part of TDD, but sadly seems not to be. I suspect some of us would object to TDD less if it taught working out the contracts first, rather than jumping right into tests aimed scattershot at whatever minutiae of the API we happen to come up with before we really think through the API. (Put another way, Liskov nailed it before most TDD proponents were born, and people ought to still be reading Abstraction and Specification in Program Development.)
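As a concrete (and entirely hypothetical) illustration of working out the contract first, here is a design-by-contract sketch in Python using plain asserts for the pre- and postconditions; languages with native contract support do this more thoroughly, but the idea is the same: the contract is stated before any test is written.

```python
def sqrt_int(n: int) -> int:
    """Integer square root, with its contract stated up front."""
    # Precondition: the caller must supply a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition: n must be a non-negative int"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the floor of the square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(sqrt_int(10))  # → 3
```

Any test you then write is checking the stated contract rather than scattershot minutiae, and a violated assert points at exactly which side of the bargain was broken.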
#3 as stated is wrong for two reasons, yet still correct in spirit. First, TDD does find and correct errors in the application code; second, most test errors (in my experience) simply fail to find application errors rather than introduce new ones. So it does not double the errors in your running code. However, the point still stands that programmers will commit errors in test code, and this will necessarily limit the effectiveness of TDD to less than what its evangelists claim.
I’m loving having an IDE that runs dialyzer in the background and provides me with near real-time error listings. THAT makes it painless to start using @spec!
(VSCode…, but there are plugins for other editors that do it as well…)
Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design.
Notice that Rich Hickey seems to be speaking as if he’s working solo.
When working solo or with a very small team, perhaps an extensive suite of tests is less crucial than when you’re working on a very large codebase with several teams and many programmers, where no individual knows more than a relatively small part of the codebase.
Having tests might also be useful when you’re working on a project where programmer turnover is high, where the tests can function as a sort of documentation and safety net for new programmers.