How to do TDD correctly?

Background

After asking for help in the Elixir community about tools to do TDD, I was presented with a conference video from a community member I have great respect for.

TDD - Where did it all go wrong

The title is catchy, but don’t be afraid - this talk actually defends TDD.
In it, the author pinpoints the major problems with TDD and then proposes solutions to those problems.

He also explains how people are doing TDD wrong and how they can improve it.

Looks nice, why are you confused then?

However, one of the things I took notice of is that he mentions that in TDD the unit under test is the feature: the thing you should test is the feature.

Some may know this better as Test the API, not the implementation. You may create tests for the implementation, but delete them afterwards.

Looks straightforward, right?

Wrong. If I take the feature (or behaviour, or what have you) as the unit I want to test - instead of a module or a function or whatever it is you defined to be a logical unit in your code - then what I actually end up with is a suite of integration tests.

See it this way: if all you test is the public API of your webpage, then all you are really writing is a bunch of integration tests that traverse your entire system given a query (or worse if you mix queries with commands, but that’s another story).

Integration tests are a scam, and in fact most people seem to say that a common problem is that we have too many of them and too few unit tests.

But if a unit test is only testing the end result of everything, then it is testing the whole system. This is mind-blowingly confusing to me.

Question

  • How do you do TDD right then?
  • If modules, classes and functions are not units, what is?
  • How to not fall into the dark abyss of the inverted pyramid of testing?

This may be of interest in the discussion (along with the rest of that thread): Tests that rely on private methods

1 Like

For me it is about creating reliable code. Unit tests are one way of achieving that.

I agree with you that testing features is not unit testing (or at least in most cases not). A feature is usually quite complex and involves many units. For me TDD in functional programming is easy.

The unit is a function, which should in most cases be referentially transparent (pure). A function has an input and an output, and because it is pure it is easy to test.

For TDD I write a test for the function, write the implementation of the function and when it passes I am done.
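That cycle - test first, implement until green - can be sketched as follows. The example is in Python for brevity (the thread is about Elixir, but the idea is language-agnostic), and `slugify` is a hypothetical function invented for illustration:

```python
import re

# Step 1: write the test first. It fails initially because slugify
# does not exist yet; that failing test drives the implementation.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# Step 2: write the pure function until the test passes.
def slugify(text: str) -> str:
    """Lowercase, trim, and collapse runs of non-alphanumerics into '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.strip().lower()).strip("-")

# Step 3: green - done.
test_slugify()
```

Because the function is pure, the test needs no setup, no mocks, and no database: input in, output out.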

I don’t care if this is testing implementation or not. In most cases I am in fact testing implementation. What I am not doing, however, is keeping every test. I quite often remove them or refactor them into tests that exercise the API or public functions.

So TDD for me is a temporary tool to make sure a function does what it is supposed to do. I don’t test all functions because, this being functional programming, most of them are trivial and can be covered by integration testing. (I should say I don’t agree with Elixir’s way of preventing the testing of private functions, but in practice it doesn’t matter much.)

On the other hand I don’t agree that integration tests are a scam, but I believe that if you have to mock or come up with too many workarounds to get your integration tests to work, it is a sign your code is not properly designed.

Someone in another thread suggested porting Erlang’s Common Test application to Elixir, and this would be great. Common Test shines at integration testing, or testing things from the “outside”.

2 Likes

I think both of your linked resources (if I recall them correctly) are actually of the same opinion about that. Software provides a public interface to the things you can do with it. That interface must work, therefore it needs to be tested. How it does that is an implementation detail.

But as you and J. B. Rainsberger said, lots of integration tests over the whole system don’t scale very well. That’s where you split up your project. What if, for example, you don’t just think about the public interface in the form of a JSON API? You could “add” another interface for your core domain layer in addition to the JSON API layer. Now with that additional interface the web API could just work off of stubbed data, while the core domain interface works with the real db. Suddenly the depth of the web API tests is roughly halved and the core is tested separately.

There would be tests that the web API asks for data the correct way via that core domain interface, and there would be tests that the core domain layer responds with the correct information. To check that both actually work with each other there would be a few integration tests, but the bulk of tests stay within the boundaries of either the core domain layer or the web layer. This can be repeated like a Russian doll.
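The “core domain interface” idea above can be sketched like this (Python for illustration; `ArticleRepo`, `StubArticleRepo`, and `render_article_json` are hypothetical names, not anything from the linked talks):

```python
from typing import Protocol

class ArticleRepo(Protocol):
    """The core domain interface the web layer depends on."""
    def fetch_title(self, article_id: int) -> str: ...

class StubArticleRepo:
    """Stub implementation used by web-layer tests: no database involved."""
    def fetch_title(self, article_id: int) -> str:
        return f"stub-title-{article_id}"

def render_article_json(repo: ArticleRepo, article_id: int) -> dict:
    """Web layer: asks the core for data via the interface only."""
    return {"id": article_id, "title": repo.fetch_title(article_id)}

# The web-layer test works entirely off the stub; a separate suite tests
# the real repo against the db; a few integration tests glue them together.
assert render_article_json(StubArticleRepo(), 7) == {"id": 7, "title": "stub-title-7"}
```

The web-layer tests never traverse the whole system, so they stay fast and stay unit-sized, while the real repository gets its own focused tests.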

The tricky bit, imho, is finding the correct granularity of how much and where to split up, so that you neither hurt yourself with slow tests nor end up with so many “public interfaces” to maintain that you can no longer comfortably change anything in your project.

1 Like

Interesting trick and discussion. Thanks!

I used to do and think the same!
What’s confusing to me is that the speaker of the talk explicitly goes against this. So, what am I supposed to test?

The name of the talk is polemical on purpose. Yet I strongly advise you to check it out; it is by far one of the best I have seen. Integration tests are not evil - you definitely can have some and they will be great - but they ought to be used with care because of their maintenance cost. Once you watch the talk everything will make sense, and overall I agree with your point of view.


Would really love some input from @peerreynders on this topic :stuck_out_tongue:

Integration tests are a scam, and in fact most people seem to say that a common problem is that we have too many of them and too few unit tests.

First of all, J. B. Rainsberger has become a lot less pugnacious about the issue recently, which I think comes through in

J. B. Rainsberger - The Well-Balanced Programmer

where he arrives at his own version of Alistair Cockburn’s Hexagonal Architecture (and Kevlin Henney has some words on dependency inversion).

My interpretation of his discussion about his Universal Architecture is:

  • Tests staying entirely in the “Happy Zone” (HZ) aren’t integration tests.
  • Only interactions with the “Horrible Outside World” (HOW) and the “DMZ” are mocked.
  • By design HZ -> DMZ dependencies are to be avoided and replaced with (HZ -> interface) <- DMZ
  • By design HZ -> HOW dependencies are to be avoided and replaced with (HZ -> interface) <- DMZ Adapter -> HOW. (A.K.A. Narrowing API/Pattern of Usage API.)
  • Integration tests are necessary for:
    • HZ -> DMZ (avoid at all cost)
    • HZ -> HOW (avoid at all cost)
    • DMZ -> HOW
    • DMZ Adapter -> HOW
  • By design you want the DMZ and the “Narrowing API/Pattern of Usage API” to be as small/narrow as possible.

i.e. effective testing is about dependency management which is a design activity.
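The (HZ -> interface) <- DMZ adapter shape can be sketched as plain dependency inversion. This is my own illustration, not code from the talk, and all names (`Clock`, `SystemClock`, `FixedClock`, `is_expired`) are hypothetical:

```python
from typing import Protocol
from datetime import datetime, timezone

class Clock(Protocol):
    """Interface owned by the Happy Zone; adapters implement it."""
    def now_utc(self) -> datetime: ...

def is_expired(clock: Clock, deadline: datetime) -> bool:
    """Happy Zone logic: depends only on the interface, never on the HOW."""
    return clock.now_utc() > deadline

class SystemClock:
    """DMZ adapter: the only place that touches the Horrible Outside World."""
    def now_utc(self) -> datetime:
        return datetime.now(timezone.utc)

class FixedClock:
    """Test double: keeps Happy Zone tests out of integration-test territory."""
    def __init__(self, t: datetime):
        self.t = t
    def now_utc(self) -> datetime:
        return self.t

deadline = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert is_expired(FixedClock(datetime(2024, 6, 1, tzinfo=timezone.utc)), deadline)
assert not is_expired(FixedClock(datetime(2023, 6, 1, tzinfo=timezone.utc)), deadline)
```

Only `SystemClock` ever needs an integration test; everything in the Happy Zone is tested against the interface.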

Gary Bernhardt’s FauxO (a play on OO) in Boundaries also relates to this. Given the functional core, imperative shell partitioning, any test that runs only functional-core code, no matter how much code that may be, isn’t considered an integration test.
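A minimal sketch of that partitioning, in Python (the example and its names, `split_bill` and `main`, are mine, not Bernhardt’s):

```python
# Functional core: pure decision logic, trivially testable in isolation.
def split_bill(total_cents: int, people: int) -> list[int]:
    """Split a bill evenly, distributing any remainder one cent at a time."""
    base, remainder = divmod(total_cents, people)
    return [base + (1 if i < remainder else 0) for i in range(people)]

# Imperative shell: does all the I/O, delegates every decision to the core.
def main() -> None:
    total = int(input("Total (cents): "))
    people = int(input("People: "))
    for share in split_bill(total, people):
        print(share)

# A test that exercises only the core is not an integration test,
# however much core code it runs:
assert split_bill(1000, 3) == [334, 333, 333]
```

The shell stays thin enough that a handful of end-to-end tests cover it, while the core carries the bulk of the fast, pure tests.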

Tests aren’t an end in themselves - the way I measure their value is whether or not they enable “(fearless) refactoring‡ with confidence” while at the same time not getting in the way of refactoring.

  • A test against a published interface (IEEE Software: Public versus Published Interfaces) ensures that I’m not breaking things while I’m refactoring and it’s not going to get in the way because a published interface shouldn’t change anyway.
  • A test concerned with implementation details is going to get in the way of refactoring when refactoring is changing the implementation (implementation isn’t the same as behaviour).

For me the core value has always been to design a system to be testable so that I can always refactor with confidence - i.e. testability (for the purpose of refactoring) being a core value of good design. While tests can help you discover a design that is easier to test, that doesn’t mean that testing will inevitably lead to a good design.

‡to reduce volatility in the marginal cost of features.

6 Likes

Interesting reading as usual !

At peace again

However, I think the following blog post put my mind at ease again. I now think I understand where all the confusion comes from.

Which team do you belong to?

Detroit-school TDD is the classical one, created by Kent Beck and others in the 90s. This type of TDD tries to maximize the regression safety net introduced by tests, and does that by minimizing the use of test doubles. However, this will inevitably lead to redundant coverage (and all the problems that come with it). Also, the design feedback is weaker when practicing this type of TDD.

In London-school TDD, we isolate all the dependencies and focus only on the subject under test. For this reason, followers of this school usually think that these are the TRUE unit tests, since you’re only exercising the code that’s under test. This type of TDD usually yields stronger design feedback, but the tests have to be complemented with integration tests to ensure everything is glued together as it should be.
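The contrast between the two schools can be shown on one tiny example. This is my own sketch in Python, with hypothetical names (`Mailer`, `OrderService`):

```python
from unittest.mock import Mock

class Mailer:
    """Real collaborator, kept in-memory here so the example is self-contained."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

class OrderService:
    """Subject under test: notifies the mailer when an order is placed."""
    def __init__(self, mailer):
        self.mailer = mailer
    def place(self, email: str) -> None:
        self.mailer.send(email, "order confirmed")

# Detroit (classicist): use the real collaborator, assert on resulting state.
mailer = Mailer()
OrderService(mailer).place("a@b.c")
assert mailer.sent == [("a@b.c", "order confirmed")]

# London (mockist): isolate the subject, assert on the interaction itself.
mock_mailer = Mock()
OrderService(mock_mailer).place("a@b.c")
mock_mailer.send.assert_called_once_with("a@b.c", "order confirmed")
```

Same behaviour under test, but the Detroit test breaks only when the observable outcome changes, while the London test pins down exactly which call the subject makes on its collaborator.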

It turns out I have been practicing London-style TDD instead of Detroit-style TDD. I have been advocating for London-style TDD from the start without knowing it. Now it all makes sense.

Is Detroit style TDD bad?

I don’t know. I am not well informed enough to form an opinion, but perhaps some of you can share yours?

Share articles and talks about it with your fellow programmers!