Classicist vs Mockist TDD in Elixir


Background

For those of you wondering what classical TDD and mockist TDD are:

Mainly, classical TDD focuses on testing system boundaries and treats your project like a black box. This makes for flexible tests that don’t break when implementation details change.
Mockist TDD tests the interactions between each component of your app, so it is more coupled to implementation details, but it offers stronger, more precise tests.

TDD in Elixir

I am now trying a new approach to testing apps in Elixir. I have read Mocks and explicit contracts in Elixir for the 100th time, and I get the idea of creating interfaces and then having implementations for those. This right here is a classical approach to TDD, if I have ever seen one.

Jose Valim even mentions project boundaries and the need to define those, so I take it that in his mind, all we need to test are the boundaries, which is why we make our applications depend on interfaces (at the boundaries).
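To make the article’s idea concrete, here is a minimal sketch of what I understand it to be (all module names here are made up for illustration): the application depends on a behaviour at the boundary, and concrete implementations are swapped in behind it:

```elixir
# Hypothetical boundary contract, in the spirit of the article.
defmodule Weather do
  @callback temperature(city :: String.t()) :: {:ok, number()} | {:error, term()}
end

# One possible implementation; in tests a stub like this stands in
# for the real HTTP client.
defmodule Weather.Stub do
  @behaviour Weather

  @impl true
  def temperature(_city), do: {:ok, 21.0}
end

# The rest of the app depends on the contract, not on a concrete module.
defmodule Forecast do
  def describe(city, weather \\ Weather.Stub) do
    case weather.temperature(city) do
      {:ok, t} when t >= 20 -> "warm"
      {:ok, _t} -> "cold"
      {:error, _} = err -> err
    end
  end
end
```

Only the boundary (the `Weather` contract) is pinned down; everything inside `Forecast` is free to change without breaking tests.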

This is all fine but…

But I have an issue with this. I know some people love REPL development. I also find it fun, for toy projects. In a real enterprise system, you need a strong suite of tests that can be run automatically. A REPL doesn’t cut it.

So, this leaves me with a severe deficiency - how do I test that my system is working internally as expected?
If I only test the boundaries, I have no guarantees.

A very simple example of this is using a cache. If I only test the boundaries, like Jose states, then I will never know if the answer to my query is coming from a cache or from a computation, since to the outside world, both are the same.

It also makes it impossible to test whether your processes are being registered correctly, or whether your GenServers are behaving correctly instead of just dying, being restarted, and re-running the request with a clean state.
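To make the cache point concrete, here is a minimal sketch (module and table names are invented for illustration): to a caller, a cache hit and a recomputation are indistinguishable, so a pure boundary test can never tell them apart — only peeking inside (here, at the ETS table) can:

```elixir
# Hypothetical cached computation backed by an ETS table.
defmodule CachedSquare do
  @table :cached_square

  def start do
    :ets.new(@table, [:named_table, :public])
    :ok
  end

  # From the outside, a hit and a miss both return the same value.
  def get(n) do
    case :ets.lookup(@table, n) do
      [{^n, v}] ->
        v

      [] ->
        v = n * n
        :ets.insert(@table, {n, v})
        v
    end
  end
end
```

A boundary test can only assert `get(3) == 9` twice; whether the second call actually hit the cache is invisible unless the test inspects the table, i.e. the implementation.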

An obvious solution?

Perhaps the obvious solution here would be to just add more boundaries. Boundaries everywhere!
But this is nonsense at best. If we follow this approach, we will end up with a contract for every component, because we need to test their interactions. This is crazy.

Questions

  1. So why is Elixir so focused on classical testing?
  2. How do you test the interactions within your system?
  3. If you see yourself as a mockist, which tools do you use? Do you use Mox?

The only thing I mock in my application is HTTP calls to external services, and that’s just so I don’t blow the quota or create lots of junk data when running unit tests. Typically I’ll also have infrequently run tests that hit them for real.

I don’t really understand the purpose of mocks in dynamic languages. I think if you care about the code and you don’t have a good type system then you should run the code.

I take it you don’t use mox :smiley:
Also, I see you are using “mock” as a verb and not as a noun. Jose Valim would like to have a word with you :slight_smile:

I’m not a fan of either :slight_smile:


I have been using mox pretty much since Jose released it, and at first I had a problem with the verbosity of the approach, before seeing some actual improvements to the code base - usually when you are replacing one API-connector library with another, or something like that. Since you usually end up introducing a layer between the library and your app, all the code interacting with said library lives in one module, or in modules in the same context, instead of being scattered around your whole code base.

That said, I actually do not use mocks very often, pretty much only for API interactions now that I think about it - and in those cases you can write integration tests that do not use the mock API-connector if you prefer.

I haven’t read the article you linked yet, but I am interested and certainly will when I have some more time - but could you elaborate on what you mean by

If I only test the boundaries, like Jose states, then I will never know if the answer to my query is coming from a cache or from a computation, since to the outside world, both are the same.

I don’t know what kind of system you work with, but in my experience you can use whatever test framework in whatever language you want for integration tests that simulate real user interaction - because they don’t care about the language your system is written in. What or how exactly would you like to test in your system?

My point applies to any system. When you only test a system’s boundaries, you only test what’s coming in and what’s coming out; you treat the system like a black box.

This has several issues (like the cache one, how do you know your cache is really being called?) and this is something I don’t see anyone in Elixir discussing. No books, no resources, no nothing. Everyone has their personal workaround, but I found no standard way of doing anything.

The existing frameworks are rather limited and force you to change production code for the sake of testing, which is something I am not a fan of either.

Also, I believe you are confusing integration tests with end to end tests. I am not simulating users here. Imagine this is a backend server API. A very big one. No users clicking around :smiley:

Ideally, I would like to test the interactions between the components of my system in a standard way, but I find the community is way too divided in that regard to provide a standard solution.

If you want to go all in on not using integration tests, this might be a good resource: https://blog.thecodewhisperer.com/permalink/integrated-tests-are-a-scam

It’s quite an extreme view on testing though, which afaik even the author no longer holds to the extent presented in the talk.

One important piece here is that you never just use a mock and expect that to suffice. You’ll always need to test that any implementation conforms to the expected behaviour as well. If both sides of the contract work in isolation, they are expected to work together as well. You can also keep a small set of integration tests to be absolutely sure. The big difference from not using mocks is that you slice up the places where you test the parts of your system, instead of running every part of it in all of your tests.
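A rough sketch of the “both sides of the contract” idea (module names invented for illustration): the test double and the real implementation are both checked against the same contract, so the double can never drift away from the behaviour it stands in for:

```elixir
# Hypothetical contract shared by a real implementation and a double.
defmodule Parser do
  @callback parse(binary()) :: {:ok, integer()} | :error
end

defmodule Parser.Real do
  @behaviour Parser

  @impl true
  def parse(bin) do
    case Integer.parse(bin) do
      {n, ""} -> {:ok, n}
      _ -> :error
    end
  end
end

defmodule Parser.Double do
  @behaviour Parser

  @impl true
  def parse("42"), do: {:ok, 42}
  def parse(_), do: :error
end

# Shared contract check, run against every implementation.
defmodule Parser.Contract do
  def conforms?(impl) do
    {:ok, 42} == impl.parse("42") and :error == impl.parse("nope")
  end
end
```

With Mox you would get the same shape via `Mox.defmock(ParserMock, for: Parser)` plus `expect/4` in tests, but the principle is the same: the contract is tested on both sides.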

I’m wondering what you’re missing here. I mean, the hard part is having a mock implementation replace the actual one. If you want to test the real interactions between parts, you shouldn’t need to do anything special.

I saw that a few years ago. It made me a true believer of J B Rainsberger :stuck_out_tongue:

If there is a standard way of doing TDD in Elixir, I would love an example. The cache one would be a good starting point.

Dan North has an interesting take on this: Spike and Stabilize.

To minimize opportunity cost:

  • Quickly develop the feature (e.g. REPL-driven development)
  • Put it into production
    • Delete the feature if it isn’t needed
    • Or, if the feature proves useful, expend the effort to make it resilient by adding tests.

A very simple example of this is using a cache.

A cache is an implementation detail. Functional testing doesn’t care, as long as the right value is returned. Caching is tested with non-functional testing - i.e. whether, on average, the value is returned within the given time limits.


That link should always go together with this one:
https://blog.thecodewhisperer.com/permalink/clearing-up-the-integrated-tests-scam

I think I have no idea what you are talking about, so I should probably leave it alone, but I’d really like to understand what you mean.

As I said, I pretty much only use mocks to wrap the communication layer to third-party applications (sorry if this is the wrong terminology, but I can’t think of a better word here). So, for a real-life example, when writing applications that need to communicate with the Shopify API, I use the shopify package for their REST API and Neuron for GraphQL - but I have wrappers that make it easy to swap the REST API for GraphQL and vice versa. Those wrappers have mocks I use for tests, because the data they take and return should not change (it sometimes does, but then I need to do something anyway).
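A simplified sketch of such a wrapper (module names, app name, and config key are all made up for illustration): callers depend on a single facade, and the concrete adapter (REST, GraphQL, or an in-memory test double) is picked via configuration:

```elixir
# Hypothetical facade over interchangeable API adapters.
defmodule Shop.API do
  @callback fetch_product(id :: String.t()) :: {:ok, map()} | {:error, term()}

  # Callers only ever use this; the adapter behind it can be swapped
  # without touching them.
  def fetch_product(id), do: impl().fetch_product(id)

  # Falls back to an in-memory adapter when nothing is configured.
  defp impl, do: Application.get_env(:my_app, :shop_api, Shop.API.InMemory)
end

# Test double conforming to the same contract as the real adapters would.
defmodule Shop.API.InMemory do
  @behaviour Shop.API

  @impl true
  def fetch_product(id), do: {:ok, %{id: id, title: "sample"}}
end
```

In a real project the REST and GraphQL adapters would each implement `Shop.API` too, and `config :my_app, :shop_api, ...` would select one per environment.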

Another usage is Elixir apps communicating with each other, but I treat that the same as the example given: modules for communication, and all they do is send and receive data.
Now, if I want to make sure the communication between the apps works, I write higher-level tests to test exactly that: do they communicate with each other in the right way.

Again, I feel like I have no idea what you are talking about and I don’t want to tell you “this is the way you do it”, I am just explaining where I am coming from so you can point out what my misunderstanding is, or that I am actually not even talking about the same thing :wink:

I believe you are confusing integration tests with end to end tests.

Got me there, I was pretty sure my terminology was off there, thanks for the correction :smile:

Consumers then, same thing :slight_smile:

I think I am missing an example here.

I have used that methodology at other companies (under a different name; we called it pipeline development, which was a bad name, but I am not the one who suggested it!) and I know that the stabilize part, where you refactor and add tests, never really comes, or comes too late - usually when your head is on fire.

Interesting view as it may be, I am not a fan :stuck_out_tongue:

Yep. I also read that a long time ago :smiley:

@Ninigi when you define an API and create a mock that obeys it, you are doing 2 things:

  1. defining the boundaries of the system
  2. testing the communication that reaches those boundaries

That’s a good thing to do imo.

I make the distinction because end-to-end testing (aka E2E) involving users usually requires a whole different set of tools and methods to test effectively, like Selenium and such, which I am not going to touch because those are topics of their own :smiley:

Search for TDD in this forum and read any posts created by me; you should get a good idea.

I’ve taken a bit of time and created an example repo, which includes how I would use Mox in an ideal fashion. There’s nothing used besides plain ExUnit and Mox, and it has unit tests using mocks as well as implementation tests using a global process.


Ultimately that is an organizational issue.

Technical debt, or in this case testing debt, isn’t evil in itself, but it has to be skillfully and effectively managed in order to maximize the benefit and minimize the risk.

In many places technical debt just happens and is ignored until it’s too late.