Struggling with the Mock noun


I don’t want this to come across as a rebuttal to Jose Valim’s advice against mocking as a verb; I’m just struggling with it.

The problem I have is coverage.

All the examples I’ve seen that use a Mock agent meeting an @behaviour contract are basically one-test scenarios, covering a single use case. Once the behaviour of the mock is written, it seems the proposal is to create a separate Mock module for each test case just to change the mock’s output, which seems… kind of batshit.

For instance, if I have a mock HTTP client and I want to test how code responds to the 200, 400, and 500 response codes: my mock HTTP client would have one .get() method that always returns whatever I define as the result. What exactly is the proposal for testing a .get() consumer to validate that it responds to the various conditions correctly?

I’m new to Elixir, coming from C#, NodeJS, and Golang, so all of my testing experience has used mocks, and being told by the language creator that it’s wrong makes it really tough to get my head around how to inject fixed values so that the unit under test behaves how I want under specific circumstances. Being new to functional programming is probably the source of my confusion, but some good examples of Jose’s mocking technique covering multiple behaviours for a single function would help me.

High level testing in isolation

The verb vs noun is about not mocking the API you are calling but instead passing the thing you want to mock. Imagine that you have a module Caller that invokes Callee.my_function/1:

defmodule Caller do
  def some_fun(arg) do
    Callee.my_function(arg)
  end
end

Instead of doing something such as mock(Callee, :my_function, 1), you want to pass Callee as argument:

defmodule Caller do
  def some_fun(arg, callee) do
    callee.my_function(arg)
  end
end

Dependency injection and all that jazz. From what I heard, that’s pretty much how testing goes in C# too. There are probably libraries that help with the creation of the “callee” during tests in Elixir, and it is fine to automate the creation of mocks; it is just not fine to change Callee on the fly.
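To make the noun-not-verb idea concrete, here is a minimal sketch (all module names are made up): the test double is just another module with the same function, passed in as an argument, so nothing is replaced globally.

```elixir
defmodule RealCallee do
  # The production implementation.
  def my_function(arg), do: {:ok, arg}
end

defmodule FakeCallee do
  # A test double with the same signature.
  def my_function(_arg), do: {:error, :not_found}
end

defmodule Caller do
  # The collaborator arrives as an argument (defaulting to the real one),
  # so a test simply passes the fake instead of rewriting RealCallee on the fly.
  def some_fun(arg, callee \\ RealCallee) do
    callee.my_function(arg)
  end
end
```

`Caller.some_fun(1)` goes through the real module; `Caller.some_fun(1, FakeCallee)` goes through the fake.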

The use of behaviours is to make explicit to you and your teammates which parts of the external behaviour you rely on. We want to minimize dependencies on external code, and a good way to ensure it will grow healthy is by making it all explicit.
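Applied to the 200/400/500 question above, one behaviour plus one pattern-matching double can cover several scenarios, with no separate mock module per test case (the names below are invented for illustration):

```elixir
defmodule HTTPClient do
  # The explicit contract: the only part of the client we depend on.
  @callback get(url :: String.t()) :: {:ok, integer()} | {:error, term()}
end

defmodule FakeHTTPClient do
  @behaviour HTTPClient

  # One double, several conditions, selected by pattern matching on input.
  @impl true
  def get("/ok"), do: {:ok, 200}
  def get("/bad-request"), do: {:ok, 400}
  def get("/boom"), do: {:ok, 500}
end
```

The consumer under test receives `FakeHTTPClient` as its client and is exercised once per URL, one assertion per status code.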


I’ve used dependency injection in one of my recent projects (written in NodeJS though), and DI makes testing fun and easy. It required a little extra work, but I think it pays off in the long run.

Do you know about any open source Elixir projects which are using this pattern?


I think dependency injection as it’s used in traditional OOP languages doesn’t map directly to Elixir, at least not in the conventional ways. In an OOP language a class can be passed dependencies during object construction and hold on to them, so that the public methods don’t need +1 arity for each dependency (unless you want them to). The creator, rather than the caller, gets to determine the concrete implementation of the dependency. This affords you the convenience of not having to pass a dependency to each one of your methods (no Twitter.get_tweet(id, http_client), Twitter.get_user(id, http_client), Twitter.get_recent(id, http_client), etc), but you are still able to make the decision about the specific implementation at runtime.

If you couple this with a DI container, you are able to have all the dependencies wired up in a composition root on app init. This gives you a single place in the code where you can determine all of the concrete implementations, and you don’t have to worry about choosing the dependencies at the call site or resolving nested dependencies. For example: if you want to use Foo, and Foo depends on Bar, and Bar depends on Baz, you don’t need to do foo = new Foo(new Bar(new Baz())); you can just do container.get(Foo), and Foo will be created using whatever implementations for Bar and Baz are defined in the container.

I think that containers and dependency injection go hand-in-hand in OOP languages. I find the container approach nice, and if used properly you can have a very decoupled and modular system that is easy to work with, but I don’t think the ways in which it’s currently utilized in Java, C#, PHP, etc. can be applied to Elixir, at least not without giving up some convenience. Dependency injection as a concept is certainly doable, but I feel like you would end up carrying around a lot of the dependencies with you.
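For what it’s worth, the closest common Elixir analogue of a composition root is the application environment: the concrete module is looked up in one place, so call sites don’t carry the dependency around. A rough sketch, with all names invented:

```elixir
defmodule FakeHTTP do
  # Stand-in implementation; in production you would configure the real client.
  def get(path), do: {:ok, 200, path}
end

defmodule Twitter do
  # Single lookup point, so call sites stay `Twitter.get_tweet(id)`
  # rather than `Twitter.get_tweet(id, http_client)`.
  defp http_client do
    Application.get_env(:my_app, :http_client, FakeHTTP)
  end

  def get_tweet(id), do: http_client().get("/tweets/#{id}")
end
```

A line like `config :my_app, :http_client, FakeHTTP` in config/test.exs then plays the role the container registration plays in C# or Java.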


Well, even consider this in OCaml: functors and first-class modules allow you to ‘refine’ modules. That is how, for example, the Map module works: you refine it to a key type by doing Map.Make(String) (or whatever key type you want), and it returns another module that you can then give a name if you want: module StringMap = Map.Make(String). This makes testing very easy as well, while remaining fully immutable and functional.

Erlang/Elixir has such a feature too via tuple calls, you can do something like:

$ iex
Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:10] [hipe] [kernel-poll:false]

Interactive Elixir (1.6.0-dev) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> defmodule MyThing do
...(1)>   def new(callback \\ fn x -> x end), do: {__MODULE__, callback}
...(1)>   def doSomething(x, {__MODULE__, callback}), do: callback.(x)
...(1)> end
{:module, MyThing,
 <<70, 79, 82, 49, 0, 0, 4, 236, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 110,
   0, 0, 0, 10, 14, 69, 108, 105, 120, 105, 114, 46, 77, 121, 84, 104, 105, 110,
   103, 8, 95, 95, 105, 110, 102, 111, 95, ...>>, {:doSomething, 2}}
iex(2)> normal_thing = MyThing.new()
{MyThing, #Function<0.64568805/1 in MyThing.new/1>}
iex(3)> normal_thing.doSomething(42)
42
iex(4)> logged_thing = MyThing.new(&IO.inspect(&1, label: :MyThingLog))
{MyThing, #Function<6.99386804/1 in :erl_eval.expr/5>}
iex(5)> logged_thing.doSomething(42)
MyThingLog: 42
42

/me really really REALLY thinks that tuple-calls are way way way under-appreciated; they are the BEAM’s version of first-class modules and open up a whole range of immutable and functional patterns that are extremely difficult or very wordy to do otherwise


Yeah, but parameterized modules have been officially deprecated :frowning: And tuple calls were just an implementation detail of them, which may vanish at any time…


Parameterized modules were stupid, it is good they are gone.

As I recall, the announcement of the removal of parameterized modules said that tuple calls were going to stay and be supported; has that changed in the last 10 years? They really should stay, they are so very useful.


I never read that announcement, just remembering a discussion here partially :wink:

If they said tuple-calls will persist, I’m fine with them. Also, we all know that OTP only rarely introduces such massively breaking changes…


Elixir itself is planning to detect tuple calls (which would add a slight overhead on every single function call) in Elixir 2.0. They’ve also been trying to push OTP to disable tuple calls unless a module annotation exists to enable them ‘just’ on a specific module (which bugs the crap out of me, as you’d then have to enable it just about everywhere when using them; that just seems hateful to those who like the syntax…).

(yes yes, I know about people calling things like ok tuples and getting weird errors, but that is only because Elixir’s map-calling syntax uses dot instead of something better, like just allowing bracket calls on maps/structs and so forth, or using the OCaml-y # like someMap#someField, or something else)


Which is going to be removed on Erlang 21.


Blah, removing such a useful feature. First-class modules are gone with no way to replace their functionality (back to passing witnesses everywhere…). >.<


Jose - Thank you so much for your informative response. I’ve implemented tests as you suggest, and I am definitely seeing what you’re talking about. This is incredibly powerful. I’m sold! Thank you very much for guiding us in the right direction! :heart:

I use @service_name pointed at a config value for injection and then return fake responses from the Stub with pattern matching. It works beautifully!
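A sketch of the pattern described above (all names here are hypothetical, not drapermd’s actual code): a module attribute is filled from config, and the stub returns fake responses via pattern matching.

```elixir
defmodule PaymentStub do
  # Fake responses selected by pattern matching on the arguments.
  def charge(%{amount: 0}), do: {:error, :invalid_amount}
  def charge(%{amount: amount}), do: {:ok, %{charged: amount}}
end

defmodule Payments do
  # In config/test.exs: config :my_app, :payment_service, PaymentStub
  # In config/prod.exs: config :my_app, :payment_service, RealPaymentService
  @service_name Application.get_env(:my_app, :payment_service, PaymentStub)

  def charge(params), do: @service_name.charge(params)
end
```

Note the lookup happens once, when the module attribute is evaluated, so the config must be set before the module compiles.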


drapermd, it works beautifully when you have simple tests, but what about testing multiple components at once, where the stub you want to use has to be passed to a component two levels below your SUT? Will you have to pass dependencies through every function? If so, that clutters all the function signatures, unfortunately.

Is there any elegant way of accessing (injecting) dependencies?

Having a single stub instance defined at the config level is not convenient, because then the only way to have different behavior for different test cases running in parallel is pattern matching on function arguments, which might not be enough. For example, how do you do that when a function with no arguments is supposed to return different values depending on the test case?



At Plataformatec we recently solved a similar issue by using the Registry. In this case, we were using bypass, but the URL configuration was set deep inside another component, and surfacing it would have required a major refactoring that could not be done at the moment.

If you are not familiar with bypass, it creates a tiny server on demand per port. We used the Registry to associate ports to test processes and read them back when necessary. The first step was to change the code that returns the URL of the external service to also allow anonymous funs. Before it was:

def service_url do
  Application.fetch_env!(:my_app, :service_url)
end

And now it is:

def service_url do
  case Application.fetch_env!(:my_app, :service_url) do
    string when is_binary(string) -> string
    fun when is_function(fun) -> fun.()
  end
end

Now in our test helper we start the registry:

Registry.start_link :unique, BypassRegistry

And changed the service url to be an anonymous function that does a registry lookup and then returns the url:

Application.put_env(:my_app, :service_url, fn ->
  case Registry.keys(BypassRegistry, self()) do
    [] -> raise "no bypass port registered for #{inspect self()}"
    [port] -> "localhost:#{port}"
    [_ | _] -> raise "multiple ports registered for #{inspect self()}"
  end
end)

Since all of the tests run on top of the function above, you can use async: true. The last change is to wrap Bypass.open so it uses an unassigned port:

def bypass_open(opts \\ []) do
  Bypass.open([port: bypass_port()] ++ opts)
end

defp bypass_port() do
  port = Enum.random(3000..65000)
  case Registry.register(BypassRegistry, port, :this_value_is_not_used) do
    {:ok, _} -> port
    {:error, {:already_registered, _}} -> bypass_port()
  end
end

By using the registry, when the test process dies, the associated port-process entry is automatically removed and we are free to re-use it in the next test. This is very similar to how the Ecto sandbox works.

Even if you are not doing resource allocation in your tests, you can think of a similar mechanism, where the module you are going to invoke is read from the registry.
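A hypothetical sketch of that mechanism (all names invented): each test process registers the module it wants under its own pid, and lookups fall back to the real implementation when nothing is registered.

```elixir
# Start once, e.g. in test/test_helper.exs.
{:ok, _} = Registry.start_link(keys: :unique, name: MockRegistry)

defmodule FakeClient do
  def get(_url), do: {:ok, 200}
end

defmodule Dispatcher do
  # Which module did the current process register? The entry disappears
  # automatically when the test process dies, just like the ports above.
  def impl(default) do
    case Registry.lookup(MockRegistry, self()) do
      [{_pid, module}] -> module
      [] -> default
    end
  end
end

# Inside a test process:
{:ok, _} = Registry.register(MockRegistry, self(), FakeClient)
{:ok, 200} = Dispatcher.impl(RealClient).get("/users")
```

Because registrations are keyed by the calling process, concurrent tests cannot see each other’s doubles, which is what preserves async: true.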


josevalim, Registry with anonymous functions looks interesting; I will give it a try. The only thing that is not nice here, imho, is that production code has to contain test-related logic, I mean the anonymous-function invocation. Thank you for your response.


Here is a quick snippet of a mock library that could be built on those principles:

It is very raw but the core ideas are there:

  1. You can only create mocks based on behaviours (no ad-hoc mocks)
  2. Avoid dynamic generation of modules during tests (which is expensive). You do it sparingly, typically in your test helper
  3. You retain async: true since everything is process based

There are some missing pieces. For example, we don’t verify expectations and we don’t allow a maximum/minimum count. Those should be trivial to implement unless you want auto-verification. Auto-verification requires on_exit callbacks and those happen after the test process is dead, which means the registry data is lost.


I found this library: which looks interesting.

It does not override existing modules; instead it dynamically generates mocks on the fly, and it also verifies mocks against callbacks defined in behaviours, similar to the approach in @josevalim’s gist.


@madshargreave it does define modules dynamically though, which breaks rule 2 above.


Hi everyone,

I have pushed a very tiny mock library called Mox to GitHub + Hex:

It is based on the gist and the guidelines I have shared above. Please give it a try and let me know of any feedback you may have.
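For anyone skimming, typical Mox usage looks like this (the behaviour and module names are illustrative, not from the library itself):

```elixir
# Mox mocks are always defined against a behaviour (rule 1 above).
defmodule TwitterAPI do
  @callback get(url :: String.t()) :: {:ok, integer()} | {:error, term()}
end

# Done once, typically in test/test_helper.exs, so no modules are
# generated per test (rule 2 above).
Mox.defmock(TwitterAPIMock, for: TwitterAPI)

defmodule TwitterTest do
  use ExUnit.Case, async: true
  import Mox

  # Fail the test if an expectation was set but never exercised.
  setup :verify_on_exit!

  test "handles a 500 response" do
    expect(TwitterAPIMock, :get, fn "/tweets/1" -> {:ok, 500} end)
    assert {:ok, 500} = TwitterAPIMock.get("/tweets/1")
  end
end
```

Expectations belong to the test process, which is why async: true is retained (rule 3 above).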


A nice feature to copy from syringe is to allow the mock to be accessed from multiple processes:

defmodule MyServerTest do
  use ExUnit.Case, async: true
  use Mocker

  test "should outsource work to MyWork module in the GenServer process" do
    {:ok, pid} = MyServer.start_link

    # now that we're operating on a different pid we need to notify the
    # mocker to work within that pid
    mock(MyWork, pid)
    # now you can intercept the functions as before
    intercept(MyWork, :handle_work, [0], fn(_) -> 100 end)

    assert MyServer.increment() == 100
    assert was_called(MyWork, :handle_work, [0]) == once # truthy
  end
end