I have seen some good examples of best practices for sharing test suites:
- Test a module for a given behaviour
- Test a behaviour that uses super
- How to test a behaviour
- reusing a test suite
I am still curious about the best practice for swapping behaviour implementations through config while allowing tests to run concurrently, primarily from a library perspective. I have managed to get it working using Mox (option 2 below), but I can't shake the feeling that there are better ways to do it. I would love thoughts and input.
Example
Let’s say I have an adapter module
```elixir
defmodule MyApp.Adapter do
  @callback action_1() :: :ok | :error
  @callback action_2() :: :ok | :error
  @callback action_3() :: :ok | :error

  def action_1, do: impl().action_1()
  def action_2, do: impl().action_2()
  def action_3, do: impl().action_3()

  # Falls back to the real implementation unless overridden in config.
  def impl, do: Application.get_env(:my_app, :adapter, AdapterImpl)
end
```
For completeness, it can be used by something like this:
```elixir
defmodule MyApp.AdapterCaller do
  alias MyApp.Adapter

  def call_adapter("action_1"), do: Adapter.action_1()
  def call_adapter("action_2"), do: Adapter.action_2()
  def call_adapter("action_3"), do: Adapter.action_3()
end
```
Let's say three adapters implement the behaviour, namely AdapterOne, AdapterTwo, and AdapterThree.
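For illustration, one of those implementations would just be a module adopting the behaviour; the bodies below are placeholders, not real logic:

```elixir
defmodule AdapterOne do
  @behaviour MyApp.Adapter

  # Placeholder bodies; a real adapter would do actual work here.
  @impl true
  def action_1, do: :ok

  @impl true
  def action_2, do: :ok

  @impl true
  def action_3, do: :error
end
```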
I have referenced quite a few code bases and gone through the ExUnit and Mox docs, but haven't felt “Yeah, this is the way to lock in”.
It is pretty simple to set up generic tests that can be used for all implementations (especially following the above links), but using the application config means I can't run them in parallel, for no reason other than setup, and I want that concurrency.
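To make "generic tests" concrete, what I have in mind is a shared suite in the spirit of the linked posts, roughly like this (the module name and `:adapter` option are mine):

```elixir
defmodule MyApp.AdapterSharedTests do
  # Shared contract tests; each adapter's test module passes itself in
  # via the :adapter option.
  defmacro __using__(opts) do
    quote do
      @adapter Keyword.fetch!(unquote(opts), :adapter)

      test "action_1/0 returns :ok or :error" do
        assert @adapter.action_1() in [:ok, :error]
      end

      test "action_2/0 returns :ok or :error" do
        assert @adapter.action_2() in [:ok, :error]
      end
    end
  end
end

defmodule AdapterOneTest do
  use ExUnit.Case, async: true
  use MyApp.AdapterSharedTests, adapter: AdapterOne
end
```

These are fine to run async because they hit the implementation directly; the concurrency problem only shows up once the config-driven MyApp.Adapter dispatch is involved.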
In trying to figure out which way I want to go, I have explored the following ways to manage it:
1. Set the config during setup
```elixir
setup do
  Application.put_env(:my_app, :adapter, AdapterOne)
end
```
In terms of being able to test, this works fine, but because of the shared state in the config I can't run the tests in parallel: if I'm unlucky, it would swap adapters mid-test.
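If I do go this route, the least I can do is restore the previous value and keep the module non-async; a minimal sketch (module and test names are mine):

```elixir
defmodule AdapterOneConfigTest do
  # Has to be async: false because the application env is global state.
  use ExUnit.Case, async: false

  setup do
    previous = Application.get_env(:my_app, :adapter)
    Application.put_env(:my_app, :adapter, AdapterOne)

    on_exit(fn ->
      # Put back whatever was configured before this test ran.
      if previous do
        Application.put_env(:my_app, :adapter, previous)
      else
        Application.delete_env(:my_app, :adapter)
      end
    end)

    :ok
  end

  test "dispatches through the configured adapter" do
    assert MyApp.AdapterCaller.call_adapter("action_1") in [:ok, :error]
  end
end
```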
2. Use Mox: create a mock adapter, set the env, and mock the returns.
In test_helper.exs:

```elixir
Mox.defmock(MockAdapter, for: MyApp.Adapter)
Application.put_env(:my_app, :adapter, MockAdapter)
```
and in the test:

```elixir
Mox.expect(MockAdapter, :action_1, fn -> AdapterOne.action_1() end)
```
This is my preferred approach because of the parameter checking and other assertions I get for free, but I'm not married to it. It solves the shared-configuration issue, and it looks like I can avoid conflicts with its allowances and some setup tricks I will play with after I've posted this.
Is this generally what is recommended?
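Sketching out the setup I have in mind, it looks roughly like this; `stub_with/2` delegates the mock to a real implementation and `verify_on_exit!/1` gives per-test verification (module and test bodies are mine):

```elixir
defmodule AdapterCallerTest do
  use ExUnit.Case, async: true
  import Mox

  # Expectations are owned by the test process, so these tests can run
  # concurrently; verify them automatically when each test exits.
  setup :verify_on_exit!

  test "call_adapter/1 dispatches action_1 to the configured adapter" do
    expect(MockAdapter, :action_1, fn -> :ok end)
    assert MyApp.AdapterCaller.call_adapter("action_1") == :ok
  end

  test "the mock can delegate straight to a real implementation" do
    # stub_with/2 forwards every callback to AdapterOne for this test only.
    stub_with(MockAdapter, AdapterOne)
    assert MyApp.AdapterCaller.call_adapter("action_2") in [:ok, :error]
  end
end
```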
3. Put Env in TestCases
I avoid this because it's basically the same as 1., but it catches you when you least expect it.
4. Change code to allow dependency injection.
If I change the caller implementation to:
```elixir
defmodule MyApp.AdapterCaller do
  # Defaults for a multi-clause function are declared once, in a head.
  def call_adapter(action, adapter \\ AdapterOne)

  def call_adapter("action_1", adapter), do: adapter.action_1()
  def call_adapter("action_2", adapter), do: adapter.action_2()
  def call_adapter("action_3", adapter), do: adapter.action_3()
end
```
It becomes simple to test concurrently because I'm passing in the adapter I'm testing (see the sketch after this list). There are occasions where this is my go-to approach for general design, even without the defaults, but it's incredibly situational.
- If the behaviour is nested deep in your code base, you get a cascading expansion of contract changes, and every caller has to manage an instance of something that should really be config.
- Designing code to be testable is good, but changing the design of code purely in order to test it is bad.
This approach is in my pocket but only appropriate <40% of the time.
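For comparison with option 2, the tests themselves become trivially concurrent (module name is mine):

```elixir
defmodule AdapterCallerDITest do
  use ExUnit.Case, async: true

  # No shared config involved: the implementation under test is passed in,
  # so this can run alongside anything else.
  test "call_adapter/2 uses whatever adapter it is given" do
    assert MyApp.AdapterCaller.call_adapter("action_1", AdapterOne) in [:ok, :error]
    assert MyApp.AdapterCaller.call_adapter("action_1", AdapterTwo) in [:ok, :error]
  end
end
```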
5. Do unnecessary things with a mock instance
Definitely don't do this, but I'm listing it as it's a thought that crossed my mind.
If you have a mock instance created somewhere:

```elixir
Application.put_env(:my_app, :adapter, MockAdapter)
```
You can create a set of parameters and configs that ensures the correct implementation is returned for each test. While possible, this quickly becomes an unmaintainable mess and should go no further than a thought experiment.
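To make the thought experiment concrete, it would boil down to something like a hand-rolled mock that looks up which implementation the current test wants, which is bookkeeping Mox already does properly via ownership and allowances (the module and lookup mechanism below are hypothetical):

```elixir
defmodule HandRolledMockAdapter do
  @behaviour MyApp.Adapter

  # Every callback repeats the same lookup, every test has to register its
  # choice, and the lookup only works if it runs in the right process.
  @impl true
  def action_1, do: current_adapter().action_1()

  @impl true
  def action_2, do: current_adapter().action_2()

  @impl true
  def action_3, do: current_adapter().action_3()

  defp current_adapter do
    Process.get(:adapter_under_test, AdapterOne)
  end
end
```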
6. Mix test setup and params.
An option I think is valid but have not explored properly: I believe you can set up suites and run sets of tests in isolation.
When do people usually take this approach? Is it more for performance than isolating collisions?
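If I understand the approach, it would mean tagging each adapter's suite and running the groups separately, for example with `mix test --only adapter:two` (tag and module names are mine):

```elixir
defmodule AdapterTwoSuiteTest do
  # async: false because this still leans on the shared app env;
  # the tag just lets the group run in isolation.
  use ExUnit.Case, async: false

  @moduletag adapter: :two

  setup_all do
    Application.put_env(:my_app, :adapter, AdapterTwo)
    :ok
  end

  test "runs against AdapterTwo" do
    assert MyApp.AdapterCaller.call_adapter("action_3") in [:ok, :error]
  end
end
```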
7. Partitions coming in 1.18
Having read through it, this seems like another way you could manage setting environments without conflict.
Changelog
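If the feature in question is the parameterized tests highlighted in the 1.18 changelog, my understanding is that the parameters land in each test's context rather than in the app env, so a sketch could look like this (assuming the `:parameterize` option behaves as the changelog describes):

```elixir
defmodule AdapterParameterizedTest do
  # The module is run once per parameter set, concurrently, and the map is
  # merged into each test's context instead of living in global config.
  use ExUnit.Case,
    async: true,
    parameterize: [
      %{adapter: AdapterOne},
      %{adapter: AdapterTwo},
      %{adapter: AdapterThree}
    ]

  test "action_1/0 honours the contract", %{adapter: adapter} do
    assert adapter.action_1() in [:ok, :error]
  end
end
```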
Would love to know thoughts or better approaches!