In one application I have tests that start mocks of two HTTP backends. I like it because my application executes actual HTTP requests in the tests, but it is of course slower than just mocking the interface module.
I made it so that those HTTP servers store the incoming requests in a GenServer, and from the tests I can send a predicate+response function to that server, which is also stored. The server matches predicates against requests and emits a response when a predicate matches. This is nice because, thanks to the storage, you can send the match predicate before or after the actual request is emitted; you do not have to set up the mock beforehand. It makes for a better experience when you write tests “in order”, like this:
- bla bla bla execute an action in the app
- the app calls the HTTP backend
- bla bla bla assert that the backend receives a request with some data, and reply with that data
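The store described above can be sketched as a small GenServer (all names here are illustrative, not the actual implementation). The fake backend blocks on `request/2` until some expectation matches, and expectations can arrive before or after the request:

```elixir
defmodule MockExchange do
  @moduledoc """
  Sketch of a request/expectation store for a fake HTTP backend.
  The fake server pushes decoded requests here; tests register
  predicate+response functions, in any order relative to the requests.
  """
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  # Called by the fake backend: blocks until some expectation matches.
  def request(pid, req, timeout \\ 5_000), do: GenServer.call(pid, {:request, req}, timeout)

  # Called by tests: `fun` returns a truthy response when it matches, else false.
  def expect(pid, fun) when is_function(fun, 1), do: GenServer.cast(pid, {:expect, fun})

  @impl true
  def init(:ok), do: {:ok, %{pending: [], expectations: []}}

  @impl true
  def handle_call({:request, req}, from, state) do
    case pop_response(state.expectations, req) do
      {resp, rest} -> {:reply, resp, %{state | expectations: rest}}
      :none -> {:noreply, update_in(state.pending, &(&1 ++ [{from, req}]))}
    end
  end

  @impl true
  def handle_cast({:expect, fun}, state) do
    # If a stored request already matches, answer it now; otherwise store fun.
    case Enum.split_while(state.pending, fn {_from, req} -> fun.(req) in [nil, false] end) do
      {_skipped, []} ->
        {:noreply, update_in(state.expectations, &(&1 ++ [fun]))}

      {skipped, [{from, req} | rest]} ->
        GenServer.reply(from, fun.(req))
        {:noreply, %{state | pending: skipped ++ rest}}
    end
  end

  # First expectation whose return value is truthy wins; that value is the response.
  defp pop_response(expectations, req) do
    case Enum.split_while(expectations, fn fun -> fun.(req) in [nil, false] end) do
      {_skipped, []} -> :none
      {skipped, [fun | rest]} -> {fun.(req), skipped ++ rest}
    end
  end
end
```

Because unmatched requests are parked in `pending` and answered later via `GenServer.reply/2`, the “expect after the request” ordering falls out naturally.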
Now, as I said, it is slow. I do not have that many tests like that, so I do not care; but if you have many tests, you could mock the backend at the TCP level in a few tests to ensure that your application sends the right packet format/layout, and then in the other tests just mock the module that is the interface to the backend.
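A TCP-level check needs nothing beyond the Erlang standard library. Here is a one-shot sketch with `:gen_tcp` (the request/response bytes are illustrative; a real test would point the application's own client at `127.0.0.1:port` instead of using a raw client socket):

```elixir
# Listen on an ephemeral port so the test never collides with a real service.
{:ok, listen} = :gen_tcp.listen(0, [:binary, active: false, reuseaddr: true])
{:ok, port} = :inet.port(listen)

server =
  Task.async(fn ->
    {:ok, sock} = :gen_tcp.accept(listen, 5_000)
    # recv/3 with length 0 returns whatever arrived; a small request like
    # this one usually fits in a single packet.
    {:ok, bytes} = :gen_tcp.recv(sock, 0, 5_000)
    :ok = :gen_tcp.send(sock, "HTTP/1.1 204 No Content\r\ncontent-length: 0\r\n\r\n")
    :gen_tcp.close(sock)
    bytes
  end)

# Stand-in for "the app sends its request":
{:ok, client} = :gen_tcp.connect({127, 0, 0, 1}, port, [:binary, active: false])
:ok = :gen_tcp.send(client, "GET /health HTTP/1.1\r\nhost: localhost\r\n\r\n")
bytes = Task.await(server)

# Assert on the exact packet layout the app is supposed to produce.
true = String.starts_with?(bytes, "GET /health HTTP/1.1\r\n")
```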
> Also, should I be thinking of packaging this fake implementation as a separate OTP application in an umbrella setup that I can disable in releases for production? Or would it be better to keep this as a separate Elixir application altogether?
For us it is just two modules in `test/support`, and the `elixirc_paths` project config adds that path in the test env (this is the default when you create a Phoenix application). If you are not using Phoenix, you just add it in your `mix.exs`:
```elixir
def project do
  [
    # ...
    elixirc_paths: elixirc_paths(Mix.env()),
    # ...
  ]
end

# Compile test/support only in the test environment.
defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_env), do: ["lib"]
```
You can then start your server(s) from the `setup` of a test. I do not use `start_supervised!` for those though; I'd rather just use `MockModule.start_link`, because I want the test to crash if there is an error in the mock. Except, of course, if you want to simulate a crash of the TCP backend, in order to exercise network error handling in your app.
I suggest developing the mock using TDD too; we have some files called `..._meta_test.exs` where we test the mock itself.
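The pattern looks like this (with a deliberately trivial `EchoMock` standing in for the real backend mock, so the snippet stands alone):

```elixir
# The mock under (meta-)test: it just echoes requests back.
defmodule EchoMock do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)
  def request(pid, req), do: GenServer.call(pid, {:request, req})

  @impl true
  def init(:ok), do: {:ok, nil}

  @impl true
  def handle_call({:request, req}, _from, state), do: {:reply, {:echo, req}, state}
end

# test/my_app/echo_mock_meta_test.exs
defmodule EchoMockMetaTest do
  use ExUnit.Case, async: true

  test "the mock replies to a request" do
    {:ok, pid} = EchoMock.start_link()
    assert {:echo, %{path: "/x"}} = EchoMock.request(pid, %{path: "/x"})
  end
end
```

Testing the mock in isolation like this means a failure in a real test is much more likely to point at the application, not at the test infrastructure.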
I would not suggest making a separate app or a library for that kind of mock. The mock will be highly specific; you need to be able to tune it to make testing easier. For instance, in our tests we do not have a concept of `conn`, headers, or even HTTP statuses. We send request structs; a layer in the application code (an HTTP client) transforms those structs into HTTP requests and sends them, and does the reverse for responses. The mock decodes the HTTP requests back into structs, and we receive those structs in the predicates:
```elixir
be_expect(ctx, GetCapability, fn
  %GetCapability{user_id: ^user_id} = msg ->
    assert msg.capability == Capabilities.some_action()

    CapabilityReport.new(
      capability: Capabilities.some_action(),
      user_id: user_id,
      satisfied: false,
      foo: :bar
    )

  %GetCapability{} ->
    false
end)
```
(In the current implementation the predicate and the response generator are the same function: if the return value is truthy then the predicate matches, and that return value is the response.)
We want that kind of convenience in the tests because we have structs for all requests/responses. So I would not make that a library: the mock needs to know how to create those structs from requests and how to produce an HTTP response from them. If you do not decouple, you can have an implementation tailored to your needs. An independent application for that would be very abstract; basically it would require a lot of “hooks” to encode, decode, generate, etc. your TCP messages. In the end it would only provide the TCP layer itself, and there are already very good libraries for that, starting with the standard Erlang library itself.