HTTP client E2E testing

I have a project where we make HTTP requests to public domains.

I am currently using Mox for testing and following its recommended practice of having a behaviour for the HTTP client and an implementation:

defmodule HttpClient do
  @callback request(atom(), binary(), binary(), keyword(), keyword()) ::
              {:ok, HttpClient.Response.t() | HttpClient.MaybeRedirect.t()}
              | {:error, any()}
end
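
For context, the Mox wiring around such a behaviour typically looks something like this. The module names, the argument order of `request/5`, and the fields on `HttpClient.Response` are assumptions here, not something from the original code:

```elixir
# test/test_helper.exs — define a mock module that implements the behaviour
# and make the application pick it up in the test environment.
Mox.defmock(HttpClientMock, for: HttpClient)
Application.put_env(:my_app, :http_client, HttpClientMock)

# A test that stubs a canned response (MyApp.Fetcher is a hypothetical caller).
defmodule MyApp.FetcherTest do
  use ExUnit.Case, async: true
  import Mox

  setup :verify_on_exit!

  test "handles a 200 response" do
    expect(HttpClientMock, :request, fn :get, _url, _body, _headers, _opts ->
      # The struct fields below are illustrative — this is exactly where
      # mocked data can drift from what the real server returns.
      {:ok, %HttpClient.Response{status: 200, body: ~s({"ok": true})}}
    end)

    assert {:ok, _} = MyApp.Fetcher.fetch("https://example.com")
  end
end
```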

While it works great for tests with mocked data, I still find that these tests are double-edged: they depend entirely on the correctness of the data and its format. We have already had cases where the tests would pass and the application would fail at runtime.

I was wondering whether replacing this kind of mocked test with a real local HTTP server (similar to how Ecto uses its sandbox) that exhibits real properties would be better and make for cleaner, easier-to-manage tests.
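
For reference, the "real local HTTP server" approach is roughly what Bypass provides: it opens a real TCP socket that your client talks to over real HTTP. A minimal sketch (how the client receives its base URL is an assumption):

```elixir
defmodule MyApp.FetcherBypassTest do
  use ExUnit.Case, async: true

  setup do
    # Starts a real HTTP server on a random free port.
    bypass = Bypass.open()
    {:ok, bypass: bypass}
  end

  test "talks to a real server over a real socket", %{bypass: bypass} do
    Bypass.expect_once(bypass, "GET", "/status", fn conn ->
      Plug.Conn.resp(conn, 200, ~s({"ok": true}))
    end)

    # MyApp.Fetcher is a hypothetical caller that accepts a full URL.
    url = "http://localhost:#{bypass.port}/status"
    assert {:ok, %{status: 200}} = MyApp.Fetcher.fetch(url)
  end
end
```

Because the request goes through the full client stack (connection pool, encoding, redirects), this catches a class of bugs that a pure Mox stub cannot.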

Any thoughts on this or tools you could recommend?


I would recommend Patch to everybody. It is a universal and friendlier solution.

This is caused by different data in tests and in reality. You can just copy-paste data from real responses and use it in testing. Or you can even use solutions like ExVCR.
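
For reference, ExVCR records the real response to a cassette file on the first run and replays it on subsequent runs. A rough sketch (the adapter depends on which HTTP client you use, and the API module/URL are placeholders):

```elixir
defmodule MyApp.ExternalApiTest do
  use ExUnit.Case, async: false
  use ExVCR.Mock, adapter: ExVCR.Adapter.Hackney

  test "fetches user data" do
    # First run hits the network and records the exchange to
    # fixture/vcr_cassettes/user_data.json; later runs replay the cassette.
    use_cassette "user_data" do
      assert {:ok, response} = MyApp.Fetcher.fetch("https://api.example.com/users/1")
      assert response.status == 200
    end
  end
end
```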

Anyway, the final decision depends on the amount of time you have and what quality of tests you want.


I don’t care that much, whatever gets the job done.

This is what we are doing; however, I think it is highly unproductive and error-prone in the long run. The tests are brittle, and the only thing they have going for them is speed, since they do no IO.

OK, now this is something I was looking for! We are dealing with a wide variety of possible responses, so having the ability to collect them over time is exactly what we needed! I was thinking of implementing a similar custom solution at some point, so it’s great that I don’t have to anymore.

Thanks, I think we will end up using ExVCR in the future for local tests.

As for E2E tests, I think a custom server will be in order, as we have more things like TLS and certificate checking in play.
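
For the TLS side, one option is to boot a local HTTPS endpoint with a self-signed certificate from the test suite, so the client's certificate-validation paths actually get exercised. A rough sketch with Plug.Cowboy; the plug module and certificate paths are placeholders:

```elixir
# Hypothetical: a trivial plug served over HTTPS with a self-signed cert.
defmodule TestTLSEndpoint do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    send_resp(conn, 200, "hello over TLS")
  end
end

# Somewhere in test setup (the .pem fixtures would be generated ahead of
# time, e.g. with openssl or the x509 library):
{:ok, _pid} =
  Plug.Cowboy.https(TestTLSEndpoint, [],
    port: 8443,
    keyfile: "test/fixtures/selfsigned_key.pem",
    certfile: "test/fixtures/selfsigned.pem"
  )
```

A client configured to verify peers should then reject this endpoint unless the test explicitly trusts the self-signed certificate, which is exactly the behaviour worth asserting on.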

Beware: VCR solutions are hard to maintain


How so? I am mostly interested in custom cassettes option, that ideally would be recorded manually.

I guess if this gets nasty it could be as well replaced with a mix of a sqlite database + some utilities to fetch and record the http responses.

  • Cassettes can contain outdated data: some services have TTLs on their responses, or something like your ID in the third-party service can change, so the responses in the cassettes become stale. A lot of other stuff too.
  • They’re not easy to rerecord correctly, because manual changes to the cassettes have to be reapplied every time you rerecord them.

Good heads up, this can indeed become a problem!

Now this sucks. I still like the idea of recording the responses in some kind of DB, as writing them to a file or into source code manually is just not very manageable. I guess more research is in order, as this does not seem to be a trivial task.

I get that it might not be scalable to manually collect all of the API responses, but I have used ExVCR in the past and ended up… manually collecting all the API responses that I needed for tests. The API had slight changes – 3 times in a single year – but it was enough to piss us off because we shipped non-working features in production due to outdated cassettes.

So we rolled up our sleeves and what do you know, something like 70 API responses took 3 people 5-6 work days (and we were not doing only that, it was just an ongoing effort). Not a huge deal, though granted it’s annoying to do.

So I am with @hst337 here – take control of this.

Of course nobody is stopping you from scripting this somewhat. A while ago (the next time I needed something like this) I was able to devise a small text file format and just have a bash/zsh script loop over the lines inside the file, do curl requests, and record the responses. It took me 80% of the way there (though it was super specific and not generalizable).


Indeed, from the looks of it, this library is very specific; I would guess it would come in handy for testing microservices, where you could host all the instances locally and record their contracts.

I am thinking that using something like a SQLite database to record all the responses would be far more manageable, both in terms of debugging and editing: on the Elixir side you have Ecto, and you can view/edit it manually with any SQL client. Not a bad idea for a possible future library, as this could be extended beyond HTTP.
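
For what it’s worth, the SQLite idea could start from something as small as one Ecto schema (backed by ecto_sqlite3) plus record/replay helpers. This is an entirely hypothetical sketch; all module and field names are made up:

```elixir
defmodule MyApp.RecordedResponse do
  use Ecto.Schema

  schema "recorded_responses" do
    field :method, :string
    field :url, :string
    field :status, :integer
    field :headers, :map
    field :body, :binary
    timestamps()
  end
end

defmodule MyApp.Recorder do
  alias MyApp.{Repo, RecordedResponse}

  # Recording: after a real request, persist the response for later replay.
  def record(method, url, %{status: status, headers: headers, body: body}) do
    Repo.insert!(%RecordedResponse{
      method: to_string(method),
      url: url,
      status: status,
      headers: Map.new(headers),
      body: body
    })
  end

  # Replaying: the test implementation of the HttpClient behaviour looks
  # responses up by method + URL instead of hitting the network.
  def replay(method, url) do
    Repo.get_by(RecordedResponse, method: to_string(method), url: url)
  end
end
```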

Yep, you can very easily end up writing the next ExVCR that way. I’ve known a bunch of programmers who would have done it but couldn’t be bothered beyond doing the task at hand at the time – count me as part of that group. :slight_smile:


Good question. Here are my 2c:

I’ve seen that VCR and the like “feel” great, but as folks here pointed out, they get outdated. It’s a snapshot in time saying that things may have worked given a certain setup you had. A green VCR test doesn’t give me confidence, unfortunately, much like a test that relies only on Mox/unit tests.

Bypass works well, but I personally feel the tests are too complex. That’s just my preference; I’m a simple person :slight_smile: Even when the tests are done correctly, I don’t get good confidence from a successful test run.

I’ve seen projects that leverage their own http server and I personally dislike those. It’s a layer of complexity that I haven’t seen yield good returns in practice, but that may be biased to my experience (as all of the points in this answer by the way).

What gives me the most confidence is having integration tests that exercise the live implementation, where you need to load the correct keys and such, just like Dashbit discusses here:

  • set boundaries in your code so you can leverage Mox (do what the readme says)
  • create a test where you load real envs and the live impl is chosen.
  • set things up so these are excluded from CI runs, as they are meant to be run locally with the correct env vars/keys. You can use @moduletag or multiple @tags. So CI runs will just exercise the boundary and args (your Mox unit tests).
  • every once in a while run those from your local machine. You can set up mix aliases to help here. Depending on the service/whatever you are testing maybe you pay for calls/only have a certain number. In certain projects, before merging a PR (after reviews, dev, etc), I’d run these locally to gain confidence.
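
The tag-based exclusion in the steps above can be sketched like this (module names, env var, and URL are illustrative):

```elixir
# test/test_helper.exs — live tests are skipped by default.
ExUnit.start(exclude: [:live])

# test/my_app/live_api_test.exs — run locally with: mix test --only live
defmodule MyApp.LiveApiTest do
  use ExUnit.Case, async: false

  @moduletag :live

  test "real call against the sandbox environment" do
    # Fails loudly if the key is missing, instead of silently passing.
    api_key = System.fetch_env!("SANDBOX_API_KEY")

    assert {:ok, %{status: 200}} =
             MyApp.Fetcher.fetch("https://sandbox.example.com/ping", api_key: api_key)
  end
end
```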

This setup has given me (and a few teams) the most confidence when working with things we don’t fully control (when we should use Mox). I’ve personally seen:

  • the feedback loop be drastically decreased
  • caught errors in docs → docs say something, service does something else. You can now prove it and not be seen as a maniac on your team, questioning your sanity.
  • added documentation → since you can assert on results, this is very helpful to see outputs and help onboard folks into projects
  • caught staging/prod discrepancies in an API → things worked in the tests but cried in prod. Now you can repro and discuss; you have proof. It’s a little surprise when it happens, but a lot of the time these would happen and go undocumented/undiscussed. Once you can repro (by running the tests) you are playing a much better game imo :).

Downsides: usually these tests hit test environments/sandboxes, which in turn are NOT prod. Even though these have given me the most confidence, you can still get little surprises in prod. As usual, make sure you have logs and metrics to give yourself a better shot at handling the surprises, because they will come :slight_smile:

I’ve also seen a successful mix of Bypass and integration tests: even though there is no switch there (there is no live impl for Avalanche), we achieve the integration/live tests by not intercepting the request. It works well in this case.

PS: Oh, I mentioned integration tests and realized that this is an overloaded term. Much like “mock”, in some contexts/communities it means one thing and in others something different. So maybe a better name for these tests would be “live unit tests”, but naming things is hard.
