Getting the data for Mox expectations for API mocks in a sane manner

My app is making a bunch of API requests to a 3rd party and I’m writing integration tests, thus mocking responses with Mox.

The way I write expectations is that I IO.inspect() the response of every request by hand and copy the data (only the needed parts) into the expectations. If I change a test or a function, I just disable the mock module, print out the actual responses, copy-paste each response into its expectation, and enable MockModule again. It takes forever, but it's doable. Usually a change means getting one data point, so it's not too bad.

So one day the underlying endpoint changes. The tests stay the same. The functions stay the same. But the responses change. Now, if I want to keep my tests up to date, all of the tests need new expectations! I actually kept the tests unchanged until now, but on another day the signatures of most of these functions changed, so I need new responses across the board…

I’m pulling my hair out at this point.

I must be doing something wrong. There is no way this is how things are done - using debugging tools to fill in the expectations.

I can understand that in this situation people may not mock the responses from the APIs, but rather the already-parsed struct-like data. But I really want to test the whole process from start to finish; it kind of loses its meaning otherwise.

Is there another way to write expectations? And yes, they are very specific; I've already used stubs as much as I can.

How can I automate this? I'm tempted to just erase the tests and forget TDD (no, not really, but it's a sad situation).

1 Like

What about writing a test that hits the real thing or a sandbox? You'd load real keys and make it hit the real endpoints. These tests would only be run locally and would be excluded from a normal mix test run (which is what CI would run).

It basically captures what you were doing in your IEx sessions when running the real code (however you were using the IO.inspect/1 calls). I've seen these types of tests save teams a lot of time, mostly because they serve as documentation and as something anyone on the team can run. No passing scripts around, etc.

More info: Mocks and explicit contracts - Dashbit Blog

Along these lines (the article does a great job of showing this):

defmodule IntegrationTests do
  use ExUnit.Case, async: true

  # Tagged so these tests are excluded from a normal `mix test` run
  @moduletag :integration_tests

  ...
end

# test_helper.exs
ExUnit.start(exclude: [:integration_tests])
1 Like

Interesting read.

I think hitting the real API is not an option for me, because the requests take time; I wouldn't want to wait a full 2 minutes for the tests to finish. I also want the CI tests to be most of the tests.

I'm not confident I understand what tests you mean here. What exactly saves time? Do you mean when you write tests that hit APIs for real?

I'm going to give ExVCR another try; I just can't make it work properly.

Why do your expectations need to change? Naively I'd say you don't want to assert on the response you get from a third party; you want to assert on what those responses mean to your system's state. Otherwise you're testing the third-party system, not yours.

1 Like

Would Mneme - Snapshot testing tool for the busy programmer be of any help?
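
Roughly, as I understand its workflow, you start it in test_helper.exs and auto_assert writes the expected value into the test file for you (untested sketch; MyApp.Parser is a made-up name):

# test_helper.exs
ExUnit.start()
Mneme.start()

defmodule ParserTest do
  use ExUnit.Case, async: true
  use Mneme

  test "parses a raw API payload" do
    # On the first run Mneme shows the actual value and, if you accept it,
    # rewrites this line into a full assertion - no hand-copied data.
    auto_assert MyApp.Parser.parse(~s({"id": 1}))
  end
end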

If not, a way to automate the expectations would be to have an OpenAPI spec of the API you're consuming and use that plus macros to generate assertions. It would be tough to get started with, but doable.
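
A rough sketch of what I mean, assuming the spec is vendored at priv/openapi.json, Jason is available, and you only validate required keys (a real version would check types, nesting, etc.):

defmodule ApiAssertions do
  @external_resource "priv/openapi.json"
  @openapi "priv/openapi.json" |> File.read!() |> Jason.decode!()

  # One helper per schema in the spec, asserting that a decoded response
  # contains at least the schema's required keys.
  for {name, schema} <- @openapi["components"]["schemas"] do
    required = Map.get(schema, "required", [])

    def unquote(:"assert_#{Macro.underscore(name)}!")(response) do
      case unquote(required) -- Map.keys(response) do
        [] -> response
        missing -> raise "response is missing required keys: #{inspect(missing)}"
      end
    end
  end
end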

Could you add a header or identifier somewhere in the requests?

You enable some flag, disable the mock, and run the test suite against the real API, but with an intermediary layer (at the call site, or maybe a proxy in between, or maybe you keep the mock but it forwards the call because of some flag). And you write each response to a file named after the identifier.

Then you disable the flag and go back to normal mode. In this mode the mocks look up the identifier from the request and return the response from the file.

So when expectations change, you just do that once. You keep the response files in Git because they are needed in CI, and the diffs show you what response changed and how, so you can see whether it seems fine or not.

Okay, I've thought about this long and hard. I kept feeling that I was doing something wrong, so I spent days refactoring and simplifying the architecture, and I think I have a clear picture now.

The problem was that my requests were messy and there was no clear point separating business logic from request stuff. So here's how I'm thinking about it now:

I think this is ultimately the correct approach, regardless of how I test the business logic and the API requests. I've simplified things so that I now have a single module for most of the requests (which calls into the complex stuff from there). For a start, I'll just not run the API tests automatically and instead mock the business logic that happens after the requests.

This is an interesting one. I do in fact want to test that my whole system works even if the 3rd-party API changes, but you're right that I don't need to test that very often.

Another issue here is that the API communication has complexity in itself, even before any business logic happens. Say one function makes 3 requests and puts together a single struct that way; I do want to test this. But again, I think keeping the business and API tests separate is the solution here.
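
To make it concrete, the kind of function I mean looks roughly like this (names made up; the API module is passed in so it can be swapped for a mock):

defmodule MyApp.Orders do
  defstruct [:id, :customer_name, :items]

  # Three requests, one struct out - this composition is what I want to test.
  def fetch(api, id) do
    with {:ok, order} <- api.get("/orders/#{id}"),
         {:ok, customer} <- api.get("/customers/#{order["customer_id"]}"),
         {:ok, items} <- api.get("/orders/#{id}/items") do
      {:ok, %__MODULE__{id: order["id"], customer_name: customer["name"], items: items}}
    end
  end
end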

Oh my god, this is awesome! However, I don't think it caches any requests like ExVCR does, so it's not quite on topic. But I may use this in the future anyway; it looks like a great idea.

This is a good idea in general, I think. But it adds a lot of complexity that I don't really want to spend my time fixing later. Also, this is basically what ExVCR does (and they probably do it better), and yet I can't really get that thing to work properly. And even when I do, it's kind of a black box, and I'm not sure my own solution would be any better.

Still, with my new clean separation of business and API concerns, I think I will eventually need a solution like this, e.g. if I want to run my API tests more often.

Edit: oh, I should add that I'm going to add a flag (as a module attribute) to enable printing of responses from my central module, because I still need to write expectations for the tests.
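
Something like this is what I have in mind (untested sketch, names made up):

defmodule MyApp.Api do
  # Flip this config flag on while writing expectations, off otherwise.
  @print_responses Application.compile_env(:my_app, :print_api_responses, false)

  def request(url) do
    url
    |> do_request()
    |> maybe_inspect()
  end

  defp maybe_inspect(resp) do
    if @print_responses, do: IO.inspect(resp, label: "API response")
    resp
  end

  # Stand-in for the real request code.
  defp do_request(_url), do: {:ok, %{}}
end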


Thanks for chipping in, everyone; you really sped up my thinking here. I will probably refer back here again and again in the coming years :smiley:

2 Likes

Probably, but it relies on mocking, whereas what I propose just needs an if somewhere, which is much simpler. And even if you are not confident that you can pull it off, I'm sure you will very quickly see that you actually can.

Something like this could be a good start:

defmodule MyApi do
  # Compile-time switch: :real for normal use, :record to capture responses
  # to disk, :mock to replay them in tests without touching the network.
  case Application.compile_env(:my_app, :api_request_mode, :real) do
    :real ->
      def request(url, method, body, headers) do
        do_request(url, method, body, headers)
      end

    :record ->
      def request(url, method, body, headers) do
        file = hash_req(url, method, body, headers) <> ".json"
        resp = do_request(url, method, body, headers)
        # Save the response for later replay, then return it unchanged.
        record_response(resp, file)
        resp
      end

    :mock ->
      def request(url, method, body, headers) do
        file = hash_req(url, method, body, headers) <> ".json"
        mock_response(file)
      end
  end
end

No mocking library or macros to get in the way.
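
The helpers are left out above, but they could look something like this (my own fill-in; it assumes Jason is available and that responses are JSON-serializable, and these go inside MyApi):

  @fixtures_dir "test/fixtures/api_responses"

  # Deterministic file name derived from the whole request.
  defp hash_req(url, method, body, headers) do
    :crypto.hash(:sha256, :erlang.term_to_binary({url, method, body, headers}))
    |> Base.encode16(case: :lower)
  end

  defp record_response(resp, file) do
    File.mkdir_p!(@fixtures_dir)
    File.write!(Path.join(@fixtures_dir, file), Jason.encode!(resp))
  end

  defp mock_response(file) do
    @fixtures_dir |> Path.join(file) |> File.read!() |> Jason.decode!()
  end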

2 Likes

Very much that. Keep the code that directly depends on the API shallow, e.g. just pulling certain values out of individual requests, and let the business logic/composition of those values work only with values that are known to exist.

The more business logic you can decouple from the actual requests being made, the simpler it will be to test.
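
E.g. something along these lines (all names made up):

# The only code that knows the raw response shape: it pulls out just the
# values the rest of the app needs.
defmodule MyApp.Rates.Client do
  def usd_rate(http) do
    with {:ok, %{"data" => %{"rates" => %{"USD" => rate}}}} <- http.get("/rates") do
      {:ok, rate}
    end
  end
end

# Business logic: works only with values known to exist, trivial to test.
defmodule MyApp.Rates do
  def convert(amount, rate), do: amount * rate
end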

1 Like

Sure; that's why mock is better in these situations compared to Mox, which I view more as a testing tool for stuff you control. mock I use more for 3rd-party things (especially those that would hit the network).

And yep, this is crucial. I had former colleagues cringe at this coding policy, but I convinced them after demonstrating that it allows our tests to be much more precise and to catch more potential bugs.

The problem was that my requests were messy and there was no clear point separating business logic from request stuff. So here's how I'm thinking about it now:

Excellent! This is where Mox shines, in my opinion. It gets you thinking about boundaries and contracts first, and when you do that, the tests usually fall into place.

I'm not confident I understand what tests you mean here. What exactly saves time? Do you mean when you write tests that hit APIs for real?

The time saved in my experience was between devs on bigger teams, not on machines/compute; the former is way more expensive than the latter :slight_smile: If you have a lot of people devving against an API, there are little setup “tricks” that usually get shared in Slack, in docs, or by going up in IEx history :slight_smile: Adding those to a file that anyone can run saved time. It also helped us find a bug once (integrating with QuickBooks) where the docs said one thing but their API said another. It's just nice to have explicit, runnable code, not only to show how one can dev, but also to assert on real/current behavior, not just that it worked at some point (staging/prod can still be different, though).

FWIW this is my “TDD” process when integrating with web APIs:

  1. Launch console env and use my client lib of choice to test out calls to the various endpoints I will need
  2. Implement a NewAPIClient module that makes these calls I just tested manually, and does any data mapping to return data structured in a way that is clear and simple for my use case (possibly literally structs if called for, but usually just maps)
  3. Write tests for my business logic using Mox to stub out calls to my module with various response cases (see the sketch after this list)
  4. Implement business logic
  5. QA in staging env
  6. If necessary, repeat 1-5
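
For step 3, a minimal sketch (MyApp.NewAPIClient as the behaviour and MyApp.Accounts.sync_user/1 are made-up names standing in for real code):

# test_helper.exs
Mox.defmock(MyApp.MockAPIClient, for: MyApp.NewAPIClient)

defmodule MyApp.AccountsTest do
  use ExUnit.Case, async: true
  import Mox

  # Fail the test if an expectation goes unused.
  setup :verify_on_exit!

  test "a missing user is surfaced as :unknown_user" do
    expect(MyApp.MockAPIClient, :fetch_user, fn "u-1" -> {:error, :not_found} end)

    assert {:error, :unknown_user} = MyApp.Accounts.sync_user("u-1")
  end
end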

Back in Ruby land I used ShamRack to add additional tests in step 2, but as Bypass appears to be abandoned I have largely given up on those. This hasn't bitten me so far, because the minimal logic in the client tends to change very infrequently, if at all, and requires manual testing in almost every case anyway.

1 Like

This is very similar to my process!

Except that I still test the client at point 2, typically using Bypass. I tried to use ExVCR, but I found it noticeably slower than Bypass for my use cases; the upside of ExVCR is that writing tests typically takes less work.
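
A typical one looks roughly like this (the client and its payload are made up):

defmodule MyApp.NewAPIClientTest do
  use ExUnit.Case, async: true

  setup do
    # Bypass opens a real HTTP server on a local port we control.
    {:ok, bypass: Bypass.open()}
  end

  test "fetch_user/2 maps the payload down to what we use", %{bypass: bypass} do
    Bypass.expect_once(bypass, "GET", "/users/u-1", fn conn ->
      Plug.Conn.resp(conn, 200, ~s({"id": "u-1", "name": "Jane", "plan": "pro"}))
    end)

    base_url = "http://localhost:#{bypass.port}"

    assert {:ok, %{id: "u-1", name: "Jane"}} =
             MyApp.NewAPIClient.fetch_user(base_url, "u-1")
  end
end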

I didn’t get the impression that Bypass was abandoned :slight_smile:

I mean, the repo hasn’t had a commit in 3 years…

The problem I ran into is that it just didn't seem to work with Ranch 2.x, but that is probably a discussion for a different thread.