Testing a controller that makes outside API calls

Excellent advice. This also encourages you to keep your mocks thin, and makes use of the API data easy to expand as needs dictate. Maybe this is just me, but I have a tendency to pluck the few values from the API return I think I need for my model, only to find later there’s gold there that I missed. With raw API returns available locally, everything’s always there to be mined later.

3 Likes

Yep. I’ve had 20MB JSON fixtures and just made utility functions to get pieces of them for smaller unit-testing purposes.
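A minimal sketch of those utility functions, assuming a hypothetical fixture path and JSON shape: decode the big fixture once, then pull out just the slice a given unit test needs.

```elixir
defmodule TestData.BigFixture do
  # Path and JSON layout are made up for illustration.
  @path "test/fixtures/big_payload.json"

  # Decode the large fixture; if decoding becomes slow you
  # could cache the result in :persistent_term.
  def full do
    @path |> File.read!() |> Jason.decode!()
  end

  # Extract just the piece a small unit test cares about,
  # e.g. TestData.BigFixture.slice(["data", "users"])
  def slice(path) when is_list(path) do
    get_in(full(), path)
  end
end
```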

I went even further in one project: I made a special tag for certain tests (called it :exhaustive) and excluded it by default so we don’t overload our CI – but made it a policy to run the tests tagged with it once a week. They used the complete cached payloads from the live API, no matter how big they were (one was 215MB even; long live Zstandard level 19 compression and git lfs!).

A lot of teams grumble about this practice, for reasons they were never able to explain to me in a satisfactory manner, but I’ve uncovered a frightening number of bugs in code just by using cached real data.
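For anyone wanting to try the tag trick: ExUnit supports excluding tags by default and re-including them from the command line. A sketch, using the tag name from above:

```elixir
# test/test_helper.exs
# Skip :exhaustive tests in normal runs and in CI.
ExUnit.start(exclude: [:exhaustive])

# In a test module, opt a test into the weekly run:
defmodule MyApp.BigPayloadTest do
  use ExUnit.Case, async: true

  @tag :exhaustive
  test "handles the complete cached payload" do
    # This test only runs when invoked with:
    #   mix test --include exhaustive
    # or, to run nothing but the tagged tests:
    #   mix test --only exhaustive
  end
end
```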

5 Likes

To Bypass or Mox
It can be overwhelming trying to understand whether I should use Mox or Bypass, while at the same time trying to understand behaviours. I’ve got the weekend to further educate myself and explore. Not complaining; it’s just a lot of concepts at once, and they have subtle differences.

I like the idea of using a Behaviour (a contract) that I then implement in a module, which I can then mock in my tests.

I have downloaded real JSON results that I can reuse, as you suggested. I keep them in my test directory.

This community is awesome; I appreciate everyone’s feedback and direction so far.

3 Likes

I sympathize, dude, but there simply are no shortcuts. At some point you have to roll up your sleeves and earn your battle scars. Glad you are motivated to do it!

This community will be extremely helpful and supportive if you show that you’ve done your homework – or are willing to do it. So keep at it, you’ll be a master in no time!

3 Likes

There is also patch

It’s not specifically related to HTTP, but it could allow you to override (patch) the function that makes the API call :wink:

Really nice lib btw
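For anyone curious, overriding the function that makes the API call looks roughly like this with the patch library (the module name and return shape here are assumptions; check the library docs for the exact API):

```elixir
defmodule MyApp.TweetTest do
  use ExUnit.Case
  use Patch  # from the `patch` hex package

  test "get_tweet/1 without hitting the network" do
    # Replace the real function with a canned return value;
    # MyApp.Api.ApiClient is a hypothetical client module.
    patch(MyApp.Api.ApiClient, :get_tweet, {:ok, %{"id" => "100"}})

    assert {:ok, %{"id" => "100"}} = MyApp.Api.ApiClient.get_tweet("100")
  end
end
```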

2 Likes

This is how I decide. If the API module that talks to the external API (this would be TwitterClient in your example) is part of my codebase, I write tests for it and simulate the interactions with the external API using Bypass. If the module is part of a library, then it’s (hopefully) already tested and so I don’t write any tests for it. When testing a consumer of the API module, I mock the API module.

This seems pretty straightforward to me, but I’d be interested to hear what other people on this thread have to say, maybe they have a different opinion.

When writing tests using Bypass, I can confirm that @dimitarvp’s suggestion of dumping responses from the real API to generate fixtures is indeed a very good one! I’m glad that others are also doing it :slight_smile:
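For reference, serving a dumped real-API response through Bypass can look roughly like this (the fixture path, route, and the client’s `base_url` option are all assumptions for illustration):

```elixir
defmodule MyApp.ApiClientTest do
  use ExUnit.Case, async: true

  setup do
    bypass = Bypass.open()
    {:ok, bypass: bypass}
  end

  test "parses a real captured payload", %{bypass: bypass} do
    # tweet.json was dumped once from the live API and committed
    Bypass.expect_once(bypass, "GET", "/tweets/100", fn conn ->
      Plug.Conn.resp(conn, 200, File.read!("test/fixtures/tweet.json"))
    end)

    url = "http://localhost:#{bypass.port}"
    assert {:ok, _tweet} = MyApp.Api.ApiClient.get_tweet("100", base_url: url)
  end
end
```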

1 Like

If you can keep this in the back of your mind to check out at some stage (ie. not to add to the immediate overwhelm!), bear in mind there’s at least one other approach to creating a module ‘api’, using protocols. There’s a bit of back and forth about the pros and cons, and how they relate to mocking, here: Mox and Protocols? - #13 by svarlet.

How do you define ‘talks to the API’ here? If for example your module uses HTTPoison (or similar), do you consider this all under your control, so use Bypass, or would you mock out HTTPoison (or perhaps your HTTPoison.Base callback module)?

Background: I find myself uncertain in concrete cases, similarly to @neuone, re which approach to take (as I sometimes do with behaviours vs protocols). I tend to start by analogy with other languages I’ve used, but then the analogies don’t seem too convincing. I probably need to read more good Elixir code.

1 Like

If my module uses HTTPoison or Tesla, then I consider it to be the module talking to the API. HTTPoison and Tesla are convenience libraries that my module uses to send HTTP requests. However it’s my module that needs to decide which HTTP requests to send and therefore the one operating at the HTTP level and “talking to the API”.

As you mention, in this case, mocking HTTPoison or Tesla is an option. However, I don’t do that anymore and use Bypass instead. What convinced me is this article: Testing External Web Requests in Elixir? Roll Your Own Mock Server | by Sophie DeBenedetto | Flatiron Labs | Medium, from the same author of the post you previously shared. I recommend reading it and also José’s article about mocks linked from it (Mocks and explicit contracts « Plataformatec Blog).

2 Likes

Thanks, very useful links. Echoing the old ‘how can I know what I think until I see what I write?’ cliche, I’m not entirely sure about the Bypass approach, not having used it yet. But I just happen to be working on an api client, currently using Mox, so I’ll give this approach a try and see how it looks.

1 Like

I did have one problem with this approach - which was getting my api module to use the Bypass endpoint. It’s buried a few dependency layers beneath my tests, and I didn’t want to pass params all the way through.

My first stab was to use Application config to allow a url override, which I set at runtime in tests. Then (duh!) I realised this messed up async tests. Not wanting to spend more time on this for now, I’ve turned off ExUnit async for those tests, but I consider that a workaround rather than a fix. I’ve since found there’s some discussion of this exact problem at Bypass and async tests with ex_unit

My first stab was to use Application config to allow a url override, which I set at runtime in tests. Then (duh!) I realised this messed up async tests.

Any reason why you can’t configure the url globally in config/test.exs? If there is no reason, then that’s the easiest place to configure it, and tests should be able to run in async mode. If you can’t, then you’ll have to disable async for those tests or resort to the fancier solution suggested in the thread: Bypass and async tests with ex_unit - #9 by josevalim. (I never tried it out, btw)

1 Like

The problem is the port each bypass process listens on. Configuring this globally results in async tests routing requests to the wrong bypass.

To save any more time spent messing about with this, I did initially set async to false. Then it nagged at me! So I went back and implemented something inspired by Jose Valim’s solution, which works fine.

Glad to hear you solved your issue! :+1:

Still, if you find some time, could you please elaborate a bit more on what your problem exactly was? I’m curious because I always use the config-based approach and never had any issues.

Sure. Bear in mind I’m new to most of this, so I might easily have missed something. But AFAIK I can’t put the bypassed url into the compile time test config (ie. config/test.exs), as I can’t get the port from bypass until runtime.

My initial approach was to try something like this in my ExUnit Case:


setup do
  bypass = Bypass.open()
  url = "http://localhost:#{bypass.port}"
  Application.put_env(:app, :api_endpoint, url)

  Bypass.stub(bypass, "GET", "/checklists/1.json", fn conn ->
    Plug.Conn.resp(conn, 200, TestData.Load.list())
  end)

  {:ok, %{api_token: "token"}}
end

… and then get the url out of the Application config in my API client module.

The problem here is that with the tests running async, the global :api_endpoint config writes get interleaved unpredictably, so the client module requests go to the wrong ports & hence wrong stubs. The result is failing tests due to unexpected or no responses. Async and global rarely mix well …

This is the same issue axelson refers to here: Bypass and async tests with ex_unit - #3 by axelson

(Presumably, as you haven’t had any such problems, your setup must be a bit different from the above.)

1 Like

Yes, you can :slight_smile:

This is what I do:

config/test.exs:

config :app, :api_endpoint, "http://localhost:9999"

In your test:

setup do
  bypass = Bypass.open(port: 9999)
  ...
end

As you can see, by passing the port option, you can tell Bypass which port to use. This allows you to configure the URL globally. This test can now run asynchronously with other tests, as it doesn’t need to overwrite the global test configuration.

I hope this helps.

1 Like

Thanks, yes, I had read in the docs that you could pass a port, but I need my tests completely isolated, as some need different responses to the same calls.

Actually my first thought reading your post was ‘that couldn’t possibly work’ as multiple processes couldn’t bind the same port. But then I remembered these aren’t OS processes of course. I don’t know much about BEAM internals, but presumably it multiplexes over a single OS thread bound to a port. In any case, a port per Bypass process suits my situation better, at the cost of a little additional complexity.

To clarify: there’s never multiple processes attempting to bind to the same port in my example. There’s only one test process using port 9999, and that’s the one testing the API module.

1 Like

Question

What are your thoughts on stubbing out all of the functions in a mock module with an InMemory implementation in another module, instead of sprinkling fixtures across multiple test cases?

So the idea is I have a Client for the live version but I also have an InMemory module that will respond with fixtures of JSON I have downloaded.

For example

Let’s say I have this controller I need to test and I need to mock MyApp.Api.get_tweet/1 that is nested inside.

defmodule MyAppWeb.HomeController do
  use MyAppWeb, :controller
  action_fallback MyAppWeb.FallbackController

  def index(conn, params) do
    with {:ok, tweet} <- MyApp.Api.get_tweet(params["user_id"]) do
      render(conn, "index.json", tweet: tweet)
    end
  end
end

ApiBehaviour - The behaviour

defmodule MyApp.Api.ApiBehaviour do
  @callback get_tweet(user_id :: String.t()) :: tuple()
end

Client - Live implementation

defmodule MyApp.Api.ApiClient do
  @behaviour MyApp.Api.ApiBehaviour

  @impl MyApp.Api.ApiBehaviour
  def get_tweet(user_id) do
    # omit ... fetches the live server
  end
end

InMemory - Fake response implementation

defmodule MyApp.Api.InMemory do
  @moduledoc """
  The responses from here are real JSON responses that
  have been downloaded and are accessed from the fixtures
  in the test directory.
  """
  @behaviour MyApp.Api.ApiBehaviour

  # I just need a user_id of 9999 to generate an error.
  # It does not imply that the real API returns an error for 9999;
  # I just want a canned error response.
  # (This clause must come before the catch-all below, and the
  # params come in as strings, hence the "9999" pattern.)
  @impl MyApp.Api.ApiBehaviour
  def get_tweet("9999"), do: {:error, :not_found}

  def get_tweet(_user_id), do: {:ok, user_json_response()}

  defp user_json_response() do
    # put stuff in here that you got from the actual live / production API
    # real JSON response would appear here
  end
end

The Context/Boundary

defmodule MyApp.Api do
  def get_tweet(user_id), do: api_impl().get_tweet(user_id)
  defp api_impl(), do: Application.get_env(:my_app, :api)
end

/test/test_helpers.exs

Mox.defmock(ApiMock, for: MyApp.Api.ApiBehaviour) # <- Add this
Application.put_env(:my_app, :api, ApiMock) # <- Add this
ExUnit.start()

Using Mox.stub_with/2 within setup

In my setup I can use Mox.stub_with/2 and put MyApp.Api.InMemory and write tests that would access the fixtures responses from MyApp.Api.InMemory.

I can even swap out MyApp.Api.InMemory for MyApp.Api.ApiClient to try out the real implementation.

defmodule MyApp.HomeControllerTest do
  use MyApp.Web.ConnCase

  setup do
    Mox.stub_with(ApiMock, MyApp.Api.InMemory)
    :ok
  end

  describe "GET /100" do
    test "success, it gets the tweet", %{conn: conn} do
      conn = get(conn, "/100")
      assert conn.params["user_id"] == "100"
      # json_response/2 returns the decoded JSON body (a map), not a tuple
      assert %{} = json_response(conn, 200)
    end

    test "error, for unknown", %{conn: conn} do
      conn = get(conn, "/9999")
      assert conn.params["user_id"] == "9999"
      # the FallbackController turns {:error, :not_found} into a 404
      assert json_response(conn, 404)
    end
  end
end
The MyApp.Api.InMemory module looks appealing as that module can be the one central place for placing my json responses. However something about it feels off. It’s too hidden maybe? I don’t have enough experience using mocks to know what I’m feeling.

Using expect

defmodule MyApp.HomeControllerTest do
  use MyApp.Web.ConnCase
  import Mox

  setup :verify_on_exit!

  describe "GET /100" do
    test "success, it gets the tweet", %{conn: conn} do
      expect(ApiMock, :get_tweet, fn user_id ->
        assert user_id == "100"
        valid_json_response()
      end)

      conn = get(conn, "/100")
      assert conn.params["user_id"] == "100"
      assert json_response(conn, 200)
    end

    test "error, for unknown", %{conn: conn} do
      expect(ApiMock, :get_tweet, fn user_id ->
        assert user_id == "9999"
        error_response()
      end)

      conn = get(conn, "/9999")
      assert conn.params["user_id"] == "9999"
      # the FallbackController turns the error tuple into a 404
      assert json_response(conn, 404)
    end
  end

  # Put the response here
  defp valid_json_response() do
    # omit for more room
    # returning some {:ok, json}
  end

  # Put the response here
  defp error_response() do
    # omit for more room
    # returning some {:error, json}
  end
end

This approach requires you to put your responses in the test file, which can access the fixtures. Nothing wrong with this approach.

Feedback

I like the idea of stubbing out all of the functions in a mock module like MyApp.Api.InMemory with real JSON responses.

I know this approach isn’t a new concept, but am I maybe thinking about MyApp.Api.InMemory usage incorrectly?

I’m looking for feedback on what could go wrong in the future if I approach it this way.

1 Like

Way too much work. Let’s not forget that tests are just a dev artifact, i.e. we are covering our own asses to make sure production doesn’t blow up in our faces.

Have a stringent approach to quality but also be minimal. Identify the places in the code that give you the most anxiety and start with them.

Your approach is good. I like mox a lot but there’s also mock which I found extremely convenient. And then there’s also the idea of making your own mocks as you seemed inclined to do.

All are valid.

I had projects where Mox was too much trouble because adding contracts (@behaviour) post-factum to huge business logic modules was not trivial. But it’s pretty much perfect for the cases where you’re not interested in troubleshooting 3rd party API and are OK with your worker crashing and retrying a minute later. Which is no trivial amount of projects out there, by the way. For them Mox is a perfect fit.

I used Mock when I wanted to preserve the maximum amount of the mocked library’s original behaviour and only stub out the parts where it hits the network. Mox doesn’t do so well here: with it you just ignore the 1000 lines of code in the mocked library, return {:ok, list_of_structs}, and be done with it. While that feels neat, it might ignore too much context, and it might conceal a bug you hit in production and can’t replicate locally.
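For comparison, the mock library’s with_mock replaces only the functions you list when given the :passthrough option, leaving the rest of the module’s original code in place (HTTPoison and the response body here are just for illustration):

```elixir
defmodule MyApp.PartialMockTest do
  use ExUnit.Case
  import Mock

  test "stubs out only the network call" do
    with_mock HTTPoison, [:passthrough],
      get: fn _url ->
        {:ok, %HTTPoison.Response{status_code: 200, body: ~s({"id": "100"})}}
      end do
      # get/1 is stubbed; other HTTPoison functions pass through
      # to the library's original implementation.
      assert {:ok, %HTTPoison.Response{status_code: 200}} =
               HTTPoison.get("https://example.com/tweets/100")
    end
  end
end
```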

I only rolled my own mock once and kind of regret it, but it was a very interesting exercise. And it helped us back then, because we wanted to ignore part of the library’s code but not all of it. For such partial mocks Mox could have also worked, by the way, but it would have meant rewriting parts of the library, and we didn’t want to do that.

So here’s the advice most programmers hate hearing: ItDepends™.

(RE: Mox’s helpers, I like both expect and stub.)