Testing a controller that makes outside API calls

I did have one problem with this approach, which was getting my API module to use the Bypass endpoint. It's buried a few dependency layers beneath my tests, and I didn't want to pass parameters all the way through.

My first stab was to use Application config to allow a URL override, which I set at runtime in tests. Then (duh!) I realised this messed up async tests. Not wanting to spend more time on this for now, I've turned off ExUnit async for those tests, but I consider that a workaround rather than a fix. I've since found there's some discussion of this exact problem at Bypass and async tests with ex_unit.


Any reason why you can't configure the URL globally in config/test.exs? If there is no reason, then that's the easiest place to configure it, and the tests should be able to run in async mode. If you can't, then you'll have to disable async for those tests or resort to the fancier solution suggested in the thread: Bypass and async tests with ex_unit - #9 by josevalim. (I never tried it out, btw.)


The problem is the port each Bypass process listens on. Configuring this globally results in async tests routing requests to the wrong Bypass instance.

To avoid spending any more time messing about with this, I did initially set async to false. Then it nagged at me! So I went back and implemented something inspired by José Valim's solution, which works fine.
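
Roughly, the idea is something like this (a simplified sketch rather than my exact code; the module name MyApp.Api.Endpoint and the :api_endpoint key are just placeholders). Each async test stores its own Bypass URL in its process dictionary, and the client falls back to the global config when no override is present. As far as I understand, this works for Phoenix controller tests because ConnTest dispatches the request in the test process itself.

defmodule MyApp.Api.Endpoint do
  @moduledoc """
  Hypothetical helper: per-process API endpoint override for async tests.
  """

  # Called from a test's setup block with the Bypass URL.
  def put(url), do: Process.put(:api_endpoint, url)

  # Called by the API client: prefer the per-process override, otherwise
  # fall back to the global Application config (config/test.exs etc.).
  def get do
    Process.get(:api_endpoint) || Application.fetch_env!(:app, :api_endpoint)
  end
end

In the test setup that becomes MyApp.Api.Endpoint.put("http://localhost:#{bypass.port}") instead of Application.put_env/3.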

Glad to hear you solved your issue! :+1:

Still, if you find some time, could you please elaborate a bit more on what exactly your problem was? I'm curious because I always use the config-based approach and have never had any issues.

Sure. Bear in mind I'm new to most of this, so I might easily have missed something. But AFAIK I can't put the bypassed URL into the compile-time test config (i.e. config/test.exs), as I can't get the port from Bypass until runtime.

My initial approach was to try something like this in my ExUnit Case:


setup do
  bypass = Bypass.open()
  url = "http://localhost:#{bypass.port}"
  Application.put_env(:app, :api_endpoint, url)

  Bypass.stub(bypass, "GET", "/checklists/1.json", fn conn ->
    Plug.Conn.resp(conn, 200, TestData.Load.list())
  end)

  {:ok, %{api_token: "token"}}
end

… and then get the URL out of the Application config in my API client module.
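
For context, the client side looked something like this (a simplified sketch; the module name and the use of HTTPoison are placeholders):

defmodule MyApp.ApiClient do
  # Reads the base URL from Application config at call time, which is what
  # made the runtime put_env override seem necessary in the first place.
  def get_checklist(id) do
    base = Application.fetch_env!(:app, :api_endpoint)
    HTTPoison.get("#{base}/checklists/#{id}.json")
  end
end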

The problem here is that with the tests running async, the global :api_endpoint config writes get interleaved unpredictably, so the client module's requests go to the wrong ports and hence the wrong stubs. The result is failing tests due to unexpected or missing responses. Async and global rarely mix well …

This is the same issue axelson refers to here: Bypass and async tests with ex_unit - #3 by axelson

(Presumably, as you haven’t had any such problems, your setup must be a bit different from the above.)


Yes, you can :slight_smile:

This is what I do:

config/test.exs:

config :app, :api_endpoint, "http://localhost:9999"

In your test:

setup do
  bypass = Bypass.open(port: 9999)
  ...
end

As you can see, by passing the port option, you can tell Bypass which port to use. This allows you to configure the URL globally. This test can now run asynchronously with other tests, as it doesn’t need to overwrite the global test configuration.

I hope this helps.


Thanks, yes, I had read in the docs that you could pass a port, but I need my tests completely isolated, as some need different responses to the same calls.
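
As a contrived illustration (module names and responses made up): two async test modules might stub the same path with different responses, so each needs its own Bypass, and therefore its own port.

defmodule HappyPathTest do
  use ExUnit.Case, async: true

  setup do
    bypass = Bypass.open()

    Bypass.stub(bypass, "GET", "/checklists/1.json", fn conn ->
      Plug.Conn.resp(conn, 200, ~s({"items": []}))
    end)

    {:ok, bypass: bypass}
  end
end

defmodule ErrorPathTest do
  use ExUnit.Case, async: true

  setup do
    bypass = Bypass.open()

    # Same path, different canned response.
    Bypass.stub(bypass, "GET", "/checklists/1.json", fn conn ->
      Plug.Conn.resp(conn, 500, ~s({"error": "boom"}))
    end)

    {:ok, bypass: bypass}
  end
end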

Actually my first thought reading your post was ‘that couldn’t possibly work’ as multiple processes couldn’t bind the same port. But then I remembered these aren’t OS processes of course. I don’t know much about BEAM internals, but presumably it multiplexes over a single OS thread bound to a port. In any case, a port per Bypass process suits my situation better, at the cost of a little additional complexity.

To clarify: there are never multiple processes attempting to bind to the same port in my example. There's only one test process using port 9999, and that's the one testing the API module.


Question

What are your thoughts on stubbing out all of the functions in a mock module with an InMemory implementation in another module, instead of sprinkling fixtures across multiple test cases?

So the idea is that I have a Client for the live version, but I also have an InMemory module that will respond with JSON fixtures I have downloaded.

For example

Let's say I have this controller I need to test, and I need to mock the MyApp.Api.get_tweet/1 call nested inside it.

defmodule MyAppWeb.HomeController do
  use MyAppWeb, :controller

  action_fallback MyAppWeb.FallbackController

  def index(conn, params) do
    with {:ok, tweet} <- MyApp.Api.get_tweet(params["user_id"]) do
      render(conn, "index.json", tweet: tweet)
    end
  end
end

ApiBehaviour - The behaviour

defmodule MyApp.Api.ApiBehaviour do
  @callback get_tweet(user_id :: String.t()) :: tuple()
end

Client - Live implementation

defmodule MyApp.Api.ApiClient do
  @behaviour MyApp.Api.ApiBehaviour

  @impl MyApp.Api.ApiBehaviour
  def get_tweet(user_id) do
    # omit ... fetches the live server
  end
end

InMemory - Fake response implementation

defmodule MyApp.Api.InMemory do
  @moduledoc """
  The responses from here are real JSON responses that
  have been downloaded and are accessed from the fixtures
  in the test directory.
  """
  @behaviour MyApp.Api.ApiBehaviour

  # I just need *some* user_id ("9999" here) to produce a canned error.
  # This doesn't imply that a real user_id of 9999 returns an error; it's
  # only a convention so tests can exercise the error path. This specific
  # clause has to come before the catch-all clause below.
  @impl MyApp.Api.ApiBehaviour
  def get_tweet("9999"), do: {:error, :not_found}

  def get_tweet(_user_id), do: {:ok, user_json_response()}

  defp user_json_response() do
    # put stuff in here that you got from the actual live / production API
    # a real JSON response would appear here
  end
end

The Context/Boundary

defmodule MyApp.Api do
  def get_tweet(user_id), do: api_impl().get_tweet(user_id)
  defp api_impl(), do: Application.get_env(:my_app, :api)
end
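
For completeness, outside of tests :api needs to point at the real client somewhere in the config (the exact file here is just an example):

# config/config.exs (or config/dev.exs and config/prod.exs)
# Outside of tests, the real HTTP client is used.
config :my_app, :api, MyApp.Api.ApiClient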

test/test_helper.exs

Mox.defmock(ApiMock, for: MyApp.Api.ApiBehaviour) # <- Add this
Application.put_env(:my_app, :api, ApiMock) # <- Add this
ExUnit.start()

Using Mox.stub_with/2 within setup

In my setup I can use Mox.stub_with/2 with MyApp.Api.InMemory and write tests that use the fixture responses from MyApp.Api.InMemory.

I can even swap out MyApp.Api.InMemory for MyApp.Api.ApiClient to try out the real implementation.

defmodule MyAppWeb.HomeControllerTest do
  use MyAppWeb.ConnCase

  setup do
    Mox.stub_with(ApiMock, MyApp.Api.InMemory)
    :ok
  end

  describe "GET /100" do
    test "success, it gets the tweet", %{conn: conn} do
      conn = get(conn, "/100")
      assert conn.params["user_id"] == "100"
      assert %{} = json_response(conn, 200)
    end

    test "error, for unknown", %{conn: conn} do
      conn = get(conn, "/9999")
      assert conn.params["user_id"] == "9999"
      # FallbackController is assumed to render {:error, :not_found} as a 404
      assert json_response(conn, 404)
    end
  end
end

The MyApp.Api.InMemory module looks appealing, as it can be the one central place for my JSON responses. However, something about it feels off. It's too hidden, maybe? I don't have enough experience using mocks to know what I'm feeling.

Using expect

defmodule MyAppWeb.HomeControllerTest do
  use MyAppWeb.ConnCase
  import Mox

  setup :verify_on_exit!

  describe "GET /100" do
    test "success, it gets the tweet", %{conn: conn} do
      expect(ApiMock, :get_tweet, fn user_id ->
        assert user_id == "100"
        valid_json_response()
      end)

      conn = get(conn, "/100")
      assert conn.params["user_id"] == "100"
      assert %{} = json_response(conn, 200)
    end

    test "error, for unknown", %{conn: conn} do
      expect(ApiMock, :get_tweet, fn user_id ->
        assert user_id == "9999"
        error_response()
      end)

      conn = get(conn, "/9999")
      assert conn.params["user_id"] == "9999"
      # FallbackController is assumed to render the error tuple as a 404
      assert json_response(conn, 404)
    end
  end

  # Put the response here
  defp valid_json_response() do
    # omitted for room
    # returns some {:ok, json}
  end

  # Put the response here
  defp error_response() do
    # omitted for room
    # returns some {:error, json}
  end
end

This approach requires you to put your responses in the test file, which can still read from the fixtures. There's nothing wrong with this approach.
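
For example, the helpers above could load the downloaded fixtures like this (a rough sketch; the path and the use of Jason are just examples):

# Hypothetical helper: load a downloaded JSON fixture from the test directory.
defp valid_json_response() do
  tweet =
    "test/fixtures/tweet.json"
    |> File.read!()
    |> Jason.decode!()

  {:ok, tweet}
end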

Feedback

I like the idea of stubbing out all of the functions in a mock module like MyApp.Api.InMemory with real JSON responses.

I know this approach isn’t a new concept, but am I maybe thinking about MyApp.Api.InMemory usage incorrectly?

I'm looking for feedback on what could go wrong in the future if I approach it this way.


Way too much work. Let's not forget that tests are just a dev artifact, i.e. we are covering our own asses to make sure production doesn't blow up in our faces.

Have a stringent approach to quality but also be minimal. Identify the places in the code that give you the most anxiety and start with them.

Your approach is good. I like Mox a lot, but there's also Mock, which I found extremely convenient. And then there's also the idea of making your own mocks, as you seemed inclined to do.

All are valid.

I had projects where Mox was too much trouble because adding contracts (@behaviour) after the fact to huge business logic modules was not trivial. But it's pretty much perfect for the cases where you're not interested in troubleshooting a 3rd-party API and are OK with your worker crashing and retrying a minute later. That describes no trivial number of projects out there, by the way. For them, Mox is a perfect fit.

I used Mock when I wanted to preserve as much of the mocked library's original behaviour as possible and only stub out the parts where it hits the network. Mox doesn't do so well here: you can just ignore the 1000 lines of code in the mocked library below it, return {:ok, list_of_structs}, and be done with it. While that feels neat, it might ignore too much context and conceal a bug that you hit in production but can't replicate locally.
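
To make that concrete, this is roughly the style of partial stubbing I mean with Mock (a sketch; HTTPoison as the HTTP layer and the client module are stand-ins):

import Mock

test "only the network call is stubbed" do
  # Everything else in the client (URL building, JSON decoding, error
  # handling) still runs; only HTTPoison.get/1 is replaced.
  with_mock HTTPoison,
    [get: fn _url -> {:ok, %HTTPoison.Response{status_code: 200, body: "{}"}} end] do
    assert {:ok, _tweet} = MyApp.Api.ApiClient.get_tweet("100")
  end
end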

I only rolled my own mock once and I kind of regret it, but it was a very interesting exercise. It helped us back then because we wanted to ignore part of the library's code but not all of it. For such partial applications Mox could have also worked, by the way, but it would have meant rewriting parts of the library, and we didn't want to do that.

So here’s the advice most programmers hate hearing: ItDepends™.

(RE: Mox’s helpers, I like both expect and stub.)
