Testing functions that call 3rd party APIs

I am building an app that uses Stripe for payments. I’ve read people’s opinions on mocking in Elixir, but I also know that making actual API calls in your test suite slows your tests down and eats into any rate limit the API imposes. Is testing there mainly to ensure that refactoring doesn’t break your app? Or is it there to ensure that your app and the API work together? For example, I could create a mock server that acts as the API in the test environment. The mock server would respond on the same routes as the API, but with mocked responses. If the API itself ever changes, the tests won’t break; but if my application changes in a way that breaks the current API integration, they would catch it.

So the question is: how do most people write tests for modules that call other APIs? My other thought was to create two separate test modules. One would carry a specific @tag so it is skipped during a plain mix test, but would actually send requests using our test secret key for Stripe to verify proper integration. The other would use a mock server that runs every time we run our tests.

Please let me know your opinion.

I work on a project that is very much an integration layer between many 3rd party APIs (20-ish including stripe). All of our automated tests go through mocks in some form or fashion. Depending on how “old” a particular integration is we might use the hex package bypass in some cases, but we also have a home-built global mocking framework that prevents async tests and is based upon indirection in the Application.env. Lately we’ve moved to a struct-based command/token approach which I have discussed a bit publicly (https://github.com/gvaughn/quest).

In any case, we do not have automated tests that actually hit 3rd party APIs. We’re prepared to deal with a temporary production outage if a 3rd party changes their responses significantly. But if that is a high concern of yours, yes, I’d use an ExUnit @tag to manage when it is executed.


There is no silver bullet. I have a very personal take on testing; some people agree, others don’t. What I can say in your case is that you shouldn’t make your application dependent on Stripe directly: you should have a connector, a contract, and then have the Stripe client implement that contract. At least, this is the original idea José Valim had about testing in Elixir: http://blog.plataformatec.com.br/2015/10/mocks-and-explicit-contracts/

Thus, I believe the community would suggest something along the lines of:

  1. Define a contract
  2. Use a mock that obeys the contract in your tests
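As a minimal sketch of that contract-plus-mock approach (the module names and the `create_charge/1` callback are mine, not from the article):

```elixir
# The contract: any payments client must implement this behaviour.
defmodule MyApp.PaymentsClient do
  @callback create_charge(map()) :: {:ok, map()} | {:error, term()}
end

# The real implementation satisfies the contract and talks to Stripe.
defmodule MyApp.StripeClient do
  @behaviour MyApp.PaymentsClient

  @impl true
  def create_charge(params) do
    # the real HTTP call to Stripe would go here
    {:ok, %{"id" => "ch_live", "params" => params}}
  end
end

# The mock obeys the same contract and is what tests exercise.
defmodule MyApp.FakePaymentsClient do
  @behaviour MyApp.PaymentsClient

  @impl true
  def create_charge(_params), do: {:ok, %{"id" => "ch_fake"}}
end
```

The compiler warns if either module drifts away from the behaviour’s callbacks, which is what keeps the mock honest.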

Me personally, I prefer constructor dependency injection. When I create my entities in Elixir, I simply pass them the set of functions they are going to use (I inject the dependencies). Then when I need to test, I simply pass dummy functions as dependencies, or better, I pass stubs and spies to make sure I am performing the external calls as I am supposed to.

This goes in line with behavioral testing; some people don’t like it because it couples your tests to your implementation a little bit more, but I find this is the best way to ensure every unit works as intended. As usual, YMMV.
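A minimal sketch of that constructor-style injection with a spy (all names here are illustrative, not from the post):

```elixir
defmodule Checkout do
  defstruct [:charge_fn]

  # Inject the dependency when the struct is built.
  def new(charge_fn), do: %Checkout{charge_fn: charge_fn}

  def pay(%Checkout{charge_fn: charge_fn}, params), do: charge_fn.(params)
end

defmodule CheckoutTest do
  use ExUnit.Case, async: true

  test "pay/2 calls the injected charge function" do
    me = self()

    # A spy: records the call so the test can assert on it, then stubs a result.
    spy = fn params ->
      send(me, {:charged, params})
      {:ok, %{}}
    end

    checkout = Checkout.new(spy)
    assert {:ok, _} = Checkout.pay(checkout, amount: 100)
    assert_received {:charged, [amount: 100]}
  end
end
```

Because no global state is touched, these tests can safely run `async: true`.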


I’ll add a third bullet point to your list:

3. Test implementations against the contract (if possible). This might include mock implementations as well.

If both Stripe and a mock implementation behave the same, one can confidently use the mock for all tests that don’t specifically exercise the behavioural contract of the real implementation.
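One way to express that third point in ExUnit is a shared contract test that can be applied to any implementation (a sketch; the macro and module names are mine):

```elixir
defmodule PaymentsContractTest do
  # Shared assertions that every implementation of the contract must pass.
  defmacro __using__(impl: impl) do
    quote do
      test "create_charge/1 returns an ok or error tuple" do
        result = unquote(impl).create_charge(%{amount: 100})
        assert match?({:ok, _}, result) or match?({:error, _}, result)
      end
    end
  end
end

# The mock runs on every `mix test`:
defmodule FakePaymentsClientTest do
  use ExUnit.Case, async: true
  use PaymentsContractTest, impl: MyApp.FakePaymentsClient
end

# The real client gets the exact same suite, gated behind a tag:
defmodule StripeClientTest do
  use ExUnit.Case
  @moduletag :external
  use PaymentsContractTest, impl: MyApp.StripeClient
end
```

If the real client ever stops honoring the contract, the tagged run catches it, and the mock is proven equivalent by construction.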

This is known as the Witness pattern in Functional languages. :slight_smile:

Can you link me to some articles about that? I only found links about the gospel and religion on Google :stuck_out_tongue:

I took this from an article written by Martin Fowler, where he defines 3 types of dependency injection. Constructor injection is one of them:

Totally unrelated but do you happen to have that Gateway module open sourced somewhere? I’m also building a system that will interface with multiple payment gateways and would be curious to see how you implemented it.

Lol, yeah Google isn’t very good about searching for programming stuff, DuckDuckGo is actually a LOT better for searching for programming related terms, it even has whole search modes for it that Google doesn’t have. :slight_smile:

The most basic, direct, and unreadable thing would be from Haskell’s wiki:

Even OCaml has an example of them in the GADT section of the official spec: https://caml.inria.fr/pub/docs/manual-ocaml/extn.html#sec256

But in an Elixir world it would be like:

def blah(value, witness), do: witness.(value)

Or in a module form:

def blah(value, witness), do: witness.bloop(value)

In essence a witness is just a type or action that depends on a type.

Typeclasses, like in Haskell, are a non-generic form of witnesses. Like take this function in Haskell:

add :: Num a => a -> a -> a
add l r = l + r

The Num a is a typeclass constraint: if you call this function like add 1 2 it returns 3, and if you call it like add 1.0 2.0 it returns 3.0, for any type that fulfills the typeclass Num a. However, look at the =>; conceptually that arrow means ‘auto-fill what comes before’. Let’s turn it back into a ->:

add :: Num a -> a -> a -> a
add w l r = (+) w l r

Now you have to call it like add (Num Int) 1 2 to return 3, that Num Int is the witness, the typeclass is essentially a module reified on that specific type based on the typeclass definition of it, and it passes that module in that location (it’s actually a record in Haskell, but whatever).

In other words, a witness just allows you pass an action over some other type into a function. Haskell’s typeclasses make it “baked in”, in that you can define a witness globally and it can be used globally, but you can’t change it, it is what it is defined as, where if it is something you have to pass in manually, as the OCaml ecosystem does, then you can change it at will, which makes, for example, mocking it absolutely trivial.

In essence, passing a module or a set of functions into something to operate on a value (whether the value is also passed in or exists entirely internally, so long as it is specified and handled by those functions) makes those functions/that module a witness.

In Elixir, I keep my work program very segmented as lots of small dependencies, but they all share the main app’s Repo instead of their own: I define the MyServer.Repo module in the global config for each dependency, and they access it via Application.get_env/2 each time they need it (via a default option on function args, but close enough). In other words, I am passing in a witness, the Repo, to ‘witness’ or operate over the data being processed, i.e. the schemas and changesets. This is a pattern I use, probably excessively, because of my OCaml history (it’s a super common pattern there), and consequently it makes it so easy to ‘mock’ things without needing to grab a code generator like Mox.

As an addition, I heavily follow the pattern of almost every function taking a set of optional named arguments, like:

def blah_something(a, b, c, opts) do
  repo = get_repo(opts)
  # Use stuff like `repo.insert/1`
end

Where get_repo is essentially this, defined somewhere globally included; in its simplest form:

Application.get_env(...) || throw "Repo not set for #{...}"

Or in its fuller form:

def get_repo(params, opts \\ []) do
  key = opts[:key] || :repo
  cond do
    is_atom(params[key]) -> params[key] # This actually checks the basic structure of the module
    Application.get_env(...) -> Application.get_env(...)
    # other stuff
    :else ->
      Logger.error "blah"
      raise %NoRepoException{}
  end
end
So I can define a repo globally for a dependency, or I can redefine it on a function-by-function basis, etc… Regardless, it’s just a witness being passed in ‘somehow’ that the functions use to perform work. It’s an exceptionally old pattern in ML languages. :slight_smile:


I don’t. What we use is quite project specific, plus if I were writing it today, I’d do it differently.

It’s a GenServer with a custom module we use to avoid some boilerplate. We use an Erlang library called throttle, which is a set of mnesia-based counters, so we can manage our outgoing rate limits. I like that and would reuse it.

We also use poolboy so we can limit the outgoing concurrency per service. If/when I revisit it, I’ll likely look at either parent or GenStage instead to manage this feature. This may be unnecessary in many applications, but it is baked into all of our Gateways.

The core client function exposed is SomeServiceGateway.dispatch(%Quest{} = q) which checks rate and concurrency limits, then ultimately calls Quest.dispatch(q) and logs errors and uses statix to send stats to Datadog. There’s retry logic for some status codes in some cases too.

I don’t want to hijack this thread any more, so feel free to DM or start a new thread if you want to discuss more.

This post is a bit dated, but holy smokes, this problem never gets old (full disclosure: perhaps not everyone shares my zeal for tests).

To date, the best pattern I have found to really (really) test an app which relies on 3rd party services is simply to make good use of Elixir’s Application.get_env/2 and Application.put_env/3. It ends up working a lot like “service locators” in other languages (but I confess, the cleanest implementation I’ve seen of this personally was in PHP, either with the unfortunately named Pimple package or in the popular Laravel framework).

In a nutshell, you might take code like this:

StripeModule.make_api_call([foo: :bar])

and refactor it to something like this:

stripe_module = Application.get_env(:my_app, :stripe_module, StripeModule)
stripe_module.make_api_call([foo: :bar])

You should probably avoid Application.get_env/3 (the variant with a default) and instead explicitly list these 3rd party modules in your config somewhere, because that makes the following pattern in your tests easier. In your tests, you can do this:

  setup do
    # make sure you have a module set for :stripe_module in your config!
    stripe_module = Application.get_env(:my_app, :stripe_module)

    on_exit(fn ->
      Application.put_env(:my_app, :stripe_module, stripe_module)
    end)
  end

This setup ensures the normally configured value is always put back in place after a test. Why? Because in your tests, you can override those modules via Application.put_env/3.

The pattern I use is that I create some “mock” modules with the same function names as the original module (yes, this is kind of an interface, but since they are 3rd party modules, there’s no guarantee that they defined a behaviour). The mocked functions return values I captured when I was testing it for real. I might have a couple mocks for each module: one for each response that I need to test (e.g. success, failure, some other failure, etc).
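For example, such mock modules might look like this (the return shapes here are invented for illustration; in practice they would be the captured real responses):

```elixir
defmodule MyApp.Mocks.StripeModuleOk do
  # Same function name/arity as the real module; canned success response.
  def make_api_call(_params), do: {:ok, %{"id" => "ch_123", "status" => "succeeded"}}
end

defmodule MyApp.Mocks.StripeModuleError1 do
  # Canned failure, e.g. an invalid expiry date.
  def make_api_call(_params), do: {:error, %{"code" => "invalid_expiry_year"}}
end
```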

My tests might end up looking something like this:

test "returns Stripe error when CC expiry date invalid" do
  Application.put_env(:my_app, :stripe_module, MyApp.Mocks.StripeModuleError1)
  assert {:error, _} = MyApp.function_which_uses_stripe_module()
end

test "returns some other Stripe error when yada yada" do
  Application.put_env(:my_app, :stripe_module, MyApp.Mocks.OtherStripeModuleError)
  assert {:error, _} = MyApp.function_which_uses_stripe_module()
end

test "returns ok for proper input" do
  Application.put_env(:my_app, :stripe_module, MyApp.Mocks.StripeModuleOk)
  assert {:ok, _} = MyApp.function_which_uses_stripe_module()
end

Hopefully that strategy makes sense. An Elixir application does have some state with its config, and you can manipulate it via Application.put_env/3.

I still test the real 3rd party module, but I use a tag that I’ve configured to skip, e.g. @tag :external – my test_helper.exs has something like this:

ExUnit.start(exclude: [:skip, :external])

And my test might look like:

@tag :external
test "calls stripe API to something something" do
  assert {:error, _} = MyApp.function_which_uses_stripe_module()
end

@fireproofsocks That’s exactly the pattern I use. I’m a huge fan of ‘naming’ call modules in configs, it makes it so easy to test or swap out or wrap or whatever as needed.

I presume you use Mox.stub_with/2, though, as that’s async, and twiddling Application.put_env is not concurrency-safe.

No Mox, and I don’t mess with the environment for module names at run-time; I set them in the configuration files. If I want a module to be changeable at runtime, then I allow it to be passed in the options of individual calls instead. I have a whole delegation pattern I use for this. :slight_smile:
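That per-call option pattern might look roughly like this (a sketch under my own naming, not his actual code):

```elixir
defmodule MyApp.Billing do
  # Default to the real module; tests pass a mock via opts instead,
  # which avoids touching the (non-concurrency-safe) application env.
  @default_stripe StripeModule

  def charge(params, opts \\ []) do
    stripe = Keyword.get(opts, :stripe_module, @default_stripe)
    stripe.make_api_call(params)
  end
end

# Production call:
#   MyApp.Billing.charge(foo: :bar)
# Async-safe test call:
#   MyApp.Billing.charge([foo: :bar], stripe_module: MyApp.Mocks.StripeModuleOk)
```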

Ah yes. Am I wrong that this is the pattern José wanted us to use, and then everyone was uncomfortable passing modules around, so Mox was created?

Not really. Mox handles the mocking part. How the mock gets used by your code is up to you to decide, and as you said, the app env is not safe for concurrent tests, while passing the mock into calls as a parameter works.

No, I was asking if the system being used is this one: "In fact, passing the dependency as argument is much simpler and should be preferred over relying on configuration files and Application.get_env/3" from the original mocks article: http://blog.plataformatec.com.br/2015/10/mocks-and-explicit-contracts/ that came out before Mox.

Thanks a lot, this approach seems simple and worked fine on my app too.

I’ve been using this lib in some projects: https://github.com/parroty/exvcr (an HTTP request/response recording library for Elixir, inspired by VCR). It makes it really easy to test against real scenarios by recording and replaying real responses/payloads.
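A typical ExVCR test records the first real response to a cassette file and replays it on subsequent runs; roughly (the test body and cassette name are illustrative):

```elixir
defmodule MyApp.StripeTest do
  use ExUnit.Case, async: false
  # Pick the adapter matching your HTTP client; Hackney is one option.
  use ExVCR.Mock, adapter: ExVCR.Adapter.Hackney

  test "creates a charge" do
    # First run hits the real API and records fixture/vcr_cassettes/stripe_charge.json;
    # later runs replay the recorded response without touching the network.
    use_cassette "stripe_charge" do
      assert {:ok, _} = MyApp.function_which_uses_stripe_module()
    end
  end
end
```

One caveat worth knowing: recorded cassettes can go stale if the real API changes, so they need occasional re-recording, much like the @tag :external approach above.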