Mock is crashing process in umbrella project

Background

I have an umbrella project, where I run mix test from the root.
In one of the apps, I am mocking the File module using the Mock library.

Problem

The issue is that when I run mix test, the process dies with no error message to show:

Manager.Impl.Store.ReaderTest [test/unit/store/reader_test.exs]
  * test list_syndicates/1 returns the list of all known syndicates [L#496]** (EXIT from #PID<0.98.0>) killed

Code

The code of the test is as follows:

defmodule Manager.Impl.Store.ReaderTest do
  use ExUnit.Case, async: false

  alias Manager.Impl.Store.Reader

  import Mock

  setup do
    %{
      file_io: File,
      paths: [syndicates: ["syndicates.json"]]
    }
  end

  describe "lists syndicates" do
    test_with_mock "returns the list of all known syndicates", %{paths: paths} = deps, File, [],
      read: fn _filename -> {:ok, "[\"utc\"]"} end do
      # Act
      actual = Reader.list_syndicates(deps)

      expected = {:ok, [%Syndicate{name: "UTC", id: :utc, catalog: []}]}

      expected_path = Path.join(paths[:syndicates])

      # Assert
      assert actual == expected
      assert_called(File.read(expected_path))
    end
  end
end

In comparison, the following test (which does not use Mock) works just fine:

defmodule Manager.Impl.Store.ReaderTest do
  @moduledoc false

  use ExUnit.Case, async: false

  alias Manager.Impl.Store.Reader

  setup do
    %{
      paths: [syndicates: ["syndicates.json"]]
    }
  end

  describe "list_syndicates/1" do
    defmodule FileMockListSyndicates do
      @moduledoc false

      def read(path) do
        assert path == "syndicates.json"
        {:ok, "[\"utc\"]"}
      end
    end

    setup do
      %{
        file_io: Manager.Impl.Store.ReaderTest.FileMockListSyndicates
      }
    end

    test "returns the list of all known syndicates",
         %{paths: paths} = deps do
      # Act
      actual = Reader.list_syndicates(deps)
      expected = {:ok, [%Syndicate{name: "UTC", id: :utc, catalog: []}]}

      # Assert
      assert actual == expected
    end
  end
end

To me this is rather surprising. One alternative crashes the process with no error message, while the other makes everything work.

To me, this indicates one of three problems:

  1. A problem with the library Mock
  2. A problem with my setup of the library
  3. A problem with the test that causes the process to crash

I believe the second and third options to be the most probable, but without any information about the error, I can’t be sure. The process simply dies.

Question

Why is my process dying, and how can I fix it?

We ended up refactoring from the Mock library to Mox on my last project, though I’ve heard there is an even better library, with nicer abstractions, built on top of Mox.

From what I understand, Mock uses some kind of runtime infrastructure (maybe GenServers) to replace and validate the calls, which not only prevents async tests but can also cause timeouts, most probably coming from those underlying GenServers.

The main issue I have with Mox is that it forces me to define behaviours for everything. Consider my use case, where you need to mock File because you don’t want to read/write to the real file system.

According to my understanding, I would have to create a boilerplate behaviour for File, and then use it from there. This just adds more maintenance cost with no benefit, only for the sake of using a library.
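For concreteness, here is roughly what that boilerplate would look like (the module names Manager.FileIO and Manager.FileIO.Default are my own invention, and I mirror only read/1 for brevity):

```elixir
# A behaviour that exists purely to make File mockable with Mox.
defmodule Manager.FileIO do
  @callback read(Path.t()) :: {:ok, binary()} | {:error, File.posix()}
end

# The production implementation: pure, zero-logic delegation to File.
defmodule Manager.FileIO.Default do
  @behaviour Manager.FileIO

  @impl true
  defdelegate read(path), to: File
end
```

Mox would then generate the test double from this behaviour via `Mox.defmock(Manager.FileIOMock, for: Manager.FileIO)` – a whole extra module and mock just to call File.read/1.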

You are probably talking about Hammox.

I was afraid of this too. Perhaps it is a good idea to post this in the GitHub issues page of Mock as well.

Mock depends on :meck, which swaps out the complete module within the runtime – as in, it makes the VM unload the existing module and load the module with the mock code. That architecture cannot support concurrent tests. The best improvement to hope for would be better errors or disclaimers.

What you call an issue is, in my opinion, a good driver for well-rounded mocking.

There’s the guideline of “don’t mock what you don’t own”. You don’t own the API of File – the core team does. Elixir is good about avoiding backwards-incompatible changes, but the team could always add new return values and such. You might not become aware of those additional return values, so you won’t be testing for them, which might break your code in production while even well-set-up tests – working on an incomplete assumption of the interface – would suggest everything is fine.

Instead you want to create your own interface (in the form of a behaviour) around the actual use cases you have for interacting with the filesystem. Let’s call it MyApp.FileStorage. Then you own the interface between your code and the underlying implementation using File’s API (MyApp.FileStorage.LocalFiles), as well as the implementation you use in tests (MyApp.FileStorage.Mock).

Changes in File’s APIs then no longer affect your mocked interface MyApp.FileStorage. They only affect the implementation MyApp.FileStorage.LocalFiles, which you hopefully tested separately without mocks to ensure it works correctly. Those tests hopefully fail before you push to production. All tests using the mock implementation would be unaffected.

One side effect of this approach is that your interface might become smaller. Instead of the whole File API, you’ll likely shrink the mocked interface to the few select things your codebase actually needs, potentially shrinking the number of possible parameters and return values as well. Complex tasks that require multiple calls to the File API might become a single callback on your behaviour, again simplifying the interface and reducing the work needed to mock it.
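A minimal sketch of what that could look like, using the module names above (the callback shape is my own assumption, and decoding of the file content is elided):

```elixir
# The interface you own, shaped around your use cases, not around File's API.
defmodule MyApp.FileStorage do
  @callback list_syndicates() :: {:ok, [String.t()]} | {:error, term()}
end

# Production implementation: the only module exposed to changes in File's API.
defmodule MyApp.FileStorage.LocalFiles do
  @behaviour MyApp.FileStorage

  @impl true
  def list_syndicates do
    with {:ok, content} <- File.read("syndicates.json") do
      # decode `content` into a list of syndicate names here
      {:ok, [content]}
    end
  end
end

# Test implementation: unaffected by anything File does.
defmodule MyApp.FileStorage.Mock do
  @behaviour MyApp.FileStorage

  @impl true
  def list_syndicates, do: {:ok, ["utc"]}
end
```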

That is fine by me. I don’t need things to run in parallel or to be faster. At this point, I just need them to work. Disclaimers and error messages would be of great help here.

Let’s assume for the sake of this conversation, that the module in question is indeed MyApp.FileStorage.

I am testing that module. Now the argument I get from people with this point of view usually is:

To test MyApp.FileStorage (which uses File) you need to do tests that use the real thing only!

And that is fine, if you can do it. Imagine however that File actually makes HTTP requests, or that you have to pay for each call, or that you have a limited number of calls you can do, etc … What do you do now?

In the article José wrote about Mox (Mocks and explicit contracts - Dashbit Blog), he recommends a dummy to solve this problem, which is basically mocking but more complex (he suggests the Bypass library for this purpose).

Please do note that I am not stating that your opinion is invalid. I very much agree with your opinion to a certain extent. However, in this specific use case, I don’t think it applies.

Yes, I incur the typical issue of having tests pass while the application does not work. But given that I cannot use the real thing, I genuinely think there is no way around the problem.

So my focus is on “How can I get this working”.

I also want to state I very much appreciate your contributions to my topics, so please do not feel discouraged if sometimes I deviate from them. They still add a lot of value.

So you cannot test against the real thing – that’s a real and valid limitation. But even if you cannot have that specific benefit of testing the production implementation (which you couldn’t test anyway), you still get all the benefits of the mocked interface not mapping directly to that external HTTP API.

The only thing that changes in that HTTP API can break is your implementation of your own interface on top of said API. They cannot, however, change your interface itself, and to a certain degree your mock can get away with not being affected either.

There are a few ways your external HTTP API can change:

  1. Endpoints change, but there are alternatives – potentially requiring multiple requests – still using the same data as before, ultimately returning all the data you need.
    1. HTTP API is the interface → You need to adjust all code/tests/mocks interacting with that externally supplied interface
    2. You have your own interface → You adjust the implementation with the http API only.
  2. Endpoints change and either need more data than before or no longer return data you depended on before
    1. HTTP API is the interface → You need to adjust all code/tests/mocks interacting with that externally supplied interface
    2. You have your own interface → You adjust your interface by the minimal set of changes the HTTP API changes force onto it (which might be just a subset of those changes)
  3. Stateful interaction between multiple API calls change
    1. HTTP API is the interface → You need to adjust all code/tests/mocks interacting with that externally supplied interface
    2. You have your own interface → If those changes are contained within a single callback of your interface, then only the implementation backed by the HTTP API might need to change. If they affect the interaction between multiple callbacks, then any code using your interface might need adjustment. You might, however, be able to adjust only the HTTP-backed implementation, introducing your own stateful handling to correct for the changes relative to how your interface is used.

Actual real life changes might be a combination of those cases.

Having your own interface is therefore an effective means of limiting the effects external changes can have on your system. That’s essentially the idea behind anti-corruption layers, which you might read about in hexagonal or onion architecture, though those usually explain it with regard to data rather than computation.

Here is, I believe, where our opinions diverge.
I can use automated tests, with the caveat that they will not be reliable 100% of the time.

I’d rather have tests that warn me of something wrong in my code 90% of the time than have nothing at all.

This is not perfect and opens a pathway to false positives (tests passing while the application does not work because an external API changed), as you correctly pointed out.

However, even though I am familiar with API changes and how painful they can be, I find that, in the projects I am currently involved with, the external APIs don’t change at a rate that would make me discard this approach altogether. They do change every now and then, but not so often that I would rather not incur the risk of false positives.

Your experience may differ, you may find yourself in a position of constant and unmitigated change. In that scenario, I would also likely adopt your strategy.

This is also why I am trying to find a fix for this issue using Mock. I find the external API I am using (here represented by File) to be stable enough.

An excellent summary that I agree with!
Will surely like this post, I can see it having a lot of use for future discussions!

Quite an interesting take on a concept I am most fond of !

I certainly have made my point and I’ll stop adding comments in that regard going forward. Though I’d argue that my points would aid in having more code under control, and therefore testable as much as possible, rather than less. The gaps will be exactly the impossible-to-cover gaps one has with any approach. So I think we do align on the goal.

At the time of this writing, I have tested all major mocking libraries (Mock, Mox and Mimic).
Both Mock and Mimic rely on :meck underneath. Rather surprisingly, I get the exact same issue with Mimic as I did with Mock, i.e., the process crashes without warning.

This leads me to the only logical conclusion: the problem I am facing is related to the underlying system that serves as a base for both libraries, :meck. The issue does not happen with Mox.

I have therefore decided not to use Mock (nor Mimic) for the application; I am instead injecting the dependencies directly into the functions that need them. This last approach is very lightweight and allows for async: true, which gives a noticeable speed increase when running mix test.

Because I am only using a very small portion of the external system’s API (2-3 functions), this fits my needs well. However, if I were to use all the functions from said API (let’s say 20), this solution would be rather difficult to manage.

I don’t expect the affected modules to evolve in such a direction, so for the time being, I am rather happy.
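In essence, each function receives its collaborators in a deps argument, with the real modules as defaults. A minimal sketch of the production side (the module name is illustrative):

```elixir
# Direct dependency injection: production callers use the default deps,
# tests pass a map with a stub function instead. No library involved,
# and nothing global is replaced, so async: true stays available.
defmodule Reader do
  def read_syndicates(path, deps \\ %{read: &File.read/1}) do
    deps.read.(path)
  end
end
```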

Here is a sample test for those searching for inspiration (using File as an example):

setup do
  %{
    paths: [products: ["products.json"]]
  }
end

test "returns list of available products from given syndicate", %{paths: paths} = deps do
  read_fn = fn filename ->
    assert filename == Path.join(paths[:products])

    {:ok, "[{\"name\": \"Bananas\"}]"}
  end

  deps = Map.put(deps, :io, %{read: read_fn})
  syndicate = Syndicate.new(name: "UTC", id: :utc)

  assert FileSystem.list_products(syndicate, deps) ==
           {:ok, [Product.new(%{"name" => "Bananas"})]}
end