Conflicts between Mox, behaviours and optional arguments

I have a behaviour with lots of functions. Each takes an argument and an optional keyword list of options. And I found myself in this weird situation where testing with Mox produces some unsatisfying results.

A reduced version of the problem:

defmodule MyBehaviour do
  @callback greet(name :: String.t(), greeting :: String.t()) :: String.t()
end

defmodule MyModule do
  @behaviour MyBehaviour

  @impl true
  def greet(name, greeting \\ "Hello") do
    "#{greeting}, #{name}!"
  end
end

And a test like this:

import Mox

defmock(MyMock, for: MyBehaviour)

MyMock
|> expect(:greet, fn name -> "Hello #{name}" end)

result = MyMock.greet("World")

It fails because the mock does not know any function of arity 1 called greet.

** (ArgumentError) unknown function greet/1 for mock MyMock
    (mox 1.2.0) lib/mox.ex:681: Mox.add_expectation!/4
    (mox 1.2.0) lib/mox.ex:549: Mox.expect/4

If I test instead

import Mox

defmock(MyMock, for: MyBehaviour)

MyMock
|> expect(:greet, fn name, _ -> "Hello #{name}" end)

result = MyMock.greet("World")

It obviously fails, because the function with one argument is never mocked.

** (UndefinedFunctionError) function MyMock.greet/1 is undefined or private. Did you mean:

      * greet/2

    MyMock.greet("World")

Which leaves me with two choices (I think):

  1. Duplicating every callback with and without the optional arguments (sad when there are a lot of callbacks)
  2. Dropping the “optional” from the arguments and always calling the two-arity function.

I chose option 2 without much enthusiasm. Has anyone in the same situation chosen a different solution, or found a better way of dealing with it?


You’re essentially running into the fact that behaviours do not have optional parameters; there are just fixed-arity callbacks. Mox, being driven by behaviours, inherits that. But you can always place an interface with optional parameters in front of code that calls only the full-arity behaviour callback implementations. That’s how Ecto does it, for example.

E.g.

defmodule Greeter do
  def greet(name, greeting \\ "Hello") do
    Application.get_env(:myapp, __MODULE__).greet(name, greeting)
  end
end
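Here is a runnable sketch of that facade idea. The module and app names (`:myapp`, `GreeterImpl`) are made up for illustration; in a real project you would set the env in `config/test.exs` and point it at a Mox mock instead of a plain module.

```elixir
defmodule GreeterImpl do
  # Stand-in for the real (or mocked) implementation of the behaviour.
  def greet(name, greeting), do: "#{greeting}, #{name}!"
end

defmodule Greeter do
  # The public API keeps the optional argument...
  def greet(name, greeting \\ "Hello") do
    # ...while the swappable implementation only ever sees the full arity.
    impl().greet(name, greeting)
  end

  defp impl, do: Application.get_env(:myapp, __MODULE__, GreeterImpl)
end

IO.puts(Greeter.greet("World"))
IO.puts(Greeter.greet("World", "Hi"))
```

Callers get the default value from the facade, and the behaviour (and therefore the mock) only ever needs the single full-arity callback.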

I guessed there was not much more to find here.

Any thoughts on why callbacks were never implemented with optional parameters ?


Optional parameters are a compile-time construct of Elixir. Behaviours are an Erlang-level feature, which existed even before Elixir did. Generally I’d keep the interface to multiple implementations as simple as possible and handle the optional stuff before or after that interface.


Expanding on the above, default args are not actually “real”. They are just syntactic sugar for additional function heads.

I suppose that sugar could be extended to callbacks, though it might be a bit confusing to generate multiple callback “heads”. The abstraction starts to leak.

And who defines the default arg’s value? The callback, or the implementation? It’s messy.
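To illustrate that point, here is roughly what the sugar expands to (a hand-written desugaring, not actual compiler output; module names are invented):

```elixir
defmodule Sugared do
  # One definition with a default argument...
  def greet(name, greeting \\ "Hello"), do: "#{greeting}, #{name}!"
end

defmodule Desugared do
  # ...is roughly equivalent to two explicit heads: greet/1 and greet/2.
  def greet(name), do: greet(name, "Hello")
  def greet(name, greeting), do: "#{greeting}, #{name}!"
end

IO.puts(Sugared.greet("World"))
IO.puts(Desugared.greet("World"))
```

Both modules export a `greet/1` and a `greet/2`, which is exactly why the behaviour’s single `greet/2` callback cannot describe the one-argument call.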


You can try GitHub - edgurgel/mimic: A mocking library for Elixir
There is no mess with creating behaviours only for testing purposes :smiley:


Would you be able to define a behaviour like this?

defmodule Behaviour do
  # Invalid syntax
  @callback greeting(name :: binary, greeting \\ "hello" :: binary)
end

Even if you could, there is no built-in mechanism to automatically pass the default arguments into the implementation modules

defmodule Impl do
  @behaviour Behaviour
  @impl true
  def greeting(name, greeting), do: "#{greeting}, #{name}!"
end

Behaviours themselves are never called; your code calls the implementation module directly:

Impl.greeting("World")

So the Impl module must have a greeting/1 function.

You can of course define different functions in the behaviour

defmodule Behaviour do
  @callback greeting(name :: binary) :: binary
  @callback greeting(name :: binary, greeting :: binary) :: binary
end

And who defines the default arg’s value? The callback, or the implementation? It’s messy.

Yes, and there is no mechanism to define default values from outside a module, so it must be the implementation module.

A pattern I see a lot is using the behaviour as the API:

defmodule Behaviour do  
  @callback greeting(name :: binary, greeting :: binary) :: binary

  def greeting(module, name, greeting \\ "Hello") do
    module.greeting(name, greeting)
  end
end
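A self-contained version of that pattern, with a hypothetical implementation module (`EnglishGreeter` is made up here) to show the dispatch:

```elixir
defmodule GreeterBehaviour do
  @callback greeting(name :: binary, greeting :: binary) :: binary

  # The behaviour module doubles as the API and owns the default value,
  # so implementations only ever define the full-arity callback.
  def greeting(module, name, greeting \\ "Hello") do
    module.greeting(name, greeting)
  end
end

defmodule EnglishGreeter do
  @behaviour GreeterBehaviour

  @impl true
  def greeting(name, greeting), do: "#{greeting}, #{name}!"
end

IO.puts(GreeterBehaviour.greeting(EnglishGreeter, "World"))
```

In tests, `module` can be a Mox mock for the same behaviour, while production code passes the real implementation.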

In my humble opinion, this sentence puts the cart before the horse. Just as one never tests private functions, one arguably should not mock anything but behaviours. Mocking arbitrary functions makes zero sense; it simply reveals a design flaw that would definitely surface later.

Interesting point!
Let me explain my use case:
I often use it for unit tests where I want to focus purely on the logic inside a single module. For example, I don’t want to spin up the processes that call HTTP services or the database - those dependencies would each need their own mocks every time I test a simple function like calc/0.

def calc() do
  # requires multiple processes
  # that perform (via these processes) some HTTP requests and database queries
  points = Foo.points()
  Enum.sum(points)
end

# A simple unit test example that doesn’t require mocking up half the world
# or adding a behaviour for the Foo module just for testing purposes.

use Mimic
Mimic.copy(Foo)

test "... calc the sum of the user’s points" do
  expect(Foo, :points, fn -> [1, 1, 1] end)
  assert 3 == Boo.calc()
end

For that unit test, I’d rather just mock the return value of Foo.points/0 instead of setting up deeply nested mocks for everything under it, or adding a new behaviour solely for test purposes.
Of course, those dependent processes are already covered by an integration test validating that everything works together :smiley:
So in short: Mimic is convenient when I want to test simple logic without introducing extra behaviours or large amounts of test setup with Mox for deeply-nested behaviour-based (HTTP/DB etc.) dependencies.
Let me know if I’m missing something :smiley:


Well, I might not be the best person to discuss testing techniques with in the first place. I was always loudly against “tests for the sake of tests,” 100% coverage, and whatnot on that matter.

I have a strong opinion that while unit tests are great during development stage, they are a burden for both regression and refactoring. I never need to constantly validate that calc/0 returns 3. It just makes no sense, because if it did, I’d write it as def calc, do: 3.

What makes sense to test is …ahem… how your app behaves. Which is literally what the Erlang behaviour paradigm covers. Your Foo.points/0 should not be nailed down, because tomorrow you’ll discover a better, faster, or cleaner way to calculate points, and you surely won’t want to overwrite the existing implementation, at least during development and for benchmarks.

Every time I find myself unit-testing a function in the wild, I ask why, for God’s sake, I want to bring a fragile test, depending on my current implementation, into the test suite. The answer is always either “Huh, that’s a behaviour then,” or “I should make this function private and test its implications.”


Me too! :smiley:

What makes sense to test is …ahem… how your app behaves.

100% agree!

I never need to constantly validate that calc/0 returns 3

That unit test checks whether the calculations are correct. You can refactor the calc function later, but the test will stay the same: for the same data, calc should return the same output. And checking that the calculations are correct seems worth doing in a unit test, I guess :thinking:

Maybe a more abstract example would illustrate it better, though it still might be a bit lacking - sorry.

def calc() do
  # requires multiple processes
  # that perform (via these processes) some HTTP requests and database queries
  data = Boo.data()
  # calc algorithm
  ...
end

test "for dataX should return respX" do
  dataX = ...
  expect(Boo, :data, fn -> dataX end)
  assert respX == Boo.calc()
end

why, for God’s sake I want to bring a fragile test,

In my experience, clean, small, fast, async unit tests are far less fragile than test suites that require dozens of deeply-nested behaviour mocks just to verify some calculation logic/output :thinking:
But I might be wrong.
I see you don’t like unit tests, and you have every right to :ok_hand:

Absolutely. John Hughes and other smart guys even invented property-based tests for that. Andrea brought it to the Elixir world with stream_data. Validating one outcome for one input would not do that, though.

That’s a false dichotomy. I am not advocating for running a spaghetti monster. I am saying that if you need or want to test a function for its outcome, that function always deserves to be part of some behaviour. That way you keep your code organized, as opposed to a bunch of isolated functions stuck to their impls.

Maybe it’s just my perspective, but I usually introduce a behaviour only when I need more than one implementation of the same “behavior”. Adding a behaviour solely to enable mocking in tests has always felt a bit off to me.

e.g.

defmodule Boo do
  use GenServer

  def get_market_data() do
    ...
  end
end

vs

defmodule BooBehaviour do
  @callback get_market_data() :: term()
end

defmodule Boo do
  use GenServer
  @behaviour BooBehaviour

  @impl true
  def get_market_data() do
    ...
  end
end

Reading our discussion I think there’s also a chance we’re talking about two different things :sweat_smile:

Could you walk me through what you would do if I want to test a piece of logic that depends on Boo.get_market_data/0, but I don’t want to start a process or mock all the underlying HTTP/DB calls? I’d just like to mock the return value of the get_market_data() function to keep the test simple and async. How would you approach that? Adding a behaviour to Boo?


I think it’s an architecture problem.

I would fix it like this:

def calc(data) do
  # ...
end

assert 3 = calc([1, 1, 1])
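Filled in as a runnable sketch (the `Calc` module name is made up; the body just sums, matching the earlier `calc/0` example):

```elixir
defmodule Calc do
  # Pure function: data in, result out; no processes or mocks needed.
  def calc(data), do: Enum.sum(data)
end

3 = Calc.calc([1, 1, 1])
```

Because the data is now a parameter, the test exercises only the calculation and stays async-safe for free.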

Yeah, the example isn’t ideal - we can move the logic out to a separate function and just pass the data to it - I know.
I just meant a scenario where you have logic that depends on other processes making HTTP/DB calls, but I didn’t want to introduce all that complexity into a simple forum example - that’s why I simplified it to just a calc function :smiley:

I always find this argument so funny, given that a test implementation is literally an additional implementation.

As for the “fetching data problem”: why does this need to happen inline in the function doing the calculation? As @lud pointed out, making data fetching and calculations independent makes the whole problem of testability go away while imo aiding the composability of the business logic. Boundaries is the long-form version of that idea, which goes into more detail.

I mean, how do you test a function that relies on data from a process without spawning that process, and without moving the calculations/checks into a do_function every time?
e.g.
again - this is just a very simple example

def foo() do
  case Boo.get_market_data() do
    ...
  end
end

vs

def foo() do
  data = Boo.get_market_data()
  do_foo(data)
end

def do_foo(data) do
  case data do
    ...
  end
end
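For what it’s worth, here is a concrete version of that split, with a stand-in `Boo` module (the `{:ok, prices}` return shape and the `Market` module name are invented for the example):

```elixir
defmodule Boo do
  # Stand-in for the process-backed call; the return shape is assumed.
  def get_market_data, do: {:ok, [3, 1, 2]}
end

defmodule Market do
  # Thin impure wrapper: fetch, then delegate to the pure part.
  def foo do
    data = Boo.get_market_data()
    do_foo(data)
  end

  # Pure and unit-testable without spawning any process.
  def do_foo({:ok, prices}), do: {:ok, Enum.max(prices)}
  def do_foo({:error, reason}), do: {:error, reason}
end

IO.inspect(Market.foo())
```

With this shape, `do_foo/1` can be tested directly with literal data, and only `foo/0` needs any mocking at all.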

Do we really need to introduce behaviours in such cases and use Mox instead of Mimic?
Sorry, but I don’t get it :folded_hands:


I always find this argument so funny given a test implementation is literally an additional implementation.

Adding a behaviour solely to enable mocking in tests has always felt a bit off to me.
But maybe I’m missing something - could you explain why adding a behaviour solely for testing purposes is better than just stubbing/expecting the implementation in a test via Mimic?

To me the implementation details do not matter – you could use Mox, Mimic or whatever. They all eventually do the same thing: change the code that gets executed. Once you do that, you by definition have at least two implementations for the function call being made. Personally, I see the need for this as a fact informing design. Often code can be restructured to not need any mocking – though that gets harder the more external IO you do vs. actual internal logic. That’s what the video I linked earlier is all about. This also doesn’t mean you should never mock, but to me mocks are mostly needed for integration-level tests, where you indeed want to test the last bits of wiring between simpler-to-test pieces.

I see :ok_hand:

Once you do that, you by definition have at least two implementations for the function call being made

Yes, that was my goal - to introduce a boundary and test the logic in isolation from things like processes. Mox and explicit behaviours can achieve that, but in cases like this they feel like unnecessary boilerplate and add maintenance overhead without providing much value, I guess :thinking:

Thanks!