Dependency injection/IoC

A popular Java and C# design pattern is runtime dependency injection.
People tend to associate it with Object Oriented programming, but there is nothing inherently OOP about it.

Runtime dependency injection/resolution can work for any system that provides a runtime mechanism to override or modify the version of a class or module which has been requested.

It strikes me that Mox provides a limited form of dependency injection in that sense.

Now I’m not advocating for it, but just for the sake of idle discussion (and I do know about Application.env and regular configuration stuff) has anyone ever attempted bringing Java style DI to Elixir?

I just pass modules as function arguments where needed. In other cases I just do not bother at all.

5 Likes

In which cases would you pass a module as an argument? Do you mean a module when it’s a GenServer?

I think an example of this would be Ecto. In cases like Ecto.Multi, the repo module is passed around so you can make sure you’re using the same repo during a transaction: Ecto.Multi — Ecto v3.6.1
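
Roughly how that looks in practice; MyApp.User, MyApp.AuditLog, and MyApp.Repo below are placeholder names:

alias Ecto.Multi

multi =
  Multi.new()
  |> Multi.insert(:user, %MyApp.User{name: "Ada"})
  |> Multi.run(:audit, fn repo, %{user: user} ->
    # `repo` is passed in by Ecto.Multi: it is whichever repo
    # transaction/1 is eventually called on, so every step uses the same repo
    repo.insert(%MyApp.AuditLog{user_id: user.id})
  end)

MyApp.Repo.transaction(multi)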

A case where I did that was when implementing something similar to the strategy pattern: I had many modules that were candidates to perform a given action, and when one of them was picked, I passed it around to the functions that would use it.
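
In code, the pick-then-pass approach looked roughly like this (module and function names are made up for illustration):

defmodule Exporter do
  # each candidate strategy module implements an export/1 function
  @strategies %{csv: CsvExporter, json: JsonExporter}

  def run(data, format) do
    strategy = Map.fetch!(@strategies, format)
    write_report(strategy, data)
  end

  # the chosen module travels as a plain argument to whoever needs it
  defp write_report(strategy, data), do: strategy.export(data)
end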

1 Like

For sure I do this: if you want to achieve testable code, it’s quite handy to be able to “inject” overrides as options. E.g. something like the following:

def fetch_page(url, opts \\ []) do
  parser = Keyword.get(opts, :parser, SomeParserModule)
  client = Keyword.get(opts, :client, HTTPoison)

  with {:ok, raw_html} <- client.get(url),
       {:ok, parsed} <- parser.parse(raw_html) do
    {:ok, struct(SomeStruct, parsed)}
  end
end

This is over-simplified, perhaps, but if you don’t want to redundantly re-test the separate modules this function dispatches to, providing an injectable/overridable option lets you focus on one layer/module at a time.
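
For example, a test can swap in tiny stub modules via those options; Fetcher, FakeClient, and FakeParser below are hypothetical names, and SomeStruct is assumed to have a :title field:

defmodule FetcherTest do
  use ExUnit.Case, async: true

  defmodule FakeClient do
    def get(_url), do: {:ok, "<html>stub</html>"}
  end

  defmodule FakeParser do
    def parse(_raw_html), do: {:ok, %{title: "stub"}}
  end

  test "builds the struct without touching the network" do
    assert {:ok, %SomeStruct{title: "stub"}} =
             Fetcher.fetch_page("https://example.com", client: FakeClient, parser: FakeParser)
  end
end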

Some purists may argue that this isn’t the same as DI, but it accomplishes the same goals and has the same intent as far as I’m concerned. Mox is certainly helpful in this regard, but it’s not strictly required.

1 Like

Beware that this approach converts compile-time errors (like passing the wrong number of arguments to a function) to runtime errors, and substantially reduces the usefulness of Dialyzer.

Wrapping the call can help some:

def fetch_page(url, opts \\ []) do
  with {:ok, raw_html} <- client_get(url, opts),
  ...
end

@spec client_get(String.t(), Keyword.t()) :: {:ok, String.t()} | {:error, any()}
defp client_get(url, opts) do
  client = Keyword.get(opts, :client, HTTPoison)
  client.get(url)
end

This allows Dialyzer to understand the type of client_get - but there is no checking that the type written in the @spec matches the types actually seen at runtime.

Usually we have a Client behaviour here that declares a type, and we refine the spec to:

@type opts :: [{:client, Client.t()} | {atom(), any()}]
@spec client_get(String.t(), opts()) :: {:ok, String.t()} | {:error, any()}
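
For reference, a minimal sketch of such a Client behaviour (names assumed):

defmodule Client do
  # contract for anything passed as the :client option
  @type t :: module()

  @callback get(String.t()) :: {:ok, String.t()} | {:error, any()}
end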

2 Likes

There are libraries that have different “adapters” for use in production, development, and test environments. This is sort of similar to dependency injection.

Take the Swoosh hex library for sending emails. You set the adapter (a module that follows a behaviour) in configuration. In production, you choose an adapter corresponding to your email service — say, the SendGrid adapter. In test, you use the Test adapter which is wired up to allow you to perform assertions on whether an email was sent and if it contained the right data. In development, you use a special adapter that captures “sent emails” in memory in a process, so that you can go to a special local route and view what would have been sent were it production.
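
Roughly what that per-environment wiring looks like in config (MyApp.Mailer and :my_app are placeholders, API keys and other adapter options omitted):

# config/prod.exs: deliver through the real email service
config :my_app, MyApp.Mailer, adapter: Swoosh.Adapters.Sendgrid

# config/dev.exs: keep "sent" emails in memory, viewable at a local route
config :my_app, MyApp.Mailer, adapter: Swoosh.Adapters.Local

# config/test.exs: lets tests assert on what was delivered
config :my_app, MyApp.Mailer, adapter: Swoosh.Adapters.Test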

2 Likes

I think the problem here is how to ensure, via typespecs, that the passed-in module implements the expected @behaviour.

Freud intensifies :-))))

2 Likes

I was a fan of dependency injection containers when I was working with object-oriented web frameworks. The alternatives were a service locator or a global registry/singleton, both of which were considered anti-patterns. When you work with a DI container, your classes typically keep their dependencies as state, set via the constructor.
In Elixir we just don’t need those anymore; it’s just a different way of thinking and designing. We use config and function arguments.

Even though the OP didn’t mention containers, I believe dependency injection is more talked about in OOP because it is harder to achieve.

And as an example of how DI is achieved in Elixir, take the injection of a time zone database into DateTime functions:

  • via configuration:

config :elixir, :time_zone_database, Tz.TimeZoneDatabase

or:

Calendar.put_time_zone_database(Tz.TimeZoneDatabase)

  • by passing the module name to the different functions:

DateTime.now("America/Sao_Paulo", Tz.TimeZoneDatabase)

1 Like

To be fair, this is using the app env for storage and is therefore a singleton acting as a global registry.

GenServers keep the callback module in the process state – and the callback module itself might hold dependencies passed in at start – which is not too different from constructor-based DI in OOP.

Yes, in Elixir we might not need dedicated libraries, given we have different tools for maintaining state, but the general approaches are kinda the same.
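
A minimal sketch of that style, with made-up names, where the dependency is handed to start_link and kept in the process state:

defmodule Notifier do
  use GenServer

  def start_link(opts) do
    # dependency handed in at start, much like a constructor argument
    mailer = Keyword.get(opts, :mailer, MyApp.Mailer)
    GenServer.start_link(__MODULE__, mailer, name: __MODULE__)
  end

  @impl true
  def init(mailer), do: {:ok, %{mailer: mailer}}

  @impl true
  def handle_cast({:notify, email}, %{mailer: mailer} = state) do
    mailer.deliver(email)
    {:noreply, state}
  end
end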

1 Like

I was a fan of dependency injection containers when I was working with object-oriented web frameworks.

I was too, until I spent years of my life diligently writing unit tests, only to realize years later that those tests proved very little about the codebase.

Dependency injection is evangelized because it makes it easier to unit test with mocks.

Integration tests are vital. Unit tests are mostly useful for pure functions. Mocks lead to false confidence about the test suite.

When I came to Elixirland and learned to write mostly integration tests, I was blown away by the amount of real bugs the test suite caught.

7 Likes

Haha, too much time explaining the downsides of binary_to_term that day :crazy_face:

Great answers everyone, thanks for the thoughtful replies.

2 Likes

You can do it anywhere. It’s a pretty helpful pattern for, e.g., testing API clients – use the ‘web client’ for production/staging (and maybe dev too), and a ‘test client’ for (unit) tests.

A functionally similar kind of thing is to retrieve a module name from config during compilation (by storing it in a module attribute). That’s a little less flexible than just passing a module as an argument.
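
A minimal sketch of the compile-time variant, assuming a hypothetical :my_app / :http_client config key:

defmodule ApiClient do
  # resolved once at compile time; changing the config requires a recompile
  @client Application.compile_env(:my_app, :http_client, HTTPoison)

  def get(url), do: @client.get(url)
end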

1 Like