Modules don't compose well

Hi all,

I've been entertaining the thought that relying solely on modules in Elixir hinders writing polymorphic code. Let me explain my thought process with an example.

Let’s imagine a shop project, using Phoenix and Ecto, and let’s roughly implement a route that creates a purchasable product. This has to follow some rules defined by the Sales and Strategy teams. For example, the name must be unique, there must be a sell-by date no later than 3 months after creation, it has to be approved by the CEO before customers can purchase it, etc.

In the router:

post "/purchasables", PurchasablesController, :create

In the controller:

def create(conn, params) do
  purchasable_creation_request = %PurchasableCreationRequest{
    name: params["name"],
    approved: false,
    created_at: Timex.now()
  }
  case BusinessRules.Purchasables.create(purchasable_creation_request, PurchasablesGateway) do
    {:ok, purchasable} ->
      conn
      |> put_flash(:info, "Success")
      |> assign(:purchasable, purchasable)
      |> render(:show)
    {:error, reason} ->
      conn
      |> put_flash(:error, "Creation failed because #{reason}")
      |> render(:create)
  end
end

Here we have the controller convert the http request into something the business rule can work with. The dependency and flow of control only go in one direction. While we have successfully split all the code into two modules, they are still tightly coupled.

If I look at the source code dependencies and the flow of control, they both go in the same direction:

  • The PurchasablesController uses BusinessRules.Purchasables.create/2
  • The PurchasablesController depends on BusinessRules.Purchasables.create/2

Concrete things use and depend on concrete things.

At the same time, it looks like BusinessRules.Purchasables.create/2 uses a gateway for purchasables but does not depend on a concrete one.

In a polymorphic setup, we would often observe the opposite: the flow of control opposes the source code dependencies. Just because ModuleA uses Ecto doesn’t mean it has to depend on Ecto.

Interestingly, this is very palpable in tests. However, this forum holds a variety of opposing opinions for or against testing, TDD, mocking, … so I hope this topic is not going to deviate towards any of them.

Back to the example, if we wanted more polymorphism, we could loosen the dependency between the controller and the business rule:

In the router:

post "/purchasables", PurchasablesController, :create,
  private: %{
    business_rule: BusinessRules.Purchasables
  }

In the controller:

def create(conn, params) do
  purchasable_creation_request = %PurchasableCreationRequest{
    name: params["name"],
    approved: false,
    created_at: Timex.now()
  }
  case conn.private.business_rule.create(purchasable_creation_request, PurchasablesGateway) do
    {:ok, purchasable} ->
      conn
      |> put_flash(:info, "Success")
      |> assign(:purchasable, purchasable)
      |> render(:show)
    {:error, reason} ->
      conn
      |> put_flash(:error, "Creation failed because #{reason}")
      |> render(:create)
  end
end

So what happened here? While the controller needs a business rule, it doesn’t need to know which one it uses. I found this morning that the plug router now allows passing stuff via conn.private to a plug (controllers are plugs). I thought it would be a better place to set the concrete business rule used by the controller. That makes my controller code easier to test, as I can test the behaviours of the controller without involving the rest of the app, like the business rule and the DB gateway.
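
To make that concrete, here is roughly what such an isolated test could look like (an untested sketch: StubRule is a module defined just for the test, and it assumes the route under test doesn’t itself set :business_rule in private, since route private would overwrite what we primed):

defmodule PurchasablesControllerTest do
  use MyAppWeb.ConnCase, async: true

  # Stand-in for the real business rule: no DB, no gateway involved.
  defmodule StubRule do
    def create(_request, _gateway), do: {:ok, %{name: "Widget"}}
  end

  test "renders show on success", %{conn: conn} do
    # Prime the conn before dispatch; a fresh test conn is not recycled,
    # so the private entry survives into the pipeline.
    conn =
      conn
      |> Plug.Conn.put_private(:business_rule, StubRule)
      |> post("/purchasables", %{"name" => "Widget"})

    assert html_response(conn, 200)
  end
end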

Though it doesn’t feel quite right yet. Using the router for that feels like misplaced responsibility. I’d like to find a way to compose all the things together in a different way:

  1. make/create/start a purchasable gateway
  2. make/create/start a business rule with that gateway
  3. make/create/start a controller with the business rule
  4. associate the controller with a http route

Some might say it’s an OOP mindset to try to compose things together, but it’s not exclusive to OOP; for example, in F#: https://blog.ploeh.dk/2015/12/21/integration-testing-composed-functions/

Today, the community seems to prefer the use of Application config with module attributes everywhere there is a need for indirection.

For example, in the controller:

defmodule PurchasablesController do
  use Phoenix.Web, :controller

  @business_rule Application.get_env(:my_shop_app, :purchasable_creation_rule)

  def create(conn, params) do
    purchasable_creation_request = %PurchasableCreationRequest{
      name: params["name"],
      approved: false,
      created_at: Timex.now()
    }
    case @business_rule.create(purchasable_creation_request, PurchasablesGateway) do
      {:ok, purchasable} ->
        conn
        |> put_flash(:info, "Success")
        |> assign(:purchasable, purchasable)
        |> render(:show)
      {:error, reason} ->
        conn
        |> put_flash(:error, "Creation failed because #{reason}")
        |> render(:create)
    end
  end
end

I think it’s dangerous because the tests often can’t run safely and concurrently. I tend to define all tests as async: true by default; it’s muscle memory at this point. Am I willing to trade test execution speed / time to feedback for this? No. I found the Mox library to be a good, safe, and fast fix though.
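
For completeness, here is roughly what that looks like (a sketch; the behaviour and mock names are made up for this post):

# The contract the controller relies on
defmodule BusinessRule do
  @callback create(struct(), module()) :: {:ok, map()} | {:error, String.t()}
end

# In test_helper.exs
Mox.defmock(BusinessRuleMock, for: BusinessRule)

# In config/test.exs, so the module attribute picks up the mock
config :my_shop_app, :purchasable_creation_rule, BusinessRuleMock

# In an async test
Mox.expect(BusinessRuleMock, :create, fn _request, _gateway -> {:ok, %{}} end)

Because expectations belong to the test process, this stays safe under async: true. Note that with the module attribute approach the config has to be set before the controller is compiled, so the per-env config files are the natural place for it.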

I wonder if we could make these dependencies more explicit though. Whether with the router or with application config, the dependencies are rather implicit, with Application config being the worst.

Let’s look back at our last version of the example (no matter which approach you pick, router or app config). I noticed that the business rule is now polymorphic: I can easily pick a different one. Yet the controller is still making a decision about which concrete gateway the business rule is using. I don’t think that’s the job of the controller, and so the code should look like this:

In the controller:

defmodule PurchasablesController do
  use Phoenix.Web, :controller

  @business_rule Application.get_env(:my_shop_app, :purchasable_creation_rule)

  def create(conn, params) do
    purchasable_creation_request = %PurchasableCreationRequest{
      name: params["name"],
      approved: false,
      created_at: Timex.now()
    }
    # Removed the gateway in the next line
    case @business_rule.create(purchasable_creation_request) do
      {:ok, purchasable} ->
        conn
        |> put_flash(:info, "Success")
        |> assign(:purchasable, purchasable)
        |> render(:show)
      {:error, reason} ->
        conn
        |> put_flash(:error, "Creation failed because #{reason}")
        |> render(:create)
    end
  end
end

We’ve just decoupled things a bit more. Our business rule still needs a gateway though. Sure, we could once again rely on Application config to make the business rule fetch its gateway at compile time into a new module attribute. But things are getting more and more implicit, and it’s spreading quickly. Perhaps we could combine this solution with the router+options solution:

post "/purchasables", PurchasablesController, :create,
  private: %{
    business_rule: BusinessRules.Purchasables,
    gateway: PurchasablesGateway
  }

and revert the change made to the controller to

defmodule PurchasablesController do
  use Phoenix.Web, :controller

  @business_rule Application.get_env(:my_shop_app, :purchasable_creation_rule)

  def create(conn, params) do
    purchasable_creation_request = %PurchasableCreationRequest{
      name: params["name"],
      approved: false,
      created_at: Timex.now()
    }
    case @business_rule.create(purchasable_creation_request, conn.private.gateway) do
      {:ok, purchasable} ->
        conn
        |> put_flash(:info, "Success")
        |> assign(:purchasable, purchasable)
        |> render(:show)
      {:error, reason} ->
        conn
        |> put_flash(:error, "Creation failed because #{reason}")
        |> render(:create)
    end
  end
end

Well, now we have combined two different solutions, so it’s more complex. Worse, the controller knows way too much about lots of little details to make things work; it is nosy.

What if we look at the other solution, using the router options only? One approach is to define a module which wraps the actual business rule and decides which gateway should be used. That’s like decorators in the OOP world.

# Can this scale into the composition root described by ploeh in the blog shared above?
defmodule PurchasableCreationRuleWithBatteriesIncluded do
  def create(purchasable_creation_request) do
    BusinessRules.Purchasables.create(purchasable_creation_request, PurchasablesGateway)
  end
end

In the router:

post "/purchasables", PurchasablesController, :create,
  private: %{
    business_rule: PurchasableCreationRuleWithBatteriesIncluded
  }

The controller:

defmodule PurchasablesController do
  use Phoenix.Web, :controller

  def create(conn, params) do
    purchasable_creation_request = %PurchasableCreationRequest{
      name: params["name"],
      approved: false,
      created_at: Timex.now()
    }
    # the concrete business rule is set in the router and has the desired gateway baked in
    case conn.private.business_rule.create(purchasable_creation_request) do
      {:ok, purchasable} ->
        conn
        |> put_flash(:info, "Success")
        |> assign(:purchasable, purchasable)
        |> render(:show)
      {:error, reason} ->
        conn
        |> put_flash(:error, "Creation failed because #{reason}")
        |> render(:create)
    end
  end
end

That is a solution I found while writing this up. Disclaimer: I haven’t actually tried it yet, but I still don’t feel very excited about it. It all feels like doing functional programming in older versions of Java: instead of composing functions, we are finding workarounds to compose modules, and that’s not … elegant.

Thoughts? Opinions?

2 Likes

I’m not sure how a function would compose differently than a module. To me your post kinda sums up to the following:

My business logic needs X, but I don’t want the controller to know which X to use, but I also don’t really know who should be in charge of knowing the X to use.

Replace X with gateway/module/function/business_rule/whatever; the problem stays the same: someone needs to decide on a concrete implementation. This is usually where dependency injection comes into play, as you seem to have already gathered.

The application environment / module attributes are simple ways to get started with dependency injection, but as you also already noted, this makes testing harder, because the dependency is set globally or even at compile time. The only way around that is explicitly passing data in. If you test a function, you’d pass in e.g. a keyword list of dependencies; see the sketch below. In case of http requests it might be headers, or you recycle the conn in a test and prime it with some assigns. Plugs can then use those settings and convert them to actual modules/functions or whatever you actually need in your controllers. If you don’t explicitly pass data in, you’ll need to rely on side effects to query for those dependencies, which likely means global state again.
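
For the function case, that could be as simple as this (all names are only for illustration; it assumes the gateway exposes some store/1):

defmodule BusinessRules.Purchasables do
  # Dependencies are explicit arguments with production defaults.
  def create(request, deps \\ []) do
    gateway = Keyword.get(deps, :gateway, PurchasablesGateway)
    gateway.store(request)
  end
end

# In a test:
# BusinessRules.Purchasables.create(request, gateway: FakeGateway)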

If you don’t need that much flexibility up to the http request level, I’d also suggest simply using plugs. They can select your dependencies (module plugs at compile- and runtime) and put them in the conn for the controllers to use. You can decide where to call those plugs (for all requests, just a single scope, or even just a single controller) and the logic does not leak into e.g. the router itself.
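
Such a module plug could be as small as this (illustrative names):

defmodule InjectDependencies do
  @behaviour Plug
  import Plug.Conn

  # For module plugs in routers/endpoints init/1 runs at compile time,
  # but nothing stops you from resolving things at runtime in call/2.
  @impl true
  def init(opts), do: Keyword.get(opts, :deps, %{})

  @impl true
  def call(conn, deps), do: put_private(conn, :deps, deps)
end

# In a controller, a pipeline, or the endpoint:
# plug InjectDependencies, deps: %{business_rule: BusinessRules.Purchasables}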

From a quick glance at the blogpost you linked I also feel like plugs are the answer to your question about composition roots.

It seems to me that most people are eager to compose as early as possible, but the correct answer is:

As close as possible to the application’s entry point.

In case of phoenix this is basically MyApp.Endpoint and you can use plugs there to resolve dependencies based on the request coming in.

2 Likes

There’s a critically important piece missing from this discussion: why.

Your post does a good job of outlining the costs of different approaches to introducing decoupling and runtime flexibility. But without a reason WHY, it’s impossible to know which one is “better”.

2 Likes

@LostKobrakai Thanks for your answer.

Functions compose naturally: f ∘ g works if the output of g is an acceptable input for f. With modules, you can’t compose M1 with M2 (at least in Elixir); it doesn’t return a new module or anything else that would make sense.

I believe that the application’s entrypoint should make all the plumbing/wiring decisions (what X to use in a Y and what Y to use in a Z). In some languages, it’s the “main” file. In Elixir, that would be application.ex.

Indeed.

Indeed. I actually use this solution in production now. It works. Unfortunately, it also means that the composition has request scope: everything gets recomposed on every request, by every request (well, by the plug chain for that request). So, yes, it works, but I don’t think it’s the responsibility of plugs to do that, because we are now mixing plumbing with processing. In my experience so far, it complicates the router, the controllers, …

We disagree; I think the application’s entrypoint is application.ex (“Phoenix is not your application.”). Yet, I don’t see how to compose in application.ex:

  • a usecase with a gateway,
  • a controller with that usecase,
  • a router with that controller,
  • an endpoint with that router.

[EDIT] There is another thought frequently coming back to me about this: should I be using processes somehow, somewhere, for that? And another one is that perhaps the way I want to build things is fundamentally not aligned with how Phoenix/Plug is designed. On that note, I want to have a look at Ace/Raxx by @Crowdhailer. Perhaps he found a solution for me there.

@al2o3cr Thank you for your answer.

Indeed, I made an assumption (I’m an engineer, I do that for a living) that the reader would have a similar understanding and appreciation of some architectural principles to mine.

I believe in dependency inversion. I think the low-level details should depend on the high-level policies. That’s why the controller depends on the usecase in my example. Yet, by hiding information from the controller (which usecase? what does the usecase do? how does it do its job and with what dependencies?) we also gain a degree of flexibility which manifests immediately and positively in my tests, and in the longer term in changes to the codebase that stay contained to a few modules (ideally one).

If you want application.ex to be your entry point, then you’re limiting yourself to “per started application” composition, which goes against your goal of running async tests. An application only receives “inputs” once in its lifetime, so again every additional change would need to rely on side effects.

If you want async tests you need to be able to change plumbing just for the code path of that single test execution, which for phoenix controller tests means “per request” composition. This requires stuff to be handled somewhere in the plug pipeline. You can see this implemented in action in this blogpost: How to Add Concurrent, Transactional End-To-End Tests in a Phoenix-Powered Ember App - DockYard

Also I’m not saying you need to handle everything in your plug pipeline. You can surely mix and match depending on how much flexibility you really need. Your response sounds like you want plumbing to be static to an application, yet flexible enough to be tested in isolation. Maybe you should just acknowledge the different levels of granularity and build a multi-layered system, where per-request plumbing is only done for tests, while in prod the plug simply takes a static set of plumbing from e.g. MyApp.Application.plumbing and puts it in the current conn. Controllers simply use what’s available to them through the data on the conn.
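
Sketched out (MyApp.Application.plumbing/0 being a hypothetical function returning the static prod set):

defmodule MyApp.Plugs.Plumbing do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    case conn.private do
      # A test already primed the conn with its own plumbing.
      %{plumbing: _} -> conn
      # Prod: fall back to the static set.
      _ -> put_private(conn, :plumbing, MyApp.Application.plumbing())
    end
  end
end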

Processes won’t make your problems simpler in any way. If you use processes to store details on plumbing, you still haven’t solved the problem of knowing which process (which set of plumbing) to use for a certain request. If you cannot do that, you cannot do async tests with controllers. You might want to look at how e.g. the ecto sandbox works, which essentially deals with the same problem: how does it know which transaction to run a certain db interaction in, so that it belongs to the test that started the current code path?

Basically the answer is global state or explicit “ownership”.
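
A toy version of such ownership (all names hypothetical): the test registers its plumbing under a token, the request carries the token (e.g. in a header), and a plug resolves it back:

defmodule PlumbingOwnership do
  # A unique-keys Registry maps a token to the plumbing a test claimed.
  def start_link(_opts \\ []) do
    Registry.start_link(keys: :unique, name: __MODULE__)
  end

  # Called by the test process to claim a set of dependencies.
  def own(token, plumbing) do
    {:ok, _} = Registry.register(__MODULE__, token, plumbing)
    :ok
  end

  # Called by a plug while handling the request.
  def fetch(token) do
    case Registry.lookup(__MODULE__, token) do
      [{_owner, plumbing}] -> {:ok, plumbing}
      [] -> :error
    end
  end
end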

3 Likes

And another thought. In your entry post you used a module to compose things together. Modules are inherently compile-time constructs (unless you really want to compile things at runtime). You’re also asking about dynamic composition (async tests), which to me sounds like you might want to try to encode your compositional facts into data, not code, because data can be changed at runtime while modules can’t.

Something like:

rule_plumbing = %{
  purchasable_creation_rule_with_batteries_included: %{
    business_rule: BusinessRules.Purchasables,
    gateway: PurchasablesGateway
  }
}

# In e.g. controller
ruleset = conn.private.rule_plumbing.purchasable_creation_rule_with_batteries_included
case ruleset.business_rule.create(purchasable_creation_request, ruleset.gateway) do
   …

Have you checked how Ace works?

I think my initial post documented this approach; search for “Yet the controller is still making a decision about which concrete gateway the business rule is using.”

I’m not sure where this would be the case. rule_plumbing can be defined wherever you see fit. The controller has no idea which exact gateway it’s using; it just uses the one that was put into the rule_plumbing data. The rule_plumbing being put into conn.private is just an example and a simple way to supply a controller with information, because all the data it receives (without global state) is the conn. Even the params in a usual def action(conn, params) come from a call like apply(controller, action, [conn, conn.params]). So you’ll hardly get around putting some information into the conn if you want async controller tests with dependency injection.

It’s been some time, but I’m not sure how you expect it to work differently in regards to your problem. When an http request is handled, the process handling it needs a way to resolve the abstract dependencies in the code into actual dependencies. It can either resolve them with knowledge already available to the process – basically all the information of the http request, or maybe the process dict, when we’re talking only about tests –, resolve them using information available to another process known ahead of time, or use non-process-based global or compiled state – e.g. hardcoded functions or modules somewhere in your codebase. Whether the process handling an http request is spawned by phoenix or ace shouldn’t really make a difference in how those processes resolve their dependencies.

1 Like

@LostKobrakai you’re right in your first paragraph, I spoke too soon.

I’m curious about Ace because it seems that controllers are processes which you can initialize with any value when they start. That means I could effectively start them in application.ex and configure them there. Regarding tests, since they are just implementations of a behaviour, like gen_servers, they can be tested safely and quickly.

There’s Phoenix.Endpoint.init/1, which is called when your endpoint starts up. Not per controller, but still.

Controllers in phoenix also implement an interface (even if not explicitly via an Elixir behaviour). They’re plugs: MyController.call(conn, action), and you don’t need to start any infrastructure around it.

Yes, but testing them without the endpoint and router is a real pain. First it’s going to complain about the session, then the flash, then something else, bwah.

I agree with this sooo much. I think the problem lies exactly in the fact that we can’t inject state into the Plug connections from the regular supervision structure - for any other process we have start_link and the init callback or other mechanisms, where we can set up the state for the process - including dependencies - but we entirely lack that possibility with Plug. For some reason Plug tries to enforce completely stateless processes.

3 Likes

That’s what bypass_through(conn, MyAppWeb.Router, [:browser]) is meant to be used for.

And it seems Ace/Raxx got that one right @michalmuskala

1 Like

Is it? I never understood the docs for that one.

I somehow forgot Phoenix.Endpoint.init/1 was only for configuration, but couldn’t a plug at the start of the endpoint do something similar?

defmodule MyApp.Application do
  def start(_, _) do
    children = [
      {MyApp.AceController, config()}
    ]
    …
  end

  def config() do
    …
  end
end

vs.

defmodule MyApp.Application do
  def start(_, _) do
    children = [
      MyApp.Endpoint
    ]
    …
  end

  def config() do
    …
  end
end

defmodule MyApp.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  plug :prime_with_config

  def prime_with_config(conn, _opts),
    do: Plug.Conn.put_private(conn, :my_app_config, MyApp.Application.config())
end

In both options the context of execution does get the config. I’m sure the first option would be cleaner, though.

It basically takes a conn, applies all the plugs of the endpoint and any router pipelines, and returns the updated conn. That should solve the issue of sessions not being available and such.
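
For example (assuming the usual Phoenix.ConnTest imports):

conn =
  build_conn()
  |> bypass_through(MyAppWeb.Router, [:browser])
  |> get("/")

# The conn went through the endpoint and the :browser pipeline (session,
# flash, …) but no route was dispatched, so the controller can be called
# directly, as mentioned above:
conn = MyAppWeb.PurchasablesController.call(conn, :create)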

I am not going to discuss the overall architecture ideas here but I would like to talk about the “Modules don’t compose well” bit, in particular this one:

Modules in Elixir provide the same expressive power as functions. From the perspective of functional programming, they are the same, modeled by the same principles, with the same power for composition. Maybe one is more verbose than the other, and that may be what “compose well” means in this context, but they do not differ in their ability to compose.

If you have two functions, how can you compose them? We can write the compose function like this:

defmodule Foo do
  def add_2(x), do: x + 2
  def mult_2(x), do: x * 2
  def compose(fun1, fun2), do: fn x -> fun2.(fun1.(x)) end
end

And now:

iex> Foo.compose(&Foo.add_2/1, &Foo.mult_2/1).(13)
30

How can we compose modules? In a similar way. Let’s assume each module has a contract: it has to implement call:

defmodule Add2 do
  def call(x), do: x + 2
end

defmodule Mult2 do
  def call(x), do: x * 2
end

defmodule Compose do
  def compose(mod1, mod2) do
    name = Module.concat(mod1, mod2)

    defmodule name do
      @mod1 mod1
      @mod2 mod2

      def call(x) do
        @mod2.call(@mod1.call(x))
      end
    end

    name
  end
end

And now:

iex> Compose.compose(Add2, Mult2).call(13)
30

The difference between the approaches is in their cost: defining modules is much more expensive. But this is a runtime property. Plus, it can also be made cheaper by having each “module call” be represented by a tuple {Mod, arg}; then you can compose without defining modules dynamically. Namely, the benefit of functions is that you can close over the current environment (closure), and by using a tuple you can emulate the same with modules.
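
For example, here is a sketch of the {Mod, arg} representation, where composing is just building up data and the arg plays the role of the closed-over environment:

defmodule TupleCompose do
  # Each step is {module, arg}; the module must implement call/2.
  def call(steps, x) do
    Enum.reduce(steps, x, fn {mod, arg}, acc -> mod.call(acc, arg) end)
  end
end

defmodule AddN do
  def call(x, n), do: x + n
end

defmodule MultN do
  def call(x, n), do: x * n
end

And now:

iex> TupleCompose.call([{AddN, 2}, {MultN, 2}], 13)
30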

In fact, one can say the advantage of a module is that you can define multiple functions and compose over multiple functions at once, but this is also possible with anonymous functions. You just need to add an argument to the function signifying the operation you want to perform. From a functional programming perspective, there isn’t much difference between those two:

defmodule Calculator do
  def add(a, b), do: a + b
  def mult(a, b), do: a * b
end

calculator = fn
  :add, a, b -> a + b
  :mult, a, b -> a * b
end

Protocols also provide the same sort of composition as anonymous functions, with the benefit that they are open and can be defined/implemented at any time. In fact, a protocol may be a nice solution for your problem.
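
Applied to your example, it could look something like this (a rough sketch; the protocol dispatches on gateway values instead of gateway modules):

defprotocol Gateway do
  @doc "Stores a purchasable creation request."
  def store(gateway, request)
end

defmodule EctoGateway, do: defstruct([:repo])

defimpl Gateway, for: EctoGateway do
  # A real implementation would insert via the repo.
  def store(%EctoGateway{repo: _repo}, request), do: {:ok, request}
end

defmodule InMemoryGateway, do: defstruct([:agent])

defimpl Gateway, for: InMemoryGateway do
  def store(%InMemoryGateway{agent: agent}, request) do
    Agent.update(agent, &[request | &1])
    {:ok, request}
  end
end

# The business rule receives a gateway value and simply calls:
# Gateway.store(gateway, purchasable_creation_request)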

Anyway, the reason why I am saying this is not to say “you are wrong”, but maybe it can help you put a finger on what you don’t like about the current code.

I am not sure I follow the point here. Plug pipelines can be defined completely dynamically. For example:

  [
    {Plug1, opts},
    {Plug2, opts},
    {Plug3, opts}
  ]

And then to execute it:

plugs
|> Enum.reduce_while(conn, fn {plug, opts}, conn ->
  case plug.call(conn, plug.init(opts)) do
    %{halted: true} = conn -> {:halt, conn}
    %{} = conn -> {:cont, conn}
  end
end)

This can also be meta-programmed and compiled into a module when the app starts for performance. But in a nutshell, you can always move the Plug to the runtime. However, I would avoid doing this, because understanding what my pipeline is actually doing becomes much harder. I would indeed prefer to keep them static and read the state from elsewhere.

And this is somewhat my concern with the proposed code in the thread: the code may be less coupled, but we lose a lot in clarity. This could be beneficial if the controller ultimately becomes a dumb layer (imagine that multiple gateways and business rules are served by the same controller), but I personally wouldn’t do it for individual cases.

This is not something Ace solves either, because if you want to make decisions based on the user, which you most likely would, by the time you know exactly which user you have, you are already too deep into the request life-cycle (and therefore inside both Ace and Plug requests). Furthermore, both Ace and Plug are module-based contracts and both allow you to pass arguments when building the supervision tree.

16 Likes

I don’t mean plugs at all here. I’m talking about processes - the process that is started for the request is created somewhere deep in the internals of the library and you have no access to that. The regular way of setting up state for services, through start_link and init, is not available.

What I’d like to see would be to have something like:

# in application.ex
children = [..., {MyApp.Endpoint, some_initial_options}, ...]

# in MyApp.Endpoint
def init_conn(conn, those_options) do
  # set up initial state for the Conn struct that is later used by this request
  # basically - the same as init for a gen_server
end

This would allow handling plug request processes as any other processes in the system - right now they are somewhat special and separated from the supervision tree in that there’s no way to pass state into them. The whole state has to be either provided statically (in some module somewhere) or through mutable memory (an ETS table or application env).

6 Likes