Interpreter / composite pattern in Elixir


I am trying to find a consistent way of implementing the Functional Architecture as described in a previous thread Functional Architecture in Elixir.

To summarize the post, that architecture can be understood with the following picture:

Some of you may also recognize this as the Functional core, Imperative shell, from destroyallsoftware:

The way forward

Now, that post identifies some shortcomings of this type of architecture. This post aims to explore a possible solution.

Specifically, the Interpreter pattern as described in this Ruby talk:

This pattern is built using another common pattern, the Composite pattern:

Encapsulation issues

Basically, what this model encourages is a purely functional core. This core never deals with external services; instead, each call to the core returns a tree of actions for the imperative shell to run. The imperative shell then runs those commands and invokes the next function on the functional core.
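That loop can be sketched in a few lines of Elixir. This is a minimal, invented illustration (module names, action tuples, and the fake DB call are all assumptions, not code from the talk): the core returns a composite tree of action descriptions plus pure continuations, and the shell walks the tree and performs the actual effects.

```elixir
defmodule Core do
  # Pure: describes the IO it needs instead of performing it.
  def start do
    {:sequence,
     [
       {:fetch_db, "SELECT * FROM users", &Core.handle_users/1}
     ]}
  end

  # Pure continuation: the shell calls this with the IO result.
  def handle_users(users) do
    {:return, length(users)}
  end
end

defmodule Shell do
  # Imperative: interprets the action tree (a composite) and runs effects.
  def run({:sequence, actions}), do: Enum.reduce(actions, nil, fn a, _acc -> run(a) end)

  def run({:fetch_db, _query, next}) do
    rows = [%{name: "ada"}, %{name: "joe"}] # stand-in for a real DB call
    run(next.(rows))
  end

  def run({:return, value}), do: value
end

Shell.run(Core.start()) # => 2
```

Note that the core never calls `Shell` (the shell depends on the core, not vice versa), which is the coupling direction discussed later in this thread.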

To me, this already has a problem: breaking encapsulation.
Let’s say you have a set of actions in your functional core:

  1. fetch data from DB
  2. do calculations
  3. fetch data from URL
  4. perform evaluation
  5. return result

Normally, under a well-designed API, an external user would call P1 (step 1) and see only P5 (the final result). A service calling our core would invoke P1 and wait for P5.

However, with this pattern, the service calling our core would have to:

  • call P1
  • wait for P1
  • call P2
  • wait for P2
  • call P4 and take P4’s result as final

This means that P2-P4 (implementation details) are now exposed, because calling them is the only way for the service layer to continue the computation (since the core doesn’t actually call anything, the external shell must know what to do next).

This break in encapsulation means a bloated API and long-term maintenance problems.
Bloated APIs are a recipe for disaster.

Am I missing something?

I watched the full video and I am somewhat familiar with destroyallsoftware, having seen some of their talks as well. But this sacrifice one makes just for the sake of a functional core looks like a time bomb waiting to explode.

  • Am I missing something?
  • Have I misunderstood any concepts?
  • What are your takes on this?

I feel the part you’re missing is that your functional core alone is not your domain API. It can’t be if the domain you’re handling is in any shape or form stateful. What the architecture suggests is separating the pure domain logic (how are discounts applied to prices of an order) from stateful/external dependencies (discount code in the DB).

Your functional core cannot answer the question of “is this discount code valid on the current date”. What it can do is “is this discount code valid given this list of discount-code availabilities and a date of usage”.
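A minimal sketch of that reframing, assuming an invented availability shape (a map with `:code`, `:from`, and `:until` fields): the function stays pure because the availabilities and the date arrive as arguments, so neither the DB nor the clock is consulted.

```elixir
defmodule Discounts do
  # Pure: validity depends only on the inputs, never on IO or "today".
  def valid?(code, availabilities, date) do
    Enum.any?(availabilities, fn a ->
      a.code == code and
        Date.compare(date, a.from) != :lt and
        Date.compare(date, a.until) != :gt
    end)
  end
end

availabilities = [%{code: "SUMMER10", from: ~D[2024-06-01], until: ~D[2024-08-31]}]

Discounts.valid?("SUMMER10", availabilities, ~D[2024-07-15]) # => true
Discounts.valid?("SUMMER10", availabilities, ~D[2024-09-01]) # => false
```

The imperative shell is then responsible only for fetching the availabilities and supplying the current date before calling in.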


A different perspective! I hadn’t thought about it that way!
Would you (or anyone) know of any examples using this architecture? I would be quite interested!

As you’ve mentioned, the “functional core, imperative shell” approach is an equivalent term and may help you in your research. For example:

In addition, reading about Hexagonal Architecture / Ports and Adapters may be of interest to you, as well as Domain-Driven Design.


The functional core, mutable shell architecture is how I recommend using Commanded, an open source library for event sourcing.

Aggregates and process managers are the building blocks which are used to host the functional core. These are where you write pure functions:

  • Aggregate:
    • f(state, command) => list(events)
    • f(state, event) => state
  • Process Manager:
    • f(state, event) => list(commands)
    • f(state, event) => state
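The aggregate signatures above can be sketched with an invented bank-account example (this is illustrative Elixir, not Commanded’s actual API surface): `execute/2` is `f(state, command) => list(events)` and `apply/2` is `f(state, event) => state`, and both are pure.

```elixir
defmodule BankAccount do
  defstruct balance: 0

  # f(state, command) => list(events)
  def execute(%BankAccount{balance: b}, {:withdraw, amount}) when amount <= b do
    [{:money_withdrawn, amount}]
  end

  def execute(%BankAccount{}, {:withdraw, _amount}) do
    [{:withdrawal_rejected}]
  end

  # f(state, event) => state
  def apply(%BankAccount{balance: b} = state, {:money_withdrawn, amount}) do
    %BankAccount{state | balance: b - amount}
  end

  def apply(state, {:withdrawal_rejected}), do: state
end
```

Note that `execute/2` only decides (it returns events, never mutates), while `apply/2` only folds an already-decided event into state; all persistence of those events is left to the shell.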

Decisions are always recorded as domain events (as an immutable list of domain-specific facts).

Commanded provides the imperative shell to host the above pure functions in GenServer processes and takes care of all IO, such as fetching and persisting events.

Event handlers are used for side-effects where impure functions can be implemented which react to events and call out to the external world. This could be sending an email, interfacing with a payment gateway, or updating a read-only projection of an aggregate’s events. They can also feed back into the functional core by sending commands to aggregates.

One benefit of the above approach is that aggregates (containing your business logic) can be tested free from any IO concerns as:

  • Given: initial event(s)
  • When: command
  • Expect: resultant event(s)
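That Given/When/Expect style can be shown end to end with a tiny invented aggregate (a capped counter; the names and the cap are assumptions for illustration). The test needs no IO at all: fold the given events into state, execute the command, and pattern-match on the resulting events.

```elixir
defmodule Counter do
  # f(state, event) => state
  def apply(state, {:incremented, n}), do: state + n

  # f(state, command) => list(events), with a business rule: cap at 10
  def execute(state, {:increment, n}) when state + n <= 10, do: [{:incremented, n}]
  def execute(_state, {:increment, _n}), do: [{:limit_reached}]
end

# Given: initial event(s)
given = [{:incremented, 4}, {:incremented, 5}]
state = Enum.reduce(given, 0, fn event, s -> Counter.apply(s, event) end)

# When: command / Expect: resultant event(s)
[{:incremented, 1}] = Counter.execute(state, {:increment, 1})
[{:limit_reached}] = Counter.execute(state, {:increment, 2})
```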

Hi there. I’m the speaker from the talk you linked :wave:. There’s always more context that’s hard to get across in a 45 min talk :grimacing:

I’ve been meaning to rewrite that code in Elixir as an exercise. I’m cleaning up the Ruby implementation, and am pushing it to as I go (not finished yet, but it’s cleaner than the code in the talk).

So the intent that I’d have with that code is that the imperative shell and the functional core form a unit that is coupled together (where the shell depends on the core, but not vice versa). The shell encapsulates the API. So external callers would talk to the shell, rather than directly to the core in that case. The idea being that the core provides the business logic, but no IO/state/mutation etc at all. Then the shell does only those things, and has no business logic. The shell is essentially providing services that do IO based on a contract that the core specifies it requires.

The idea being, then, that the shell can be swapped out: e.g. to stub IO logic to make testing easier; or if you decide that instead of reading/writing to a local database table, you want to switch that out to a microservice; or instead of JSON, you want to use protocol buffers or whatever.
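In Elixir, one hedged way to sketch that swappability (all module names here are invented): the core’s required services are declared as a behaviour, and the shell is handed whichever implementation fits, so a stub can replace the real store in tests.

```elixir
defmodule OrderStore do
  # The contract the core requires from its shell.
  @callback fetch_order(id :: term) :: map()
end

defmodule StubStore do
  # A stand-in implementation for tests; production would pass a DB-backed one.
  @behaviour OrderStore
  def fetch_order(_id), do: %{total: 100}
end

defmodule OrderShell do
  # The store module is injected, so the IO layer is swappable.
  def order_total(store \\ StubStore, id) do
    order = store.fetch_order(id)
    order.total
  end
end

OrderShell.order_total(StubStore, 1) # => 100
```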

Other callers could talk to the core, but then they’d need to provide those services, and handle interpreting and executing the commands as well.

But I get your point: It does introduce an inconsistency in that the usual pattern is that the external caller calls the shell, which calls the core. In this case, the shell would have to call another shell function, which would then call the core.

I’d probably look at it not as one monolithic “Shell” and one monolithic “Core”, but more as contexts where there are multiple shells, each with its own core. Each major piece of functionality is exposed by a shell that encapsulates that thing. External callers can call shells, shells can call each other, but a functional core can only be called by its shell, and all it can do is computations, returning values to its caller. The goal of the pattern is the same: namely to have the complex business logic in the cores, where it can be reasoned about, maintained, and tested more easily. Then the shell that talks to the world only does that, and is naive about any business logic at all.


Event sourcing is another great way of implementing this pattern too. Instead of returning values and a callback function (i.e. a free monad), return an event; then an event handler runs to actually execute the side effect.

If you squint hard enough, they’re the same thing, just different (AKA they’re “isomorphic”). The emitted event =~ the returned value, and the event handler =~ the “next” function.
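The correspondence can be made concrete with a toy sketch (everything here is invented for illustration): in interpreter style the core hands back an effect description plus a “next” function; in event-sourced style it emits an event and a handler plays the role of “next”.

```elixir
defmodule Equivalence do
  # Interpreter style: effect description plus explicit continuation.
  def interpreter_step, do: {{:send_email, "hi@example.com"}, fn :ok -> :done end}

  # Event-sourced style: the core emits an event...
  def event, do: {:email_requested, "hi@example.com"}

  # ...and a handler reacts to it, playing the role of the "next" function.
  def handle({:email_requested, _to}), do: :done
end

{_effect, next} = Equivalence.interpreter_step()
next.(:ok) == Equivalence.handle(Equivalence.event()) # => true
```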

In real-world coding, I’ve used event sourcing many times to handle use cases that mix side effects with pure code, like the one we’re talking about here - but not the free monad/interpreter pattern I talk about in that talk. It would be interesting to try, but for a lot of teams it would be too “out there”.


There’s some of that pattern in the Enumerable protocol, IMO: