What's your process on splitting application logic?

Hello everyone!

There’s a recurring discussion I have with my peers, and I’d like to collect insights from the community on this topic: how are you splitting code between your contexts and schemas?

I’m a little more familiar with DDD applications from my previous experience as a software architect, while the majority of my peers seem to come from a “classic” Rails MVC background. Because of this, I’m facing some challenges trying to properly articulate architectural concerns about our application domain that are not usually obvious to them.

I acknowledge that DDD practices sometimes sound too conservative to people accustomed to a less structured way of building applications. Beyond that, it goes without saying that the whole decision-making process becomes a lot harder than it should be because there’s too much convincing going on most of the time (IMHO, this wouldn’t happen if there were more overlapping knowledge in this specific area).

I framed some questions that might help us get this sorted out:

  • Are you using schema functions to return only Ecto changesets?
  • Where are you currently placing your domain/model/entity logic?
  • Do your schemas deal with “domain/model” concerns or not?
  • Are your contexts the only public interface between outside code and your schema?

As a side note: I’m looking for some objective and concrete techniques or concepts you’ve been applying to identify and organize your application logic with Elixir. I say this because I think this information would resonate better with more pragmatic people.

I think you got the gist. Please, feel free to elaborate further if necessary.

My best regards to everyone.


I’m not really qualified to give advice here but I’ve been keeping an eye on this question for the past couple of hours and thought I’d share what I’ve grokked based on a lot of reading (including answers to such questions in this forum). I’m not sure where you are at with functional programming, but I do deal with similar things at work where we attempt to do DDD while still staying as true to The Rails Way as possible (and, of course, this doesn’t work out perfectly… you can only get so far banning belongs_tos!).

I would HIGHLY recommend reading and working through Functional Web Development in Elixir, OTP, and Phoenix (https://pragprog.com/titles/lhelph/functional-web-development-with-elixir-otp-and-phoenix/). The biggest takeaway from that book for me is how to separate out your “functional core”. This keeps all your domain logic completely independent. In the book it’s actually developed as its own application (i.e., a library), then added into Phoenix as a dependency. With this in mind I would say: no, domain/model logic does not belong in schemas. I believe contexts should be the public interface between your business logic and your schemas (state/datastore).

As for your context question, as someone who has a decent grasp on DDD and has read about and played with Phoenix contexts in a personal project, I would say yes: contexts should absolutely be the only public interface to your schemas. I have no production experience with this, though, so I probably shouldn’t weigh in here.

If you do read the book I recommended, be aware that it’s a little out of date. You’ll have to adapt one example when it comes to using simple_one_for_one supervisors, and then the whole JS part of the book still uses Brunch instead of Webpack.

I have a few more thoughts but again, I’m not super experienced, so I’ll leave it at this for now and hopefully others will weigh in and can correct or back up anything I’ve said!


By all means, feel free to get involved. It’s really good to see how people do things differently so we can draw our own conclusions. Given that we are all at different points in our experience, any advice is definitely welcome.

I’ve been doing Elixir and Phoenix professionally only for the past year. Even though I already have some strong opinions formed based on my own research and of course previous experience, I try to keep my mind as open as possible to new information at all times.

I’ll definitely check out your recommendations. Thanks for the reply!

Update: It seems there was a problem with my previous comment and it’s now deleted. I couldn’t undelete it, so I’m reposting here :sweat:


I have been trying DDD after reading

It’s probably possible to push the design toward full compliance, but that would be a lot of work.

DTO to Domain Object is different, as Ecto uses changesets.

Bounded Contexts should own their own datastore; although it’s possible to use multiple Repos, it’s not the usual way to build a Phoenix application.

I have tried a light DDD approach, adapted to Elixir’s specifics, where Data Taking (and all Phoenix contexts) is responsible for validating data, and then, in each Bounded Context, I define schemas.

It can be very different from the original data. It can use a specific changeset to update the underlying database, or be read-only.

One example…

defmodule MyApp.Scheduling.Medium do
  use Ecto.Schema

  @primary_key false
  schema "events" do
    belongs_to :language, Language

    # id -> permalink!
    field :id, :string,
      source: :permalink,
      primary_key: true

    timestamps(type: :utc_datetime)
  end
end
This is a medium in a bounded context, but it is an event in the data taking context. It uses a different id than the original data.

It is a lot of duplication, since I define specific schemas in each bounded context, but I know they are valid Domain Objects and can be reshaped and renamed…

In each BC I have an event listener (a GenServer) and use a domain event dispatcher (PubSub) to be reactive. I use LiveView in some parts and React in others. Bounded Contexts and the UI react to domain events. But this part is easy with Elixir :slight_smile:
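As a sketch of that wiring (all module names and the topic are hypothetical, and this assumes a Phoenix.PubSub instance started under the app supervisor as MyApp.PubSub):

```elixir
# Hypothetical event listener living inside one bounded context.
defmodule MyApp.Scheduling.EventListener do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(state) do
    # Subscribe to domain events published by other contexts.
    Phoenix.PubSub.subscribe(MyApp.PubSub, "domain_events")
    {:ok, state}
  end

  @impl true
  def handle_info({:event_created, _event}, state) do
    # React here: e.g. project the event into this context's own schema.
    {:noreply, state}
  end
end

# A context dispatches a domain event like this:
#   Phoenix.PubSub.broadcast(MyApp.PubSub, "domain_events", {:event_created, event})
```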

Schemas are supposed to hold changesets only, business logic needs to go into contexts, which are your public API that interfaces with web controllers, for example.

Considering you have a Blog Phoenix app, a simple way to “translate” a Post model from Rails would be:

  • defmodule Blog.Posts - this is where the business model lies, including Repo calls
  • defmodule Blog.Post - your schema, with changesets only
  • defmodule Blog.PostQuery - composable Ecto queries used by Blog.Posts

You could then have a function inside the top-level domain module Blog, like create_post(), that calls Blog.Posts.create(), and your web controllers would access this via Blog.create_post().
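A minimal sketch of that layering (the field names and Blog.Repo are assumptions, not from the post):

```elixir
defmodule Blog.Post do
  use Ecto.Schema
  import Ecto.Changeset

  schema "posts" do
    field :title, :string
    field :body, :string
    timestamps()
  end

  # The schema holds changesets only.
  def changeset(post, attrs) do
    post
    |> cast(attrs, [:title, :body])
    |> validate_required([:title])
  end
end

defmodule Blog.Posts do
  # Business logic and Repo calls live here.
  def create(attrs) do
    %Blog.Post{}
    |> Blog.Post.changeset(attrs)
    |> Blog.Repo.insert()
  end
end

defmodule Blog do
  # Top-level domain API; the only module web controllers call.
  defdelegate create_post(attrs), to: Blog.Posts, as: :create
end
```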

It adds a few layers of abstraction and helps you spell out what the app really is without losing much productivity. Rails, by contrast, focused on productivity by having everything in a single huge conceptual place (the model).

If you want a more practical example, I apply this structure on this project: https://github.com/pedromtavares/moba


This is a problem that I see quite frequently, and it leads to schemas getting loaded with virtual fields that are not exclusive to the domain representation and bleed out to templates and other places.

There’s a very common pattern of abstracting specific data stores into generic repositories (interfaces), and Ecto.Repo reminds me a lot of that kind of abstraction. I’ve seen people frown upon this approach, but I haven’t encountered many use cases where the domain requires consuming data from enough different stores to justify it.

@kokolegorille BTW, are you using something like CQRS?

I was naive enough to think that most people would agree with that claim, but it doesn’t seem to be the case in my circle. Since this is not a “hard requirement”, I haven’t seen a consensus in the community to pass on to people as guidance. I like and use this approach because it makes sense to me, but how do you enforce this with people who don’t see the separation of concerns the way you do?

@pedromtavares The “CRUD” part is not where I see most of the problems happening, actually. The hairy bits are the domain logic. A very simplistic example: if you have a computed property like full_name, in other languages this behavior would be encapsulated by a constructor, property, or method in the domain model, which could be carried over to DTOs (ViewModels, Presenters, etc.).
However, when you rely only on an Ecto.Schema to represent data, you don’t have a way to “encapsulate” this behaviour in the traditional sense. The usual go-to solution is a virtual field (but then you need to fill in the information every time you retrieve the data). If you are not using a virtual field (perhaps because the computed data is too complex), you’ll usually want a function in one of three places: schema, context, or view; and this is where I see the lines getting blurry for some folks.

PS: Besides that, some of the libs out there (adaptations of Ruby gems) enforce some “behaviors” in the schema as if it were a traditional model…
(As I’m thinking this through, I guess the main problem could be conceptual: a misaligned mindset of using schemas to represent state instead of just a bag of data.)

While there doesn’t seem to be an official way to support this, I’ve seen a Service Layer used in a Phoenix app to encapsulate all logic away from Phoenix-specific code like the views and controllers. GenServers and the like were used where necessary, but the majority was just a module containing the business logic for a specific feature or set of models. You can have constructors of sorts in the modules, too. The downside is that nothing really enforces the design pattern; it has to be known by all the devs and informally enforced through code reviews and such.

I separate the commands, in the taking context, from the queries, in the presentation layer :slight_smile:

I also like to think in terms of events, this is how I make the application reactive.

@mpope I have the impression that for most applications, splitting too much is more trouble than it’s worth, tbh. The way I see it, Phoenix contexts act like a mix of Application Services and Domain Services, which provides just the right amount of abstraction for my taste. On the other hand, it relies a lot more on the developer to be clever and think about design.

Since it does not live in the database, it should not be in the Schema. The Schema is simply a translator between what is in the database and Elixir, with no added logic on top.

I would solve this by adding full_name as a function inside the context (I call this the Service layer) and having views reference it like Blog.user_full_name(user), which, to me, is much less intrusive than Blog.User.full_name(user) because you’re not exposing your schemas to outside callers.
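A sketch of that context function (the first_name/last_name fields are assumptions):

```elixir
defmodule Blog do
  # Derived property computed in the context, so views never
  # reach into the schema module directly.
  def user_full_name(%{first_name: first, last_name: last}) do
    "#{first} #{last}"
  end
end

# In a template: <%= Blog.user_full_name(@user) %>
```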

This is ultimately a matter of taste, but I find it easier to have a hard requirement of not allowing business logic in Schemas, because what starts as a simple string concatenation in this case can become 3- or 4-line functions with important logic in other cases; fast-forward a few months and you have half of your logic living in Schemas and half living in Services, ending up in a huge conceptual mess.

I personally don’t think that changeset functions belong in the schema. About a year ago I started consulting for an agency and proposed that the team write changesets as private functions in context modules, and that’s been working really well. In most cases I’ve seen, a changeset function is used in exactly one place, which is not the schema, so making it public and moving it to another module makes little sense to me, and it complicates the reading experience. Even when it’s used in multiple places, there’s typically strong cohesion between all callers (they work on the same data), so they are part of the same context module, and hence a changeset as a private function in that same module works fine.
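A sketch of that arrangement, with hypothetical module and field names (and an assumed MyApp.Repo):

```elixir
defmodule MyApp.Accounts do
  import Ecto.Changeset

  alias MyApp.{Accounts.User, Repo}

  # Public API: the only entry point for creating users.
  def create_user(attrs) do
    %User{}
    |> user_changeset(attrs)
    |> Repo.insert()
  end

  # The changeset stays private to the context that uses it.
  defp user_changeset(user, attrs) do
    user
    |> cast(attrs, [:email, :name])
    |> validate_required([:email])
  end
end
```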

We usually don’t have functions in schemas, but we may add some helpers for computing derived properties. E.g. say that the OrderItem schema has fields quantity and price. If I need a function total_price, I’d define it in the schema.
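As a sketch, that OrderItem example might look like this (the module path and the Decimal arithmetic are assumptions):

```elixir
defmodule MyApp.Orders.OrderItem do
  use Ecto.Schema

  schema "order_items" do
    field :quantity, :integer
    field :price, :decimal
  end

  # Pure helper for a derived property, living next to the
  # fields it derives from; no side effects.
  def total_price(%__MODULE__{quantity: quantity, price: price}) do
    Decimal.mult(price, quantity)
  end
end
```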

We mostly stay away from virtual fields. I think there was one case where we used it to store some derived value computed at the database level.

In context modules.

They are our main domain types, since we don’t transfer data from schemas to pure Ecto-independent data structures.

Schemas are returned from contexts to web tier, and they are used by controllers, resolvers, and views to produce the output. Therefore, they are not an internal implementation detail of contexts.

Note that I’m not suggesting this as a universal pattern, but it’s been working well for the client’s projects, which I’d categorize as small-to-moderately complex.


Oh that is nice, it really cleans up schemas to the bare minimum.


@pedromtavares I don’t find this to be true, though. After all, the whole point of virtual fields is to hold data that is not supposed to be in the database in the first place. But I think I understand the gist of what you meant.

Since you do this at a higher level (service), how are you dealing with cases where you consume data directly from the database, for example a RESTful API? In the product I’m working on right now, to avoid “abusing” contexts too much, we are implementing a serialization protocol that knows how to map/build/transform some of the fields to make them available in our schemas.

Thanks for the reply @sasajuric! This is a very interesting approach, but I have to ask: don’t your contexts get too big over time doing this? I don’t think I would be able to use this in my applications because we have more than one changeset to represent multiple states of editing a resource, and I don’t think I’d like to have it all crammed inside just one module.

Would you mind posting a few code snippets of this architecture?

From what I understood, you need to assign these extra properties on top of a database schema to serve it as a “decorated” API, so I would tackle this by adding something like a decorator service module: you would have, say, post = Repo.get!(Post, id) |> PostDecorator.decorate(). That decorate function would then add the extra properties.
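A sketch of such a decorator module (the reading_time property and its estimate are hypothetical, and this assumes the Post schema declares `field :reading_time, :integer, virtual: true`):

```elixir
defmodule PostDecorator do
  # Fill in derived properties after loading the struct from the Repo.
  def decorate(post) do
    %{post | reading_time: estimate_reading_time(post.body)}
  end

  # Rough words-per-minute estimate, purely illustrative.
  defp estimate_reading_time(body) do
    body |> String.split() |> length() |> div(200)
  end
end

# Usage:
#   post = Repo.get!(Post, id) |> PostDecorator.decorate()
```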

When a context module grows large, we split it and/or extract some related group of private functions into internal private modules which are only used from within the context layer.
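A sketch of that split (module names hypothetical): the nested module is technically reachable from anywhere, but by convention it is only called from within the context layer.

```elixir
defmodule MyApp.Catalog do
  # Public context API.
  def publish_product(product) do
    product
    |> MyApp.Catalog.Pricing.recalculate()
    # ... then persist, dispatch domain events, etc.
  end
end

defmodule MyApp.Catalog.Pricing do
  @moduledoc false # internal: only used from within the Catalog context

  # Placeholder body; real pricing logic would go here.
  def recalculate(product), do: product
end
```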


In Rails it was common at a certain point to have projects where a ton of the “glue code” lived in the models, but for the most part I didn’t find that too much of a problem compared to something like business logic in controllers, which was frighteningly common. I was actually more frustrated when it became “good practice” to keep models lean by adding an /app/services directory and stuffing it with as many arbitrary modules and classes as possible (often with very little apparent reason for the choice), nested multiple levels deep, with key parts of the application flow buried somewhere within. No conventions, no explicit connections to the rest of the domain, just spaghetti code with a veneer of respectability (in OOP, the mistake that more objects == good architecture). Instead, I found it much better to introduce namespaces that grouped models with a lot of communication, and store the glue code there.

That experience makes me leery of principles like “schema modules should contain no logic, not even changeset functions”. But what I like about Phoenix contexts is that they encourage more thoughtful domain design that can make room for this glue code from the beginning, instead of exiling it to some catch-all folder or arbitrarily attaching it to some specific “god” model. At the end of the day, I don’t see anything wrong with putting something like a full_name constructor in the schema if it is only concerned with that schema. To me that’s its natural place in the domain, rather than with the glue code (unless it is only used for display purposes, in which case I would consider just putting it in a view helper).

My first thought here is that this raises the question: what is the responsibility of a “translator”? Probably not dealing with request query params, sure, but joining two db columns into a single string? That sounds like translation to me. In any case, it’s part of the underlying hard problem, and as such there are no easy answers.


We have a Serializer and a Mapper protocol with these base implementations:

defimpl Serializer, for: List do
  def serialize(data) do
    data
    |> Enum.map(&Mapper.map/1)
    |> Jason.encode!()
  end
end

defimpl Mapper, for: Any do
  def map(data), do: data
end

If we need custom conversion logic, we implement the protocol for the target type we want to transform.
This was inspired by something we usually do in C# with AutoMapper to transform Models into DTOs.
The idea here is to eventually evolve this into a decorator or presenter pattern, as you mentioned; it’s one of the things I’m doing in our codebase to help split out logic that depends too heavily on the schema.
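For example, a custom implementation for a hypothetical User schema might flatten the struct into a plain map (a DTO) before it is serialized (the module and fields are assumptions):

```elixir
# Assumes the Mapper protocol above and a User struct
# with first_name/last_name fields; purely illustrative.
defimpl Mapper, for: MyApp.Accounts.User do
  def map(user) do
    %{
      id: user.id,
      full_name: "#{user.first_name} #{user.last_name}"
    }
  end
end
```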

This experience you had is something I can relate to. In the ASP.NET world, long before DDD was cool, this is exactly what happened¹. What you described is how I see Phoenix contexts being abused, tbh. In theory, you could interpret them as a super-layer that exposes everything else to the application.

¹In contrast, nowadays you’ll see a lot of codebases concerned too much with not “coupling” anything to anything else, and this produces almost unmaintainable code.
I once worked on a project where, for each new feature, I had to create 8 or 10 files (layers) with application services, domain services, infrastructure services, DTOs, mappers, etc.


In Programming Ecto there is a chapter called “Optimizing Your Application Design > Separating the Pure from the Impure”, which basically says that pure functions (no side effects) can be placed in schemas, while impure ones (with side effects like database access) belong in contexts. This means that changesets, queries, and multis can go in the schema and the rest in the context.
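A sketch of that split, with hypothetical modules: pure changeset and query builders in the schema, the side-effecting Repo call in the context.

```elixir
defmodule Shop.Order do
  use Ecto.Schema
  import Ecto.Changeset
  import Ecto.Query

  schema "orders" do
    field :status, :string
  end

  # Pure: builds a changeset, touches no external state.
  def changeset(order, attrs) do
    order
    |> cast(attrs, [:status])
    |> validate_inclusion(:status, ["pending", "paid"])
  end

  # Pure: composes a query without executing it.
  def with_status(query \\ __MODULE__, status) do
    from o in query, where: o.status == ^status
  end
end

defmodule Shop do
  # Impure: actually hits the database (assumes a Shop.Repo).
  def paid_orders do
    Shop.Order.with_status("paid") |> Shop.Repo.all()
  end
end
```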

I really like this idea and I’ve been using it a lot and it works well for me, but after reading Saša’s comment I’m eager to try out putting changesets as private functions in the context for this current project I’m working on.


I haven’t read this book, but I was instinctively drawing this line in my side projects. However, I wasn’t making the clever “pure” vs “impure” distinction you described.
I’ll certainly take a look at it. Thanks for the recommendation!

I highly recommend it.

After thinking about what @sasajuric said, I am definitely going to try moving the changeset functions out of the schema and into the contexts. After looking at Ecto.Changeset.prepare_changes/2, I realized that changeset functions can also become impure. That’s not a big deal, but it’s one more reason to move them.