DDD: How far should I go to make code domain-expert-friendly?

Hello everyone,
I’m currently reading the book “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans and developing a toy project to begin to understand what it’s all about (and to implement hexagonal architecture too).

Somewhere in the book, Eric states that a domain expert should be able to read and understand the code. In my understanding, this means that domain-related code should be as little tech-related as possible. But how far should I go down this path?

For example, I’ve got to register a user with a first name and last name. I decided to use Ecto for everything related to validation, so I’ve got a User module which looks like this:

  defmodule Account.User do
    use Ecto.Schema
    import Ecto.Changeset

    schema "users" do
      field(:uuid, Ecto.UUID)
      field(:first_name, :string)
      field(:last_name, :string)
    end

    def changeset(user, attrs) do
      user
      |> cast(attrs, [:first_name, :last_name])
      |> validate_required([:first_name, :last_name])
    end

    def register_changeset(user, attrs) do
      changeset(user, attrs)
      |> put_change(:uuid, Ecto.UUID.generate())
    end
  end
This should be OK as it is some kind of “deep implementation”, but for the API, which should be understandable by a domain expert, what is best?
My first (naive and classic) approach was this one:

defmodule Account do
  alias Account.User
  import Ecto.Changeset

  def register_user(data) do
    %User{}
    |> User.register_changeset(data)
    |> apply_changes()
    |> DataLayer.insert()
  end
end
which I think is too tech-related. Therefore, I tried to be more user-friendly and explicit, and ended up with something like this:

defmodule Account do
  alias Account.User

  def register_user(data) do
    data
    |> User.validate_register()
    |> User.generate_uuid()
    |> User.save()
  end
end

which is easily understandable and exposes all the steps of the data processing, but raises the following question:

  • Does this mean that I need to have a domain-expert-friendly layer and a more technical layer (which will actually do the work)?

And additionally, I wonder if Phoenix contexts are just the right balance between too much coupling and too much “non-tech” code.

What do you think? Can you help me clear my mind?
I understand that this matter is important but I can’t yet express it in code…
Thank you


Yes/No. It depends on how complex your application becomes. As a wise person (I unfortunately do not remember who) once said: “Adding a layer of indirection solves all problems… but one [the problem of having too many layers of indirection]”.

I definitely think that if your application is somewhat larger, your second approach is better (where ‘better’ means more readable and more maintainable). However, adding an expert layer in between does mean your code becomes harder to change (which might be a good or a bad thing): be sure that your modules are set up in such a way that every module has only a single reason to change (the single responsibility principle).

In this case, the Account.UserImplementation module should only change if the technical details change (like the database you use, or what it means, internally, for a user to be valid), whereas Account.User (the expert layer module) should only change once the expert’s desired functionality changes.
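A minimal sketch of that split (the module layout and function bodies here are my own illustration, not code from this thread):

```elixir
# Expert layer: reads like the domain and only changes when the
# expert's desired functionality changes.
defmodule Account.User do
  alias Account.UserImplementation, as: Impl

  def register(form_data) do
    form_data
    |> Impl.validate_registration()
    |> Impl.persist()
  end
end

# Technical layer: only changes when implementation details change
# (the database, what a "valid user" means internally, and so on).
defmodule Account.UserImplementation do
  def validate_registration(form_data) do
    # e.g. build and check an Ecto.Changeset here
    {:ok, form_data}
  end

  def persist({:ok, attrs}) do
    # e.g. call Repo.insert/1 here
    {:ok, attrs}
  end
end
```

Each module now has a single reason to change, and the expert-facing pipeline stays free of Ecto vocabulary.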

I still think you are somewhat mixing the two responsibilities (keep in mind that a lot of this is personal opinion or ‘gut feeling’, so take this advice at face value, not as ‘absolute truth’):

  • generate_uuid() is probably not something an expert understands.
  • validate_register() is a weird name. What about validate_when_registering() or User.validate_registration()?
  • The expert layer is given the impression that the given pipeline can never fail (e.g. ‘saving will always happen’). Deciding what should happen on failure is often something that should be done on the expert layer rather than in the technical layer.
  • data is one of the worst names to give to a parameter, because it is a name that is used for so many different things. What about e.g. form_data?
  • Saving is not something the User datatype should worry about. (This is why Ecto made the difference between Schemas and Repos after all).

So what about this (it’s a toy example, of course):

defmodule Account do
  alias Account.User

  @spec try_register_user(map()) :: {:ok, User.t()} | {:error, Ecto.Changeset.t()}
  def try_register_user(form_data) do
    User.new()
    |> User.fill_registration_details(form_data)
    |> Repo.try_save()
  end
end

(I won’t write out the internal Account.User module here; suffice it to say that User.new fills in server-generated details such as the UUID).
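For completeness, one possible shape of that internal module — a sketch based only on the description above (the actual module was deliberately left out, so take the names and fields as assumptions):

```elixir
defmodule Account.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field(:uuid, Ecto.UUID)
    field(:first_name, :string)
    field(:last_name, :string)
  end

  # Fills in server-generated details such as the UUID.
  def new do
    %__MODULE__{uuid: Ecto.UUID.generate()}
  end

  # Casts and validates the user-supplied registration data.
  def fill_registration_details(user, form_data) do
    user
    |> cast(form_data, [:first_name, :last_name])
    |> validate_required([:first_name, :last_name])
  end
end
```

Note that saving is absent here on purpose: persistence stays in the Repo, as argued above.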

In this simple example, Phoenix contexts are enough to separate technical details from non-tech code (Phoenix Contexts are, after all, modeled after DDD). If things become more complex, you can always introduce another layer of indirection between the expert layer and the lowest technical implementation level.


Related to this, a talk I really like about getting the matter of layers of abstraction ‘just right’ is Writing Quality Code in Erlang.

Yes, the code he talks about is an Erlang example rather than an Elixir one, but it still should be easy to follow along with his train of thought.

I think at the core of this statement was the universal use of “ubiquitous language” for coding artefacts like files, classes, methods, functions, variables, and identifiers. If you don’t already have that “glossary of domain terms” set up before you write the code, it’s going to be difficult for the domain expert to have something to relate to in your code.

Is Layering Harmful? (1992; pdf)


I’ve never formally done DDD, but I do recall a couple of times, when I contracted at a small insurance company in the Netherlands, I got to pair program with a domain expert (an actuary, so someone reasonably technical I’d say). We were working in Smalltalk and the codebase was quite well written, which in Smalltalk usually means you end up coding in something very close to a DSL. It was a joy; we got shit done in minutes which would have taken weeks through the “official” routes, and I’ve always kept it in my bag of tricks.

But I’ve noticed this direct interaction with “users” is rare, for better or for worse, so basically my test for “lift up the language to the domain level, then solve your problem there” (nothing new, a Lisp adage from the '70s I think) is “can I directly read what’s happening and translate that into a user’s language”.

To me, keeping related stuff together (a “flow” with all its steps and validations) is more important there than having it written so that I can print it out and hand it off. In Ecto terms: at least keep the whole changeset business and its use pretty close together, or make sure they get called in a very obvious way. I’m fine having bits of Ecto shine through, I’ll make that translation, but I’m not fine having to dig around for snippets of code that make up “user registration”.

(incidentally, that’s why I always loved Seaside, because that framework allows you to do just that, in a very natural way; I have hopes that LiveView might go that way too).


Yes, I believe Phoenix is at a balanced point. It has some concepts similar to DDD while keeping the code compact enough (as opposed to tens of microservices with Kubernetes…).

In terms of DDD, there are core domains and supporting domains. Simply put, you could apply DDD only to those bounded contexts whose domain is more valuable, more complex, and keeps evolving. Otherwise, for simple stuff, you can keep it more technical and compact.

A straightforward way of modeling in DDD is event storming. It’s about finding domain events that describe what happens, from the domain expert’s perspective, like user_registered(name, credential...). But messing around with events is more of an architectural change; mostly you can just describe things with commands instead. A command is like an event but can also be as simple as a function call; in your case, the function register_user is good enough.

The key is to think of the public API, like register_user, as an independent product instead of a part of the implementation, so that other contexts (or the web layer) can only access the context through that API. The API consists of all the business use cases (events or commands) related to this context. The underlying implementations are free to be filled in and can be independently swapped.
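As a rough sketch of that idea (names hypothetical): the public API exposes the use case as a command, and the result can still be described in terms of a domain event:

```elixir
defmodule Account do
  # Command: a plain function call named after the business use case.
  # Other contexts (or the web layer) only ever call this API.
  def register_user(name, credential) do
    # ...validation and persistence are implementation details
    # hidden behind this function, and freely swappable...
    {:ok, {:user_registered, %{name: name, credential: credential}}}
  end
end
```

Callers see only the use-case vocabulary; whether the implementation is a changeset, an event store, or something else is invisible from the outside.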


Thank you for all your answers.
The strange thing is that I understand a little bit better where I’m supposed to go, and at the same time I haven’t moved at all!

What I understand:

  • Know your domain: when you can talk about specifically defined parts with others, then naming, splitting, etc. will be almost already done.
  • Know your project: don’t over-engineer a simple project just to be supple/expert-friendly. Refactoring is a mandatory step anyway (but keep some kind of separation of concerns in mind…).
  • Know your tools: and integrate that knowledge into domain modeling. Example: I know that I will use an RDBMS and a graph database; modeling for the first is completely different from the latter (and it is way simpler to talk about the latter with non-tech people).
  • Know yourself: Business Experience / Technical knowledge can’t be forgotten in the process of modeling

The best thing for me will be to go down the path I wanted to avoid because I find it time-consuming: Agile.
Writing user stories is close to writing scenarios from the domain expert’s point of view, can easily help the famous Ubiquitous Language emerge, and provides a good basis for separation of concerns and process definition.
It’s a shame I don’t have any expert working with me on the current project, as I’ve experienced how easy it is to develop a good (and clean) product when an expert is available and involved (and at that time, I wasn’t aware of DDD but in fact used some similar concepts without knowing their names).

Now, time to work (a lot)!
