How about "DTOs" on controllers? (Data Transfer Objects)

In our app, we are trying to apply a hexagonal architecture. We have the concept of internal schemas, generally Ecto embedded schemas or schemaless changesets that perform some validation or data transformation. It's easier to transport internal data with this logic attached.

The workflow:

  • App receives an incoming message (API, Kafka)
  • Controller uses an Adapter to parse the message into an internal schema
  • Controller passes this schema to the context
  • Render response

However, we have some problems:

  • We have several external providers (Kafka, other APIs, the frontend) that can send different fields.
  • We would need a way to create different versions of a schema for each API.
  • We would like to move this schema-based validation closer to the edge of our web layer.

The idea is somewhat the same as a DTO (Data Transfer Object), but it would be only flat maps, used just to validate and maybe reject invalid incoming messages.

Does anyone do something similar to this? Does this make sense at all?


Your question is somewhat unclear. Providing more context, and especially your coding / implementation woes, would help in understanding the actual challenge.

Sure, but don’t your internal schemas need to have a unified format? If not, then you just have N versions of similar logic – kinda verbose but nothing unheard of.

Meaning? In Javascript? If not, then nothing stops you from having Ecto changesets do all the work just before rendering a response – which, granted, will require a slight change in how you do things, but if you are implying that this is a lot of work then I admit I can’t see it.

Basically every other project I ever participated in. Extremely normal.


Ok, so let’s take this model, which is what is persisted in the database:

defmodule Entity do
  use Ecto.Schema
  import Ecto.Changeset

  schema "entity" do
    # ...
  end

  def changeset(entity, params) do
    # ...
  end
end

We could receive data for this entity from two different providers, so let’s create an adapter to convert from params to internal data, plus an internal schema that only does calculations and data transformations (this can be skipped if no complex transformation flow is required):

defmodule EntityAdapter do
  def params_to_internal_schema(params) do
    # ...
  end
end

defmodule InternalEntity do
  # do calculations and data transformation
end
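
For illustration, the stub above could be filled in with a simple mapping, e.g. (field and key names here are assumptions):

def params_to_internal_schema(params) do
  # map the provider's fields onto the internal shape (hypothetical names)
  {:ok, %{name: params.field_a, source: :provider_a}}
end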

So we reach the application edge, where data comes in. Now we create a “schema” used only for validation, perhaps with schemaless changesets, one for each provider:

defmodule ProviderASchema do
  import Ecto.Changeset

  @types %{field_a: :string}

  def changeset(params) do
    fields = Map.keys(@types)

    {%{}, @types}
    |> cast(params, fields)
    |> validate_required(fields)
    |> apply_action(:parse)
  end
end

And the second one:

defmodule ProviderBSchema do
  import Ecto.Changeset

  @types %{field_b: :string, field_c: :integer}

  def changeset(params) do
    fields = Map.keys(@types)

    {%{}, @types}
    |> cast(params, fields)
    |> validate_required(fields)
    |> apply_action(:parse)
  end
end
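
Assuming the modules above compile, calling them returns either a validated plain map or an error changeset:

ProviderASchema.changeset(%{"field_a" => "hello"})
#=> {:ok, %{field_a: "hello"}}

ProviderBSchema.changeset(%{"field_b" => "hello"})
#=> {:error, #Ecto.Changeset<...>} (field_c is required)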

On the controller it would look like this:

defmodule ProviderAController do
  use Phoenix.Controller

  action_fallback FallbackController

  # the idea: if validation fails, FallbackController should handle it,
  # so params arrive in the action already validated
  @enforce_schema ProviderASchema
  def create(conn, params) do
    # the adapter flow can be skipped if params can just be persisted
    with {:ok, internal_schema} <- EntityAdapter.params_to_internal_schema(params),
         {:ok, entity} <- Context.create_entity(internal_schema) do
      render(conn, "success.json", entity: entity)
    end
  end
end

And then create a ProviderBController following the same logic.
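
Since @enforce_schema doesn’t exist in Phoenix, maybe it could be realized as a plug that runs the schema and hands failures to the fallback. A rough sketch (the plug name is made up):

defmodule EnforceSchemaPlug do
  import Plug.Conn

  def init(schema), do: schema

  def call(conn, schema) do
    case schema.changeset(conn.params) do
      {:ok, validated} ->
        # replace the raw params with the validated map
        %{conn | params: validated}

      {:error, changeset} ->
        # hand the error to the fallback controller and stop the pipeline
        conn
        |> FallbackController.call({:error, changeset})
        |> halt()
    end
  end
end

And in the controller: plug EnforceSchemaPlug, ProviderASchema when action in [:create].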

It’s somewhat similar to Rails strong params? :thinking:
Did that make it clearer?

Your structure is not conventional, and it looks like You are trying to do Ruby in Elixir, but Elixir is not Ruby.

Changesets are the usual way to deal with DTOs, and You might have many in the same schema. For example, one for Kafka, one for another provider, etc. Each one with custom validation and casting rules.

Is there a reason to have business logic in the controller?


Yeah, I feel this structure is far-fetched and kinda verbose, while business logic is spread across the whole app. This structure was the main suggestion I received.
I was trying to search for more about this case (actually, I don’t even know how to name this problem) and how it can be solved in functional programming languages.
I’ll try to focus on changesets.

What do you think about this article?

Could you suggest any books or articles on how to better understand different ways of doing architecture in the functional land? Actually, I see many go with event-sourcing and/or CQRS and I would like to study more about them.

You might find this highly related :slight_smile:

and

I prefer CQRS/ES because I think it fits well with signals and processes.

This book, although in F# rather than Elixir, is on my recommendation list


You have way too many modules. Basically, when you have 2 or more shapes of data coming in that you want to parse and validate into your own internal data structure, you just have 2 or more *_changeset functions in your Ecto.Schema module. That’s where I’d start.
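
For illustration, with made-up field names, that could look like:

defmodule Entity do
  use Ecto.Schema
  import Ecto.Changeset

  schema "entity" do
    field :name, :string
    field :amount, :integer
  end

  # one set of cast/validation rules per source of incoming data
  def kafka_changeset(entity, params) do
    entity
    |> cast(params, [:name, :amount])
    |> validate_required([:name, :amount])
  end

  def api_changeset(entity, params) do
    entity
    |> cast(params, [:name])
    |> validate_required([:name])
  end
end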

And I am with @kokolegorille here: you’re trying to build a big Ruby on Rails app (or Java “enterprise” app) in Elixir, which will only serve to make the app harder to evolve and change.

Start with dropping some weight off of the app IMO.


The biggest takeaway I’ve personally found implementing these sorts of patterns in Elixir (or probably in any functional language) is the concept of “functional core, imperative shell”. I would go so far as to say that it is the overriding principle in your classic hexagonal architecture, CQRS included (especially in Commanded aggregates).

“Functional core, imperative shell” as a concept does not require as much ceremony as one would think. Certainly nothing on the level of classic OOP languages, although there are a couple of discussion points to be had.
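
A toy sketch of the idea (all module, function, and field names here are hypothetical): the core is pure functions over plain data, and side effects live in a thin shell around it.

defmodule Core.Discount do
  # functional core: pure data transformation, trivially testable
  def apply_discount(%{total: total} = order, percent) do
    %{order | total: total - total * percent / 100}
  end
end

defmodule Shell.Orders do
  # imperative shell: all I/O (DB reads and writes) happens here
  def discount_order(order_id, percent) do
    order = Repo.get!(Order, order_id)
    updated = Core.Discount.apply_discount(order, percent)
    Repo.update!(Order.changeset(order, %{total: updated.total}))
  end
end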

One such argument is the place of persistence and transactionality in your core application. Storage is such a key component of the vast majority of applications that it is intrinsically linked to the core. Proponents point out that this is a serialisation process, an imperative shell that needs to be completely isolated from the core system. The book Designing Elixir Systems with OTP: Write Highly Scalable, Self-Healing Software with Layers by James Edward Gray, II and Bruce A. Tate pushes it to a different layer, as implied in the table of contents:

  • Assemble Your Components
    • Add Persistence as a Boundary Service

In this case, your core systems have no idea about database structures, and this requires an encoding/decoding layer that you’ll have to write. While somewhat tedious, it is “technically” correct, and if there’s one word to summarise the process of implementing an application with a “functional core, imperative shell”, tedious could be it.
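
A minimal sketch of such a boundary, assuming the core has its own struct separate from the Ecto schema (all names hypothetical):

defmodule Core.Entity do
  # the core's own representation, free of any database concerns
  defstruct [:name]
end

defmodule Boundary.EntityStore do
  # only this module knows how the core struct maps to the Ecto schema
  def save(%Core.Entity{name: name}) do
    Repo.insert(%Entity{name: name})
  end

  def load(id) do
    case Repo.get(Entity, id) do
      nil -> nil
      %Entity{name: name} -> %Core.Entity{name: name}
    end
  end
end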

Maybe you don’t really need all this and you place transactionality as part of the domain itself. This certainly makes the rules simpler. In that case, you can model your code in the “light” style of functional core that contexts give you. An excellent primer to get your head into this zone is a series of articles by Sasa Juric:

Just don’t use it as the “source of truth” for all things related to code structure and standards. It’s a set of very well explained decisions, but it would be wrong to treat it as the “canonical way” to build Elixir applications.


You might think of your data first, and of the functions that transform it. Functions are first-class citizens.

input → function → output

If You can remember the input type and output type of your functions, You can build pipelines of composable functions.

Then, You might think of where You want to add concurrency to your application, because You are on the BEAM.

This is not mandatory if You use Phoenix, because it’s done for You.

In Your case…

%AnyStruct{}
|> AnyStruct.changeset(dto)
|> Repo.update()  # or Repo.insert()

#=> {:ok, %AnyStruct{}} | {:error, changeset}