Value Object and Primitive Obsession code-smell

What is your take about the “primitive obsession” code smell in Elixir?

Do you prefer to create structs that wrap native values, or that wrap other values like Decimal.t() that are a bit more complex?

Or do you try to use the primitive values as much as you can?

I am particularly interested in those situations where you only have one value in your struct, for example:


defmodule Something do
  # value is either a native type or some other struct
  @enforce_keys [:value]
  defstruct [:value]
end

# another example

defmodule Amount do
  @enforce_keys [:value]
  defstruct [:value] 

  def new(value) do
    %__MODULE__{value: Decimal.new(value)}
  end
end

Recently, coming back to revisit my code, I found myself refactoring a bit and introducing more value objects, which brings more clarity compared to the old code.

Some of the value objects (structs) are there to lift up the meaning of the native type or nested structs, while others check invariants behind the factory function to make sure I don’t have garbage values.

What are your general thoughts about the topic?

You will take a performance hit, and a readability hit. IMO the only reason to do this is if you have some sort of common strategy that leverages the module associated with the struct, e.g. a protocol, or something where you are doing:

# `%module{}` binds the struct's module name, enabling dynamic dispatch
def some_function_that_takes_data_interface(%module{} = value) do
  module.operation(value)
end

Presumably, somewhere you have structs that get sent to the above function that have more than one field, in addition to the one-field structs.
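To make the protocol variant concrete, here is a minimal sketch (the `Printable` protocol and `Meters` struct are invented names, not from the thread) of a common strategy dispatching over the module behind each value-object struct:

```elixir
# Hypothetical protocol over value-object structs.
defprotocol Printable do
  @doc "Human-readable rendering of a value object."
  def render(value)
end

defmodule Meters do
  @enforce_keys [:value]
  defstruct [:value]
end

defimpl Printable, for: Meters do
  def render(%Meters{value: v}), do: "#{v} m"
end
```

With this in place, `Printable.render(%Meters{value: 500})` returns `"500 m"`, and adding a new value object only requires another `defimpl`.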


By readability hit, I mean both in terms of your code and in terms of when you’re debugging and inspecting stuff, not to mention if your error logs somehow happen to have Erlang formatting instead of Elixir formatting… Structs are not pretty in Erlang formatting. I guess I feel a better strategy to lift up the meaning of primitive data types is to use good variable names. I once had a project where my linter would reject my code if I had generic variable names like “value”, “result”, “x”, etc. I would like to have a static linter some day that can infer types and will require that certain variables have suffixes, e.g. _id must be a UUID and vice versa.


In the general case I am completely with you here.

Just definitely don’t obsess. There are many cases where just passing integers and strings is completely fine.

But I find myself using value objects for another reason as well: libraries like domo / typed_struct can help you with better enforcing contracts at runtime plus give you better error messages in case of a violation.
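As a rough sketch of that point (assuming `typed_struct` as a Mix dependency; the version and field shape are illustrative, not from the thread), the `Amount` struct could be declared like this:

```elixir
# Hypothetical sketch: requires {:typed_struct, "~> 0.3"} in mix.exs deps.
defmodule Amount do
  use TypedStruct

  # generates @enforce_keys, defstruct and @type t :: %__MODULE__{...} in one go
  typedstruct enforce: true do
    field :value, Decimal.t()
  end
end
```

domo layers runtime contract checking on top of such type definitions, which is where the better error messages come from.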


Can you post an example of code that uses this approach? In isolation, it doesn’t seem like this is anything more than a Decimal with extra steps.

There are two main cases though:

  1. Clarity of the code:
defmodule Amount do
  @moduledoc """
  Value object representing a money amount in the context of fintech.
  """

  @type t :: %__MODULE__{value: Decimal.t()}

  @enforce_keys [:value]
  defstruct [:value]

  @doc """
  Creates a new `t:t/0` object.

  ## Examples

      iex> amount = Amount.new(20_000)
      ...> Amount.value(amount)
      20_000
  """
  @spec new(Decimal.decimal()) :: t()
  def new(value) do
    %__MODULE__{value: Decimal.new(value)}
  end

  @doc """
  Returns the value of the `t:t/0`.

  ## Examples

      iex> amount = Amount.new(20_000)
      ...> Amount.value(amount)
      20_000
  """
  @spec value(t()) :: integer()
  def value(amount) do
    Decimal.to_integer(amount.value)
  end

  @doc """
  Check if the given amount is negative.

  ## Examples

      iex> amount = Amount.new(-20_000)
      ...> Amount.negative?(amount)
      true

      iex> amount = Amount.new(20_000)
      ...> Amount.negative?(amount)
      false
  """
  @spec negative?(t()) :: boolean()
  def negative?(amount) do
    Decimal.negative?(amount.value)
  end
end

In this case, it doesn’t seem to have much value, but I would argue that you shouldn’t care whether I am dealing with a Decimal.t() or a string under the hood.

  2. Hide invariants:
defmodule PositiveAmount do
  @moduledoc """
  Value object representing an `Amount` that must be positive.
  This object is useful when you want to safely convert signed values
  to unsigned values.
  """

  alias Amount

  @type t :: %__MODULE__{value: Amount.t()}

  @enforce_keys [:value]
  defstruct [:value]

  @doc """
  Creates a new `t:t/0` object.

  ## Examples

      iex> {:ok, amount} = PositiveAmount.new(20_000)
      ...> PositiveAmount.value(amount)
      20_000

With a negative amount:

      iex> PositiveAmount.new(-20_000)
      {:error, :negative_amount}
  """
  @spec new(amount :: Amount.t() | Decimal.decimal()) :: {:ok, t()} | {:error, :negative_amount}
  def new(%Amount{} = amount) do
    if Amount.negative?(amount) do
      {:error, :negative_amount}
    else
      {:ok, %__MODULE__{value: amount}}
    end
  end

  def new(value) do
    value
    |> Amount.new()
    |> new()
  end

  @doc """
  Returns the value of the `t:t/0`.

  ## Examples

      iex> {:ok, amount} = PositiveAmount.new(20_000)
      ...> PositiveAmount.value(amount)
      20_000
  """
  @spec value(t()) :: integer()
  def value(amount) do
    Amount.value(amount.value)
  end
end

In this case, the value object doesn’t exist simply to lift up the meaning and hide the underlying type, but also to make sure the invariant checks are enforced, so people don’t have to figure out what it means to be a “PositiveAmount”.

I hope that helps.


As far as I can remember, we ended up with the following value objects after tackling “Primitive Obsession” and “Parse, Don’t Validate.”

Also, some value objects may not make sense to you, probably because you are missing the context and the why behind them. In the end, structs are just a way to give a map an identity; such an identity allows you to make assumptions about it. Sometimes they are valuable due to invariants, and other times they are beneficial because of the assumptions you can make instead of validating all over the place. Context > Consistency.

  • Currency
  • Money
  • Amount
  • TransferAmount
  • PositiveAmount
  • CreditAmount
  • DebitAmount

What to avoid is being too obsessive about creating wrapping functions instead of calling Map directly; the focus should primarily be on factory functions that ensure invariants above everything else, and less on hiding fundamental values. I wish I could find Rich Hickey’s video in which he talks about this.

I might not be reading you well, but why is “Parse, Don’t Validate” an anti-pattern?


I am not sure what I wrote or how it could be interpreted! Maybe I misspoke or said it backward… I meant to say, “It is good to parse instead of validate.” :sob: I’ve changed the wording, just in case; let the reader decide what the anti-pattern is :sob: English is too hard sometimes

Value objects are there to enforce constraints and, in DDD-land, business invariants. The point is to ensure that you have valid data and structures at all times. It also makes it MUCH easier to ensure you’re always working with that data by declaring your struct dependencies in those modules.

I use them everywhere in my game I’m building, including for IDs. IDs might seem like a bit of an “obsessive” approach, until you make one simple mistake where you pass the wrong id as an argument, and poison data. Do that once, and you very quickly see the value in having IDs as value objects.

But not everything needs to be. Value objects have their real value in encapsulating logic that validates the values they’ll hold. They’re also good for pairing related data together, such as an Address or a phone number (country code, area code, number, etc.). You can’t have a valid phone without those details, so grouping them together makes a lot of sense.

Ask me how I know :sob: … nowadays I have a VO for IDs as well
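To illustrate the ID point, here is a minimal, dependency-free sketch (the `AccountId`, `TransferId`, and `Accounts` names are invented for illustration). Pattern-matching on the struct turns “passed the wrong id” into a loud FunctionClauseError instead of silently poisoning data:

```elixir
defmodule AccountId do
  # Hypothetical value object wrapping a UUID string.
  @enforce_keys [:value]
  defstruct [:value]

  @type t :: %__MODULE__{value: String.t()}

  @uuid_regex ~r/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/

  # Parse, don't validate: garbage never becomes an AccountId.
  @spec new(String.t()) :: {:ok, t()} | {:error, :invalid_id}
  def new(value) when is_binary(value) do
    if Regex.match?(@uuid_regex, value) do
      {:ok, %__MODULE__{value: value}}
    else
      {:error, :invalid_id}
    end
  end
end

defmodule TransferId do
  @enforce_keys [:value]
  defstruct [:value]
end

defmodule Accounts do
  # Accepts only an AccountId; a TransferId raises FunctionClauseError.
  def close(%AccountId{value: id}), do: {:closed, id}
end
```

Handing `Accounts.close/1` a raw string or a `%TransferId{}` fails immediately at the function head, which is exactly the mistake this pattern is meant to catch.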

The problem I encounter everywhere is that the VOs are Ecto schema structs, which, wherever they’re used, couples otherwise storage-agnostic logic to the database structure.

As far as I can tell, Ecto.Schema and Ecto.Changeset aren’t about the database per se. I understand what you are saying, and people should definitely be careful, but the argument doesn’t quite hold water. Ecto.Changeset is about validation, Ecto.Schema is about structs and mappings, and Ecto.Type is a shared interface for marshalling and unmarshalling.
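For instance, Ecto.Type is exactly that marshalling interface. A sketch (assuming the `Amount` value object from earlier in the thread and `ecto` as a dependency; `AmountType` is an invented name):

```elixir
# Hypothetical Ecto.Type for the Amount value object.
# Requires {:ecto, "~> 3.0"} and {:decimal, "~> 2.0"} in mix.exs deps.
defmodule AmountType do
  use Ecto.Type

  # underlying storage representation
  def type, do: :decimal

  # external/user input -> value object
  def cast(%Amount{} = amount), do: {:ok, amount}
  def cast(value) when is_integer(value) or is_binary(value), do: {:ok, Amount.new(value)}
  def cast(_), do: :error

  # storage value -> value object
  def load(%Decimal{} = decimal), do: {:ok, %Amount{value: decimal}}
  def load(_), do: :error

  # value object -> storage value
  def dump(%Amount{value: value}), do: {:ok, value}
  def dump(_), do: :error
end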

Ecto.Repo and Ecto.Query, maybe, would be more about the actual DB.

That being said, :man_shrugging:

Free yourself of this mentality before it’s too late.


They aren’t, and you can have separate schemas and changesets for your forms! In small apps I never feel this is worth it, but we do this at work with more gnarly forms.

Gotta call this one out, as changesets are not just about validation but also about, ahem, changes. Sure, they go hand-in-hand, but so many Phoenix apps do a lot of parameter manipulation in the controller/LiveView layer before passing params to a changeset. I know it happens in many other apps too, evident from the number of people who ask how to deal with maps that may have string or atom keys… you don’t! You use the changeset API!
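For example, a schemaless changeset (field names invented; assumes `ecto` as a dependency) casts string-keyed params into typed, atom-keyed changes, so no manual key juggling is needed:

```elixir
# Sketch assuming {:ecto, "~> 3.0"} in mix.exs deps.
import Ecto.Changeset

types = %{name: :string, amount: :integer}

# cast/3 handles untrusted params (string keys) against declared types
changeset =
  {%{}, types}
  |> cast(%{"name" => "coffee", "amount" => 450}, Map.keys(types))
  |> validate_number(:amount, greater_than: 0)

changeset.changes
```

The resulting `changes` map has atom keys and cast values, regardless of how the raw params were keyed.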

K, rant over.


Please elaborate - I don’t see why? How do you otherwise ensure you get the appropriate structures/values whenever you depend on them?

My reply was a link to a very detailed answer to your question. But, briefly, you ensure they are appropriate as you consume them.

The transition state between two valid states is often itself an invalid state. If you approach this with the mentality that the state must be valid “at all times”, whatever straitjacket of a data structure you cook up to achieve this will necessarily be incapable of representing transitions. And then it is you who will be cooked, I’m afraid.

My experience has proven otherwise, for over a decade. If I understand you correctly, you’re saying that if I accept an argument, I then validate that argument in every function that expects it? Why would you do that, when you can do that in one place, and one place only, using the value object pattern?

I also read the article, it’s front-end focused. To that point, I largely agree with the author, and it was a good read. However, we’re not talking about frontend, here - we’re talking about value objects on the backend. Perhaps I’m missing something, but let’s take the ID argument above - how would you handle that without a value object implementation in elixir?

IMHO, expecting a particular struct/value type solves the problem almost entirely. At the very least, when dealing with IDs from Ecto, you’ll never end up passing the wrong id to a function, say a relationship that expects that value.


I never spoke against this “value object pattern”. I only suggested that you free yourself of the mentality that data must be “valid” at all times, because that mentality will hold you back. Especially if you’re writing games, which are like frontend on steroids.

I think it would be easy to poke some holes in the value object idea (what if I hand you %Id{id: "evil nonsense"}), but I don’t want to do that because in the limit I think it would be an argument against type systems, and I don’t actually have a problem with type systems, so long as they are not accompanied by the aforementioned mentality.

If you have read and enjoyed the article then you understand the problem and my work is done, I have nothing else to add :slight_smile:

You didn’t yourself make the point, but the article you linked is fairly polemical. It calls out an entire class of BE devs en masse and accuses them not only of not being well-equipped to handle types of FE complexity, but also poorly designing their own systems by insisting so much on validation. I don’t think the article does a great job backing up that argument. This passage seems to be the most direct treatment, and it’s still fairly vague:

The trigger vs memo problem also happens on the back-end, when you have derived collections. Each object of type A must have an associated type B, created on-demand for each A. What happens if you delete an A? Do you delete the B? Do you turn the B into a tombstone? What if the relationship is 1-to-N, do you need to garbage collect?

If you create invisible objects behind the scenes as a user edits, and you never tell them, expect to see a giant mess as a result. It’s crazy how often I’ve heard engineers suggest a user should only be allowed to create something, but then never delete it, as a “solution” to this problem. Everyday undo/redo precludes it. Don’t be ridiculous.

Maybe I’m missing the real point here, but this doesn’t sound like an average BE type problem. If this is a standard parent-child relation, then there are plenty of patterns that don’t involve forbidding deletes, baked into most DBs in fact (do cascading deletes count as “garbage collection”?). Maybe it’s caching, which, yes, we are aware can be quite complex. But then I’m not sure how it relates to the issue of “intent” at all, since the user is not involved at that point.

The closest BE issue that comes to mind would be robustness in API design. Of course we want to try to preserve the user’s intent as much as possible, and even when we end up running into problems it’s good to be able to preserve that intent in the system so we can provide good feedback. But at the end of the day, I think modern FE is just really complex, with tons of user input being rapidly provided, so obviously validation is less useful and the balance is going to be a lot different compared to a web API.

But then at the end of the article the author seems to imply that internal APIs (BE/FE splits) are themselves a bad, or at least obsolete, idea, so…
