Is there a way to automatically preload an association after an insert in Ecto?

Hello everyone!

Is there any magic to automatically preload an association after an insert in Ecto? I'm doing:

case ComponentType.changeset(%ComponentType{}, attrs)
     |> Repo.insert() do
  {:ok, component_type} ->
    {:ok, Repo.preload(component_type, [:vendor])}

  {:error, changeset} ->
    {:error, changeset}
end

But I was looking for something like

ComponentType.changeset(%ComponentType{}, attrs)
  |> Repo.insert()
  |> Repo.preload([:vendor])

My schema looks like

@primary_key {:id, :binary_id, autogenerate: true}
@foreign_key_type :binary_id
@derive {Phoenix.Param, key: :id}
schema "component_types" do
  field :name, :string

  belongs_to :vendor, Vendor
  has_many :components, Component

  timestamps type: :utc_datetime
end

@required_fields [:vendor_id, :name]

@spec changeset(t(), map()) :: Ecto.Changeset.t(t())
def changeset(component_type, attrs) do
  component_type
  |> change()
  |> cast(attrs, @required_fields)
  |> foreign_key_constraint(:vendor_id)
  |> validate_required(@required_fields)
end

Thanks a lot!

You’d have to make yourself a helper function:

def preload({:ok, entity}, preloads) do
  {:ok, Repo.preload(entity, preloads)}
end

def preload({:error, error}, _preloads) do
  {:error, error}
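
With a helper like that in scope, the pipeline from the original post works as wished (a sketch; it assumes the preload/2 helper is defined in, or imported into, the calling module):

# Pipe the {:ok, _} / {:error, _} tuple from Repo.insert straight into the helper
ComponentType.changeset(%ComponentType{}, attrs)
|> Repo.insert()
|> preload([:vendor])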

I personally don’t think the extra indirection is worth it. I always do it with pattern matching like in your example. If you wanted to tighten it up you could use with; this is equivalent to your example:

changeset = ComponentType.changeset(%ComponentType{}, attrs)

with {:ok, component_type} <- Repo.insert(changeset) do
  {:ok, Repo.preload(component_type, [:vendor])}
end

@sodapopcan thanks! Yeah, I was hoping to find a flag or something that lets me automatically preload the association, but that’s fine.

This is how I did it on a recent project.
There’s one difference, though, and I’m curious for opinions. I do:

with {:ok, component_type} <- Repo.insert(Type.changeset(params)) do
  {:ok, Repo.preload(component_type, [:vendor])}
end

and then in type I would have something like

def changeset(params, struct \\ %__MODULE__{}) do
  struct
  |> change(params)
  |> (&if(is_nil(&1.data.id), do: put_change(&1, :created_at, DateTime.utc_now()), else: &1)).()
  |> ...

This way I unified the changeset for create and update, so I can reuse the changeset function for Repo.update, for instance. But maybe I shouldn’t do that :sweat_smile:


OMG what an abomination! At least extract it to a private function called maybe_put_created_at. People do it for such conditional changes all the time.
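
Such a helper might look like this (a sketch; the new-record check via a nil id and the DateTime.utc_now() value are assumptions, not from the post above):

# Illustrative: only set :created_at when the struct hasn't been persisted yet
defp maybe_put_created_at(%Ecto.Changeset{data: %{id: nil}} = changeset) do
  put_change(changeset, :created_at, DateTime.utc_now())
end

defp maybe_put_created_at(changeset), do: changeset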


I consider something that breaks the line an abomination, although I crank line_length: 135 :joy:


If it’s really just about using created_at over inserted_at, you can change that in your migrations:

timestamps(inserted_at: :created_at)

and then in your schema:

@timestamp_opts [inserted_at: :created_at]

For convenience, you can configure migrations to do this automatically:

config :my_app, MyApp.Repo, migration_timestamps: [inserted_at: :created_at]

…now you can just do timestamps(). Then make a custom schema module to use:

defmodule MyApp.Schema do
  defmacro __using__(_) do
    quote do
      use Ecto.Schema
      @timestamp_opts [inserted_at: :created_at]
    end
  end
end
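
A schema could then start with use MyApp.Schema instead of use Ecto.Schema (a sketch; it assumes the __using__ block also calls use Ecto.Schema so @timestamp_opts is picked up, and the module name is illustrative):

defmodule MyApp.ComponentType do
  use MyApp.Schema

  schema "component_types" do
    field :name, :string

    # timestamps/0 reads @timestamp_opts, so inserted_at is stored as :created_at
    timestamps()
  end
end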

Otherwise I agree with @dimitarvp. Extract a maybe_ function if you’re going to do stuff like that. Scannability is important! Although if you’re the only one reading your code, you can do as you wish, of course.


Ya, I’ve wanted something like that in the past but I grew to appreciate the status quo. I think preloading deserves an extra call-out and we should be careful about when and where we do it. That said, if Repo.insert were to get a :preload option, would I use it? Ya, probably!


It’s not about created_at, it’s about having one changeset function for a given struct. Usually insert requires fields, update doesn’t. Also, in what way is my anonymous function not scannable? It’s one line of code that reads like a sentence. Moving it to a function makes it a simpler sentence to read, but it’s less scannable in that I need to scan more with my eyes to get the whole picture. But I do get what you are saying: my pursuit of less code, more functionality could hurt readability/scannability in the long run.

In that case having create_changeset/1 and update_changeset/2 functions could be nicer.
Another interesting approach is doing casting/validations in the context function as described here with the create_post/3 function.
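
Split out, that might look something like this (a sketch, reusing the field names from the schema earlier in the thread):

# Create requires all fields; update casts the same fields without requiring them
def create_changeset(attrs) do
  %__MODULE__{}
  |> cast(attrs, [:vendor_id, :name])
  |> validate_required([:vendor_id, :name])
end

def update_changeset(struct, attrs) do
  cast(struct, attrs, [:vendor_id, :name])
end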


It goes without saying that this is all subjective, but I’m saying it anyway :sweat_smile:

You’re confusing scannable with readable. Good scannable code is about pattern recognition, and in many simple cases you shouldn’t even need to read the whole line, hence “scanning”. For this to work you need to stay idiomatic with the language. I’m gonna go out on a limb and say there is nothing even remotely idiomatic about piping into a self-invoking anonymous function.

But ya, if you and your team decide this is ok and it becomes a ubiquitous pattern then that’s all well and good. I’m just responding to your request for feedback!