Change my mind: Migrations in a start phase

# in my mix.exs

  def application do
    [
      mod: {My.Application, []},
      start_phases: [{:migrate, []}],
      extra_applications: [:logger, :runtime_tools]
    ]
  end
# in lib/my/application.ex

  def start_phase(:migrate, _start_type, _phase_args) do
    # Match on the result so boot crashes if migrations fail,
    # instead of silently continuing with an unmigrated database.
    {:ok, _, _} = Ecto.Migrator.with_repo(My.Repo, &Ecto.Migrator.run(&1, :up, all: true))
    :ok
  end

Works every time™

6 Likes

Interesting!

I think the main downside compared to having the migrator in your supervision tree is that it doesn’t let you control where the migration happens relative to other items in your supervision tree. For example, we have basically:

[
  libcluster_child(),
  Sensetra.Endpoint,
  {Absinthe.Subscription, Sensetra.Endpoint},
  Sensetra.Repo,
  Sensetra.Repo.Migrator,
  Sensetra.Ingestion.Super,
  # ... other children
  DeploymentNotifier
]
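
The thread doesn't show `Sensetra.Repo.Migrator` itself, but the usual way to build a "run migrations, then let the rest of the tree start" child is a GenServer whose `init/1` does the work and then returns `:ignore`, so the supervisor only proceeds to the next child once migrations have finished. A minimal sketch, with the module name taken from the list above and everything else illustrative:

```elixir
defmodule Sensetra.Repo.Migrator do
  @moduledoc "One-shot child: runs pending migrations, then exits the tree."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts) do
    # Sensetra.Repo is started earlier in the child list, so we can
    # run migrations against it directly here.
    Ecto.Migrator.run(Sensetra.Repo, :up, all: true)

    # Returning :ignore means no process is kept around; the supervisor
    # simply moves on to the next child once init/1 returns.
    :ignore
  end
end
```

Because `init/1` runs synchronously during supervisor startup, every child listed after the Migrator is guaranteed not to start until migrations are done.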

This ordering matters: by the time Ingestion.Super starts, the Migrator has already run, so any database changes Ingestion.Super relies on have definitely happened.

You’ll also note that I start the Endpoint pretty early. The trick is that the /alive path returns true while the /ready path returns false, which tells Kubernetes the pod is alive and running but not yet ready to receive traffic. The pod can then take its time to run migrations and get the process tree up, and finally the DeploymentNotifier child sets an application environment value so that /ready returns true.
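
Concretely, the /alive vs /ready split can be done with a small plug plus a flag in the application environment. A sketch under those assumptions (the module names, the `:ready?` key, and the `:sensetra` app name are all illustrative, not from the post):

```elixir
defmodule Sensetra.HealthPlug do
  @moduledoc "Liveness and readiness endpoints for Kubernetes probes."
  import Plug.Conn

  def init(opts), do: opts

  # /alive: the BEAM is up, even if migrations are still running.
  def call(%Plug.Conn{path_info: ["alive"]} = conn, _opts) do
    conn |> send_resp(200, "ok") |> halt()
  end

  # /ready: only true once DeploymentNotifier has flipped the flag.
  def call(%Plug.Conn{path_info: ["ready"]} = conn, _opts) do
    if Application.get_env(:sensetra, :ready?, false) do
      conn |> send_resp(200, "ok") |> halt()
    else
      conn |> send_resp(503, "not ready") |> halt()
    end
  end

  def call(conn, _opts), do: conn
end

defmodule Sensetra.DeploymentNotifier do
  @moduledoc "Last child in the tree: if this starts, everything above it did too."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts) do
    Application.put_env(:sensetra, :ready?, true)
    :ignore
  end
end
```

Placing DeploymentNotifier last means the readiness flag only flips after every other child has started successfully.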

9 Likes

One reason would be long-running migrations, which effectively block each instance of your app from starting up.

Which is not necessarily to say don’t do it, but be careful what you run there. Some migrations are better run live, like ones that transform data, backfill a column with a default, etc.

2 Likes