Squashing Schema Migrations

It’s worth mentioning that if you need to run migrations from within a release, you have to rework the commands and watch out for the file paths.

Whenever we do a major release, we delete all the migration files, dump the database structure to a file (e.g. structure.sql), and use mix ecto.load to load the structure into the database. That way we keep a history of the migrations, albeit through git, and a clean migrations directory. :slight_smile:
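For reference, that workflow can be sketched roughly like this (assuming a standard Mix project with the default priv/repo layout; adjust paths and repo name to yours):

```shell
# Dump the current schema to priv/repo/structure.sql
mix ecto.dump

# Commit the dump, then delete the old migration files
git rm priv/repo/migrations/*.exs

# On a fresh database, load the structure instead of running migrations
mix ecto.load
```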

I like that idea.

For the record, how do you reference the path to that file? In the past I’ve gotten tripped up referencing paths in a way that’s compatible with releases…

We only keep one file, which is sufficient for us, in the default directory (priv/YOUR_REPO/structure.sql).
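In case it helps with the release question: inside a release, relative paths like priv/... no longer resolve from the project root, so the usual trick is to resolve the priv directory through the loaded application. A minimal sketch (assuming your OTP app is :my_app and the file lives under priv/repo/):

```elixir
# Resolves to the priv dir of the compiled application,
# which works both in dev and inside a release.
path = Application.app_dir(:my_app, "priv/repo/structure.sql")

# Equivalent Erlang-flavoured version:
path = :my_app |> :code.priv_dir() |> Path.join("repo/structure.sql")
```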

I was researching this topic again today because we now have 166 migration files, and sometimes that leads to debugging sessions which are more complicated than necessary (e.g. a search vector updated in the database via an SQL function, called via a trigger, etc.).

I stumbled upon this, which seems nice:

Curious to know if anyone tried it already!


We have come up with the following strategy to deal with squashing migrations on a periodic basis.

  1. Run mix ecto.dump or the equivalent database command
  2. Rename the generated structure.sql file to something that better communicates that it represents a snapshot of the database structure at a certain point in time, e.g. snapshot-2023-11-10.sql
  3. Remove the old migrations (celebrate if you wish)
  4. Create a new migration that references the dump file from above. This helps avoid frustration when developers unfamiliar with the process have to run yet-another-mix-command to bootstrap the database. They may run into problems getting the psql CLI installed, for example. You can write a simple migration that parses the dump file like this:
defmodule MyApp.LoadStructureSqlMigration do
  use Ecto.Migration

  require Logger

  def change do
    # :my_app is your OTP application name
    :my_app
    |> :code.priv_dir()
    |> Path.join("snapshot-2023-11-10.sql")
    |> File.read!()
    |> String.split(";\n")
    |> Enum.reject(&(String.trim(&1) == ""))
    |> Enum.each(fn statement ->
      Logger.debug("Executing: #{statement}")
      execute(statement)
    end)
  end
end
Splitting on ;\n will not work in some cases. Here for instance.

But I bet you can just execute the whole script at once.

IIRC there are errors if you try to execute more than one statement at a time – that’s why we ended up with the split on ; thing.

Good reminder though: some dump files may not be so easily parsed.
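To make that caveat concrete, here is a small sketch (plain Elixir, with a hypothetical dump snippet) showing how a naive ;\n split cuts a PL/pgSQL function body apart, since function bodies legitimately contain ;\n inside their $$ quoting:

```elixir
dump = """
CREATE FUNCTION bump() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TABLE posts (id serial);
"""

# The function body gets split into fragments that are
# not valid SQL statements on their own.
String.split(dump, ";\n")
```

A more robust route is to feed the whole file to psql (or mix ecto.load), which understands dollar-quoting.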


Yes, but my opinion is biased, as I led the development of the library :smiley: