Ecto.load as migration?

There is an ecto.load task to load the initial structure of the store. I was curious whether there is any migration DSL for loading the structure, something like:

load_structure "path/to/sql_dump"

ATM I am resorting to this:

defmodule Repo.Migrations.StructureInit do
  use Ecto.Migration
  import Mix.Ecto

  def up do
    Repo.__adapter__.structure_load(source_repo_priv(Repo), Repo.config)
  end

  def down do
    raise "Irreversible"
  end
end
This works, but I have to expose Repo and import Mix.Ecto internals into the migration script, which is not as nice as the rest of the migrations that use the proper DSL.

The point of mix ecto.load is to be used in combination with mix ecto.dump. The dump also includes the contents of the schema_migrations table, which tracks which migrations have run. The point of dump is to act as a snapshot of the migrations that have already run. So, rather than going through migrations 1, 2, 3, you can go from 0 to 3 directly without the intermediary steps. Since the point of dump and load is to skip migration steps, it doesn’t make sense to me that you’d use a migration to run load.
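For context, that dump/load cycle is just two Mix tasks; the file location shown in the comments is Ecto's default and may differ in your project:

```
# Snapshot the current schema plus the schema_migrations rows
# into priv/repo/structure.sql (the default dump location):
mix ecto.dump

# On a fresh database, restore the snapshot in one step instead of
# replaying every migration, then apply anything newer:
mix ecto.load
mix ecto.migrate
```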

The project I am working on was spawned from an existing project that had a gazillion migrations. The decision was made (I was not involved in it) to do a SQL dump and run ecto.load for the newly spawned project, then create additional migrations going forward. The schema_migrations table was scratched. So in essence the initial structure is one big migration, accompanied by a set of smaller ones.

ecto.load as a migration can be rather convenient if your project did not use Ecto’s migrations from the start (yes, horror, I know, but those projects do exist) and you do not want to go back and create migrations retroactively. The migration DSL does not support all flavors of SQL, so there is a cop-out: execute "<SQL statement>". To me this means load_structure is not so crazy a request. I can come up with a dozen other ways this might be useful, and circumstances in which load_structure/execute_bulk could be a solution.


Maybe I’m misunderstanding you but I would just dump the entire DB structure in a SQL file (many DBs can do that with one command), without the data, and just execute that schema-creating SQL in the first official migration.

Then I would go the extra mile and manually create reversing SQL for the migration’s down function – this is just a list of statements to drop the tables / views.

From then on you can resume normal work.
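One way that first migration and its reverse might look, sketched with hypothetical table names standing in for the real dump (note that each statement goes in its own execute call):

```
defmodule MyApp.Repo.Migrations.InitialSchema do
  use Ecto.Migration

  def up do
    # One execute per statement; Postgres rejects multi-statement
    # strings sent as a single prepared query.
    execute "CREATE TABLE users (id bigserial PRIMARY KEY, email text)"
    execute "CREATE TABLE posts (id bigserial PRIMARY KEY, user_id bigint REFERENCES users(id))"
  end

  def down do
    # Drop in reverse dependency order.
    execute "DROP TABLE posts"
    execute "DROP TABLE users"
  end
end
```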

I strongly advise against using undocumented APIs. They might disappear without warning at any time.

I think you want to use this public API instead:

You cannot scratch the schema_migrations table and should keep track of which of the old migrations have already been applied through the dump & load. This should allow you to seamlessly pick up any new migrations, until you do another dump & load cycle, which would then again have to add the newly applied migrations to the schema_migrations table. builds on top of that to run this from your production DB and to also extract seed/lookup tables, indexes and permission/roles.

Trying to do something similar and am wondering if anyone has any pointers?

Essentially, we’re migrating another project with an existing Postgres database to Phoenix. My goal is to load our existing database dump in as migration 0, then to use migrations going forward. Right now I have a schema-only dump, and eventually I’d like to take a production dump of our data as well and modify migration 0 to massage our incoming data, after which point we’ll use migrations both in development and in production. Right now I’m doing:

defmodule Scribe.Repo.Migrations.InitializeSchema do
  use Ecto.Migration

  def change do
    execute "CREATE EXTENSION IF NOT EXISTS \"pgcrypto\""
    {:ok, sql} = File.read("priv/repo/structure.sql")
    execute sql
  end
end

My SQL loads fine if I import it manually, but this migration gives me:

** (Postgrex.Error) ERROR 42601 (syntax_error) cannot insert multiple commands into a prepared statement

I’m guessing execute is only meant for single statements, not a full SQL script? Is that correct? If so, how can I run this script as part of my first migration?
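Yes, execute is sent as a single prepared statement, and Postgres refuses multiple commands in one prepared statement. One workaround is to split the script into individual statements and run execute on each. Here is a naive splitter (the module name is made up; it breaks on semicolons inside string literals, dollar-quoted function bodies, and comments, so it only suits simple schema dumps):

```elixir
defmodule SqlScript do
  # Split a SQL script into individual statements on semicolons.
  # Naive: does not understand semicolons inside quoted strings,
  # dollar-quoted bodies, or SQL comments.
  def statements(script) do
    script
    |> String.split(";")
    |> Enum.map(&String.trim/1)
    |> Enum.reject(&(&1 == ""))
  end
end

# Then, inside the migration:
#
#   "priv/repo/structure.sql"
#   |> File.read!()
#   |> SqlScript.statements()
#   |> Enum.each(&execute/1)
```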

I suppose I could do ecto.load, but I’d really like a straight migration path if possible (i.e., mix ecto.migrate does everything, so we have the greatest compatibility with buildpacks, deployment tooling, etc.).

