Hi Friends, I have a question on Ecto dynamic migrations.
I am adding a new column (slug) to an existing table, articles. I want to make it non-nullable, but before I can do that I have to set slugs for the existing articles. So here’s what the up migration looks like:
def up do
  alter table(:articles) do
    add :slug, :text
  end

  flush()

  Repo.all(from a in Article)
  |> Enum.map(fn a ->
    slug = Article.slugify_title(a.title)
    Ecto.Changeset.cast(a, %{slug: slug}, [:slug])
  end)
  |> Repo.update_all([])

  flush()

  alter table(:articles) do
    modify :slug, :text, null: false
  end

  flush()

  create unique_index("articles", [:slug])
end
However, when I run the above migration, I get the following error:
protocol Ecto.Queryable not implemented for [#Ecto.Changeset<action: nil, changes: %{slug: "how-to-query-with-ecto-1"}, errors: [], data: #ConduitElixir.Articles.Article<>, valid?: true>] of type List. This protocol is implemented for the following type(s): Ecto.Query, Atom, Ecto.SubQuery, Tuple, BitString
Not sure why I am getting this error, since I do have a list of changesets before I try to call Repo.update_all(). Any insights appreciated. Thank you!
I was trying to replicate what’s explained in the accepted answer to this SO question:
Not sure if that answer is still relevant after 4 years.
Can you guide me on how to do what I am trying to do, please? I could nuke the DB and add the slug column in an existing migration, but that’s certainly not how non-nullable columns are added in production.
You have the right idea, but you’ll need to call Repo.update on each changeset (probably via Enum.each).
Also, you shouldn’t depend on an externally defined Ecto schema in a migration (and possibly not on externally defined functions either): if you change the schema in the future, the migration might break because it is no longer compatible with the newer version of the schema.
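Putting both points together, a minimal sketch of that approach might look like the following. The migration module name, the inlined Article schema, and the slugify/1 helper are all hypothetical stand-ins (the inlined helper replaces your Article.slugify_title/1 so the migration doesn’t depend on app code), and the per-row Repo.update! replaces the Repo.update_all call:

```elixir
defmodule ConduitElixir.Repo.Migrations.AddArticleSlugs do
  use Ecto.Migration
  import Ecto.Query

  # Hypothetical repo module; adjust to your application.
  alias ConduitElixir.Repo

  # A minimal schema defined inside the migration itself, so later
  # changes to the app's Article schema can't break this migration.
  defmodule Article do
    use Ecto.Schema

    schema "articles" do
      field :title, :string
      field :slug, :string
    end
  end

  def up do
    alter table(:articles) do
      add :slug, :text
    end

    flush()

    Repo.all(from a in Article)
    |> Enum.each(fn a ->
      a
      |> Ecto.Changeset.cast(%{slug: slugify(a.title)}, [:slug])
      |> Repo.update!()
    end)

    flush()

    alter table(:articles) do
      modify :slug, :text, null: false
    end

    create unique_index("articles", [:slug])
  end

  # Inlined so the migration doesn't call Article.slugify_title/1.
  # A naive sketch; it does not de-duplicate colliding slugs.
  defp slugify(title) do
    title
    |> String.downcase()
    |> String.replace(~r/[^a-z0-9\s-]/, "")
    |> String.replace(~r/\s+/, "-")
  end
end
```

Note that Enum.each with Repo.update! issues one UPDATE per row, which is fine for small tables but slow for large ones.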
Best I can determine, that answer was wrong 4 years ago and it’s still wrong now. Repo.update_all/3 has been present in Ecto all the way back to 1.x, but it has never accepted a list of changesets.
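For reference, Repo.update_all takes a queryable plus update expressions, not changesets. If you did want a single-statement backfill, you could push the slug computation into SQL via a fragment, e.g. something like this sketch (regexp_replace is PostgreSQL-specific, and the regex is only a rough approximation of a slugify function):

```elixir
import Ecto.Query

query =
  from a in "articles",
    where: is_nil(a.slug),
    update: [
      set: [
        slug: fragment("regexp_replace(lower(?), '[^a-z0-9]+', '-', 'g')", a.title)
      ]
    ]

Repo.update_all(query, [])
```

This runs as one UPDATE statement in the database, which avoids loading every article into the application.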
Potentially-slow operations like “write to every article row” don’t belong in a migration, IMO - here’s how I’ve done migrations like this in production:
1. deploy a migration that adds a nullable slug column and a unique index
2. deploy the code that starts writing values in slug
3. run code to backfill values into slug (via background jobs, or from the console, YMMV). This may take a long time, or you may want to artificially slow it down to keep the database load from spiking
4. deploy a migration that makes the column not-null. This should be fast, as the column is indexed and has no nulls
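The two migrations in those steps are small; a sketch, with hypothetical module names:

```elixir
# Step 1: add the nullable column and the unique index.
defmodule MyApp.Repo.Migrations.AddSlugToArticles do
  use Ecto.Migration

  def change do
    alter table(:articles) do
      add :slug, :text
    end

    create unique_index(:articles, [:slug])
  end
end

# Step 4: deployed later, once the backfill has finished and
# no NULL slugs remain.
defmodule MyApp.Repo.Migrations.MakeArticleSlugNotNull do
  use Ecto.Migration

  def change do
    alter table(:articles) do
      modify :slug, :text, null: false
    end
  end
end
```

Keeping the slow backfill out of the migration path means deploys stay fast and the migration lock on articles is held only briefly.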