Load the Phoenix context in a migration, e.g. URL helpers

I just wrote this data migration:

defmodule Vae.Repo.Migrations.SubmitAspApplications do
  use Ecto.Migration
  import Ecto.Query, only: [from: 2]

  def up do
    query = from(a in Vae.Application, where: not is_nil(a.submitted_at))
    Enum.map(Vae.Repo.all(query), &Vae.Application.maybe_autosubmit/1)
  end

  def down do
  end
end
But when I try to run it, I get the following error:

** (ArgumentError) argument error
    (stdlib) :ets.lookup(Vae.Endpoint, :__phoenix_url__)
    (phoenix) lib/phoenix/config.ex:45: Phoenix.Config.cache/3
    web/router.ex:1: Vae.Router.Helpers.application_url/4
    web/emails/application_email.ex:16: Vae.ApplicationEmail.delegate_submission/1
    web/models/application.ex:47: Vae.Application.submit/2
    web/models/application.ex:67: Vae.Application.maybe_autosubmit/1
    (elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
    (elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
    (stdlib) timer.erl:197: :timer.tc/3
    (ecto) lib/ecto/migration/runner.ex:25: Ecto.Migration.Runner.run/6
    (ecto) lib/ecto/migrator.ex:128: Ecto.Migrator.attempt/6
    (ecto) lib/ecto/migrator.ex:72: anonymous fn/4 in Ecto.Migrator.do_up/4
    (ecto) lib/ecto/adapters/sql.ex:576: anonymous fn/3 in Ecto.Adapters.SQL.do_transaction/3
    (db_connection) lib/db_connection.ex:1283: DBConnection.transaction_run/4
    (db_connection) lib/db_connection.ex:1207: DBConnection.run_begin/3
    (db_connection) lib/db_connection.ex:798: DBConnection.transaction/3
    (ecto) lib/ecto/migrator.ex:261: anonymous fn/4 in Ecto.Migrator.migrate/4
    (elixir) lib/enum.ex:1327: Enum."-map/2-lists^map/1-0-"/2
    (ecto) lib/mix/tasks/ecto.migrate.ex:83: anonymous fn/4 in Mix.Tasks.Ecto.Migrate.run/2
    (elixir) lib/enum.ex:769: Enum."-each/2-lists^foreach/1-0-"/2

So it fails at Helpers.application_url(Endpoint, :show, application).

My coworker tells me it is because the Phoenix context is not loaded during the migrations.

But then, how can I make sure my data is in the correct state, and how shall I send emails?

Can I get the phoenix context in the migrations? Or am I doing it all wrong?

Thanks a lot!

You run your migrations; that brings the data into the correct shape.

What? Sending emails from migrations? Migrations are there to update the database schema to match the definitions in your application, not to send out emails…

Thanks for quick reply.

Forget the email thing, it is not my point here: I’m asking how can I call Helpers.application_url(Endpoint, :show, application) within a migration?

You can, in theory, force your application to start during the migration by using Application.ensure_started/1 (IIRC), that will start your application and all of its dependencies.

But this again might also cause your application to access the database while it is in an invalid state.

It is better not to rely on your application during migrations. At least that is my opinion, and in no way authoritative.


I have tried that, but I still get the error:

  def up do
    case Application.ensure_all_started(:phoenix) do
      {:ok, _rest} ->
        query = from(a in Vae.Application, where: is_nil(a.submitted_at))
        Enum.map(Vae.Repo.all(query), &Vae.Application.maybe_autosubmit/1)

      {:error, _msg} ->
        nil
    end
  end

You do not want to start phoenix, at least not primarily, you want to start :your_app.

I would try to get away with putting the URL in question in a configuration value – or if you don’t feel comfortable hardcoding it, you can put it in ETS – and use that in the migrations.
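A minimal sketch of the configuration idea. The `:base_url` key is an assumption (it does not appear in the original code); the point is that plain config needs nothing from Phoenix to be running:

```elixir
# Hypothetical entry in config/config.exs:
#   config :vae, :base_url, "https://myapp.example.com"
# Simulated here so the snippet is self-contained:
Application.put_env(:vae, :base_url, "https://myapp.example.com")

# In the migration, build the URL from config instead of the router helpers:
base_url = Application.get_env(:vae, :base_url)
application_url = fn id -> "#{base_url}/applications/#{id}" end

IO.puts(application_url.(42))
# => https://myapp.example.com/applications/42
```

The trade-off: the path segment (`/applications/`) is now duplicated outside the router, so if the route changes, this migration silently builds stale URLs – which is acceptable for a one-off data fix.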

Oh, and never use Ecto schemas in migrations. Later your data structure will be different, and your CI (or a new hire’s machine) will fail the migration.

Either use schemaless Ecto (look for the Schemaless queries section), or use plain SQL.
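A schemaless sketch of the migration above, referencing the table by name so that later changes to the `Vae.Application` schema module cannot break it (table and column names are inferred from the thread; this assumes the data fix can be expressed as a bulk update rather than per-row application logic):

```elixir
defmodule Vae.Repo.Migrations.SubmitAspApplications do
  use Ecto.Migration
  import Ecto.Query, only: [from: 2]

  def up do
    # Query the "applications" table directly, not through the schema module.
    from(a in "applications", where: is_nil(a.submitted_at))
    |> Vae.Repo.update_all(set: [submitted_at: DateTime.utc_now()])

    # Plain-SQL alternative, equivalent in effect:
    #   execute("UPDATE applications SET submitted_at = now() WHERE submitted_at IS NULL")
  end

  def down do
    # Irreversible data change; nothing sensible to undo here.
  end
end
```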


Thanks for those extra infos.

I get the whole “don’t load any app logic in migrations” thing, since the code will change across commits while we don’t know when a migration will be run. I prefer a rule of “nothing” over “this feature of Phoenix you can use, but not that one”.

Coming from the Rails world, this feels like a restriction, but it may avoid issues I’ve had in the past.

But then how would you handle the following logic:

  • In my Application model, I have a submitted_at attribute, which is user-triggered and sends an email.
  • A subset of my Applications will be auto-submitted during the creation process.
  • But then I need to make sure all existing Applications from this subset are in the correct state of being submitted.
  • How would you send the emails regarding those applications at this stage?

In Rails, I’ve always felt the need for a distinction between schema migrations and data migrations: the former is simply about the schema, while the latter updates the data and could, in this case, load the app context.

There is a gem that does this, but it should be native IMHO.

Do you reckon the same should apply here in Phoenix?

Other solutions I see feel dirty or over-complicated:

  • cron job to send emails looking at newly submitted applications
  • curling the app to trigger the action
  • ?

Thanks for the help

Even with the distinction between schema and data migrations in mind, I do not see why sending an email should be done in a migration, which should only ever run once (or, even worse, might end up sending emails multiple times a day from a CI system).

I do not know anything about your application, its schema or its purpose. But if I need to send an email, I create a job, and perhaps a DB entry describing it, let the job run as a process in the application, and after it has succeeded I mark the job description in the database as done.

Some other process checks the job descriptions in the database for stale ones and re-runs them as necessary.

No migrations involved.


I don’t think I am being clear.

The migration is there to auto-submit existing Applications, to catch them up with the future ones that will be auto-submitted during creation.

So I need to set submitted_at to those applications. And my app process sends an email after submission. Where should I handle this logic?

Regarding CIs, hopefully the code will be executed in a different environment (test?) where emails are handled differently (not sent, or redirected).

Leave the field “empty”, NULL or whatever.

Let your application discover that the state is “faulty” and let it repair it by sending out the mails.
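One way to sketch that self-healing approach. Everything here is an assumption for illustration: the `Vae.SubmissionRepair` module name, and an extra boolean column `submission_email_sent` that records whether the mail went out, so the repair can be re-run safely:

```elixir
defmodule Vae.SubmissionRepair do
  import Ecto.Query, only: [from: 2]

  # Run periodically (or at boot) inside the application, where the
  # full Phoenix context — Endpoint, router helpers, mailer — is available.
  def run do
    from(a in Vae.Application,
      where: not is_nil(a.submitted_at),
      where: a.submission_email_sent == false
    )
    |> Vae.Repo.all()
    |> Enum.each(fn application ->
      Vae.ApplicationEmail.delegate_submission(application)

      # Mark as done only after the mail was handed off, so a crash
      # mid-run just leaves the row to be picked up next time.
      application
      |> Ecto.Changeset.change(submission_email_sent: true)
      |> Vae.Repo.update!()
    end)
  end
end
```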

Yes, of course, they should… But no one guarantees it. You might have bugs in the code that swaps out the mail-sending part…

It doesn’t really matter whether we have Phoenix or Rails. It is just a question of how you design your application and the deployment process.

Like @NobbZ said, it is usually a disadvantage to execute business logic or data changes in migrations for a number of reasons; here are a few:

  • Models and schemas might change in the future. Your migrations won’t be able to run unless you change them as well.
  • Production environments usually have much more data than your dev machine or CI. I’ve often seen developers push a data change that would run on production for hours.
  • Danger of invoking a use case multiple times. Imagine you are deploying, then performing some action for a batch of users, and there is an exception in the middle. If you don’t keep track of which users you already processed, the next time you invoke the script you will process those users again.

The Ruby gem you posted, while it looks nice and convenient, has the same flaws. I would not recommend using it in our team. It is much safer to have a mix task (rake task) and execute it manually in a controlled way.
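A mix task for the case in this thread could be sketched like this. The task name is an assumption; the module and function calls follow the ones in the thread. Run it once per environment with `mix vae.submit_asp_applications`:

```elixir
defmodule Mix.Tasks.Vae.SubmitAspApplications do
  use Mix.Task
  import Ecto.Query, only: [from: 2]

  @shortdoc "Auto-submits existing applications (one-off data change)"

  def run(_args) do
    # Start the whole application, so Phoenix config, the Repo and the
    # mailer are all available — unlike inside a migration.
    Mix.Task.run("app.start")

    from(a in Vae.Application, where: is_nil(a.submitted_at))
    |> Vae.Repo.all()
    |> Enum.each(&Vae.Application.maybe_autosubmit/1)
  end
end
```

Unlike a migration, this runs under your control, can be re-run after a partial failure, and can simply be deleted once the data is fixed.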


Yeah, you should not really be doing that in Rails either. It is the same principle: in migrations, just execute SQL, either written by hand or written using the framework-provided DSL. If you have to migrate some data using helpers/models/modules/classes of said application, create a separate script and run it once you have deployed the new version of the code, then get rid of the script and do not pollute your migrations with stuff that is going to break in the future.


Thanks all for your useful feedback.

I went for the mix task, which fits my need in this case. Though one must not forget to run it in each environment.