defmodule Vae.Repo.Migrations.SubmitAspApplications do
  use Ecto.Migration
  import Ecto.Query, only: [from: 2]

  def up do
    query = from a in Vae.Application, where: not is_nil(a.submitted_at)
    Enum.map(Vae.Repo.all(query), &Vae.Application.maybe_autosubmit/1)
  end

  def down do
  end
end
But when I try to run it, I get an error.
You run your migrations, and this will bring the data into the correct shape.
What? Sending emails from migrations? Migrations are meant to update the database schema to match the definitions in your application, not to send out emails…
You can, in theory, force your application to start during the migration by using Application.ensure_all_started/1; that will start your application and all of its dependencies.
But this, again, might cause your application to access the database while it is in an invalid state.
It is better not to rely on your application during migrations. At least that is my opinion, and in no way authoritative.
def up do
  case Application.ensure_all_started(:phoenix) do
    {:ok, _rest} ->
      query = from a in Vae.Application, where: is_nil(a.submitted_at)
      Enum.map(Vae.Repo.all(query), &Vae.Application.maybe_autosubmit/1)

    {:error, _msg} ->
      nil
  end
end
I would try to get away with putting the URL in question in a configuration value – or, if you don’t feel comfortable hardcoding it, you can put it in ETS – and use that in the migrations.
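A minimal sketch of the configuration approach (the :vae app name and the :email_base_url key are assumptions, not anything from your actual codebase):

# config/config.exs — a hypothetical key for the URL used in emails
config :vae, :email_base_url, "https://example.com"

# read it wherever it is needed, including inside a migration
base_url = Application.get_env(:vae, :email_base_url)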
Oh, and never use Ecto schemas in migrations. Later your data structure will be different, and your CI (or a new hire’s machine) will fail the migration.
Either use schemaless Ecto (look for the Schemaless queries section of the Ecto docs), or use plain SQL.
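For example, the up function above could drop the schema like this (a sketch; it assumes the underlying table is called applications):

def up do
  # Schemaless query: reference the table by name and select plain maps,
  # so the migration no longer depends on the Vae.Application module.
  query =
    from(a in "applications",
      where: not is_nil(a.submitted_at),
      select: %{id: a.id, submitted_at: a.submitted_at}
    )

  # ...work with the resulting plain maps, or use execute/1 with raw SQL...
  Vae.Repo.all(query)
end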
I get the whole “don’t load any app logic in migrations” thing, since the code moves along with commits while we don’t know when a migration will be run. I prefer to rely on nothing at all rather than on “this feature of Phoenix you can use, but that one you can’t”.
Coming from the Rails world, this feels like a restriction, but it may avoid issues I’ve had in the past.
But then how would you handle the following logic:
In my Application model, I have a submitted_at attribute; submitting is user-triggered and sends an email.
For a subset of my Applications, submission will happen automatically during the creation process.
But then I need to make sure all existing Applications in this subset are in the correct state, i.e. submitted.
How would you send the emails for those applications at this stage?
In Rails, I’ve always felt the need for a distinction between schema migrations and data migrations: the former are simply about the schema, while the latter update the data and could, in this case, load the app context.
There is a gem that does this, but it should be native IMHO.
Do you reckon the same should apply here in Phoenix?
Other solutions I see feel dirty or over-complicated:
a cron job that sends emails by looking at newly submitted applications
Even with the distinction between schema and data migrations in mind, I do not see why sending an email should be done in a migration, which should only ever run once (or, even worse, might end up sending emails multiple times a day from a CI system).
I do not know anything about your application, its schema or its purpose. But if I need to send an email, I create a job, and perhaps a DB entry describing it, let the job run as a process in the application, and after it has succeeded I mark the job description in the database as done.
Some other process checks the job descriptions in the database for stale ones and re-runs them as necessary.
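A minimal sketch of that pattern (the email_jobs table, its state column and send_submission_email/1 are all made-up names):

defmodule Vae.EmailJobs do
  import Ecto.Query, only: [from: 2]

  # One row per email to send; enqueue this wherever the data change happens.
  def enqueue(application_id) do
    Vae.Repo.insert_all("email_jobs", [%{application_id: application_id, state: "pending"}])
  end

  # Runs inside the application (not in a migration): send each pending
  # email, then mark the row as done so a re-run skips it.
  def run_pending do
    pending =
      from(j in "email_jobs",
        where: j.state == "pending",
        select: %{id: j.id, application_id: j.application_id}
      )

    for job <- Vae.Repo.all(pending) do
      :ok = send_submission_email(job.application_id)

      from(j in "email_jobs", where: j.id == ^job.id)
      |> Vae.Repo.update_all(set: [state: "done"])
    end
  end

  # Hypothetical: however your application actually sends the email.
  defp send_submission_email(_application_id), do: :ok
end

A periodic process can then call run_pending/0 again to pick up stale “pending” rows, which covers the re-run-as-necessary part.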
It doesn’t really matter if we have Phoenix or Rails. It is just a question of how you design your application and the deployment process.
Like @NobbZ said, it is usually a disadvantage to execute business logic or data changes in migrations, for a number of reasons; here are a few:
Models and schemas might change in the future, and your migrations won’t be able to run unless you change them as well.
Production environments usually have much more data than your dev machine or CI. I’ve often seen developers push a data change that ends up running on production for hours.
The danger of invoking a use case multiple times. Imagine you are deploying, then performing some action for a batch of users, and there is an exception in the middle. If you don’t keep track of which users you have already processed, the next time you invoke the script you will process those users again (see the sketch after this list).
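For that last point, a sketch of what keeping track can look like (the processed_at column and do_the_one_off_work/1 are hypothetical):

import Ecto.Query, only: [from: 2]

# Only pick up rows that were not processed yet, so re-running the
# script after a crash does not touch the same users twice.
unprocessed = from(u in "users", where: is_nil(u.processed_at), select: %{id: u.id})
now = DateTime.utc_now() |> DateTime.truncate(:second)

for user <- Vae.Repo.all(unprocessed) do
  do_the_one_off_work(user.id)

  from(u in "users", where: u.id == ^user.id)
  |> Vae.Repo.update_all(set: [processed_at: now])
end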
The Ruby gem you posted, while it looks nice and convenient, has the same flaws; I would not recommend using it in our team. It is much safer to have a mix task (the Elixir equivalent of a rake task) and execute it manually in a controlled way.
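A sketch of such a task, reusing the query and maybe_autosubmit/1 from earlier in the thread (the task name itself is made up):

defmodule Mix.Tasks.Vae.AutosubmitApplications do
  use Mix.Task
  import Ecto.Query, only: [from: 2]

  @shortdoc "One-off: auto-submit existing applications and send their emails"

  def run(_args) do
    # Boot the whole application, so Vae.Repo, the mailer, etc. are running.
    Mix.Task.run("app.start")

    query = from a in Vae.Application, where: is_nil(a.submitted_at)
    Enum.each(Vae.Repo.all(query), &Vae.Application.maybe_autosubmit/1)
  end
end

You would run it once, by hand, with mix vae.autosubmit_applications after deploying, and delete the task once it has done its job.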
Yeah, you should not really be doing that in Rails either. It is the same principle: in migrations, just execute SQL, either written by hand or via the framework-provided DSL. If you have to migrate some data using helpers/models/modules/classes of the application, create a separate script, run it once you have deployed the new version of the code, then get rid of the script. Do not pollute your migrations with stuff that is going to break in the future.