You can run them when your application starts, as proposed by @LostKobrakai. Alternatively, you can add your own script to the bin directory that you then use to run migrations and start the release. You can do this by adding a step to your mix.exs.
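A minimal sketch of what such a step could look like — the app name `:my_app`, the `copy_migrate_script/1` step, and the `rel/migrate_and_start.sh` path are all placeholders to adapt to your project:

```elixir
# mix.exs (illustrative names)
def project do
  [
    app: :my_app,
    version: "0.1.0",
    releases: [
      my_app: [
        # :assemble builds the release; the extra step copies our script into bin/
        steps: [:assemble, &copy_migrate_script/1]
      ]
    ]
  ]
end

defp copy_migrate_script(release) do
  # release.path points at the assembled release directory
  File.cp!(
    "rel/migrate_and_start.sh",
    Path.join([release.path, "bin", "migrate_and_start.sh"])
  )

  release
end
```

The script itself could then simply call `bin/my_app eval "MyApp.Release.migrate()"` followed by `bin/my_app start`.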
Imo the “steps” abstraction is more powerful and composable. One could probably build a small purpose-built library that handles all the dirty work of such boot hooks and plugs its assembly step into the steps listed in mix.exs.
So is it recommended to run the migrations as a mix task during deployment, separately from starting the application, or should I run the function that does the migrations on every application start-up?
I honestly don’t see much harm in doing the latter, but I may be missing something.
I wonder whether it could lead to race conditions in a distributed setup. But, well, maybe in such a setup one does not restart all nodes at the exact same time anyway.
I think migrations run wrapped in a transaction (one transaction per migration, if I recall correctly), but yes, that’s a concern, since I can imagine situations where things go sideways with multiple application instances starting up.
Yes, I can confirm that this approach can go sideways. Unwinding the resulting mess as a consultant pays the bills, but I’d rather spend the time creating new stuff. Much better to separate applying migrations from app execution.
To be fair, thinking about it now, I’d say running migrations on app start-up and running them as a post-deploy hook on every instance are equally bad. So this problem can also happen with distillery hooks triggering the migrations.
Kinda late (well, quite late actually), but I’ve marked @josevalim’s post as the solution.
We use AWS ECS (Fargate).
In our production code we have tried:
Create a migrator module (which is part of the actual application modules, so it ships in the release) and call its entry-point function right before starting the application, via the eval command in the release script.
This basically follows the instructions in the latest Phoenix guide provided in the solution.
It assumes the migration table is locked, so even if multiple nodes exist, each migration actually runs only once.
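For reference, the migrator module from the Phoenix guide looks roughly like this (assuming the OTP app is `:my_app`; `MyApp.Release` is just the conventional module name):

```elixir
defmodule MyApp.Release do
  @app :my_app

  # Entry point, invoked via `bin/my_app eval "MyApp.Release.migrate()"`
  def migrate do
    load_app()

    for repo <- repos() do
      # Ecto takes a lock on the schema_migrations table, so concurrent
      # invocations from multiple nodes should not apply a migration twice.
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
```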
Eventually we changed the process to invoke a sole-purpose, one-off “migrator” ECS Task that performs the migrations before the ECS Service Tasks are actually updated.
Now migrations are attempted exactly once per deploy, and if they fail, the entire deploy process stops. No duplicated migration invocations, no dependency on migration locks.