Equivalent of Distillery’s boot hooks in mix release? (Elixir 1.9)

Distillery has Boot Hooks, which can be used to, e.g., run Ecto migration scripts before application startup during deployment.

I am basically following the “Running Migrations” guide in Distillery, wiring the task onto a pre-start hook.

Now that Elixir 1.9 and mix release are out, is there a similar mechanism?

Couldn’t you run them as part of application startup, but before you start handling user input (e.g. before starting the Phoenix endpoint)?
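For illustration, a minimal sketch of that idea (the module names `MyApp.Repo`, `MyAppWeb.Endpoint`, and the `Ecto.Migrator` call are my assumptions, not something prescribed here):

```elixir
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # Run pending migrations before the supervision tree (and thus
    # the Phoenix endpoint) starts accepting traffic.
    migrate()

    children = [
      MyApp.Repo,
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

  defp migrate do
    # with_repo/2 starts the repo (and its dependencies) just long
    # enough to run the migrations, then stops it again.
    {:ok, _, _} =
      Ecto.Migrator.with_repo(MyApp.Repo, &Ecto.Migrator.run(&1, :up, all: true))
  end
end
```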


We don’t have hooks on purpose. :slight_smile:

You can run them when your application starts, as proposed by @LostKobrakai. Alternatively, you can add your own script to the bin directory, which you use to run migrations and then start the release. You can do this by adding a step to your mix.exs:

def project do
  [
    ...,
    releases: [
      my_app: [
        steps: [:assemble, &copy_bin_files/1]
      ]
    ]
  ]
end

where:

defp copy_bin_files(release) do
  # Copy any extra scripts from rel/bin into the release's bin directory.
  File.cp_r("rel/bin", Path.join(release.path, "bin"))
  release
end

And you put the extra scripts in rel/bin.
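To make that concrete, a hypothetical rel/bin/migrate_and_start script might look like the following (the release name my_app and the evaluated MyApp.Release.migrate/0 function are placeholders; adapt both to your project):

```shell
#!/bin/sh
set -e

# Resolve the release root relative to this script, so it works
# regardless of the current working directory.
RELEASE_ROOT="$(cd "$(dirname "$0")/.." && pwd)"

# Run migrations first; `set -e` aborts the script if they fail...
"$RELEASE_ROOT/bin/my_app" eval "MyApp.Release.migrate()"

# ...then replace this process with the release itself.
exec "$RELEASE_ROOT/bin/my_app" start
```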


I will examine both approaches. Thank you @LostKobrakai and @josevalim.

I am slightly inclined toward the “bring scripts with the release” method, since I would like to decouple the scripts from the application implementation.

BTW, I am curious about the rationale here. Simplification, perhaps?


Imo the “steps” abstraction is more powerful and composable. One could probably build a small purpose-built library which handles all the dirty work of such boot hooks and supplies its assembly step to the steps listed in mix.exs.


Exactly!

So is it now recommended to either run the migrations as a mix task during deployment, separately from starting the application, or should I run the function that does migrations on every application startup?

I honestly don’t see much harm in doing the latter, but I may be missing something.

Interesting whether it could lead to race conditions in a distributed setup. But then again, in such a setup one probably does not restart all nodes at exactly the same time.

I think migrations run wrapped in a transaction (one transaction per migration, if I recall correctly), but yes, that’s a concern, since I can imagine situations where things go sideways with multiple application instances starting up.


Yes, I can confirm that this approach can go sideways. Unwinding the resulting mess as a consultant pays the bills, but I’d rather spend the time creating new stuff. Much better to separate applying migrations from app execution.


To be fair, thinking about it now, running migrations on app startup and running them in a post-deploy hook on every instance are equally bad. So this problem can also happen with Distillery hooks triggering migrations.


None (necessarily). You can use eval to run any code you want, including what Distillery would call a custom command (not to be confused with Distillery custom hooks). We have an example in the new Phoenix releases guides: https://github.com/phoenixframework/phoenix/blob/master/guides/deployment/releases.md#ecto-migrations-and-custom-commands (not yet deployed).
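The release task in that guide boils down to roughly the following (the module name `MyApp.Release` and the `:my_app` OTP app name are placeholders for your own project):

```elixir
defmodule MyApp.Release do
  @app :my_app

  # Invoked from the release, e.g.:
  #   bin/my_app eval "MyApp.Release.migrate()"
  def migrate do
    load_app()

    for repo <- repos() do
      # Start each repo just long enough to run its migrations.
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    # Load (without starting) the app so its config is available.
    Application.load(@app)
  end
end
```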

Ecto v3.0 actually locks the migration table when running migrations, so this is safe unless you disable the migration lock.


Kinda late (well, quite late actually), but I marked @josevalim’s post as the solution.

We use AWS ECS (Fargate).
In our production code we have tried:

  1. Create a migrator module (which is part of the actual application modules, so it ships in the release) and call its entry-point function right before starting the application, via the eval command in the release script.
    • This basically follows the instructions in the latest Phoenix guide provided in the solution.
    • It relies on the migration table being locked, so even if multiple nodes exist, the migration actually runs only once.
  2. Eventually we changed the process to invoke a sole-purpose, one-off “migrator” ECS Task to perform the migration before actually updating the ECS Service Tasks.
    • Now migrations are attempted exactly once per deploy, and if they fail, the entire deploy process stops. No duplicated migration invocations, no dependency on migration locks.

For now it is working quite smoothly.