Run migrations on Docker Production

I managed to deploy my Phoenix app on an external virtual server using Docker. The only thing missing is a call to "migrate" to run the migrations before the server starts.

I started from the official Phoenix Dockerfile and added a docker-compose.yml with an additional Postgres container. The server starts without migrations because of the CMD ["/app/bin/server"] statement at the end of the Dockerfile.

I tried to add RUN /app/bin/migrate just before that line, but then I get an error that the environment variable DATABASE_URL is missing. It is actually set in the docker-compose.yml file for the Phoenix container, and it must be set correctly, since the server itself starts with no issues.

What is the correct way to run the migrations?

Btw. I would also like to run seeds.exs, but I guess I have to add this to the Release module.

You should remember that you don't have Mix available after you create a release; instead you can use Ecto.Migrator for those tasks.
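For reference, the Phoenix deployment guides sketch a release task module along these lines; MyApp and :my_app are placeholders for your own app here:

```elixir
defmodule MyApp.Release do
  @moduledoc """
  Release tasks that work without Mix,
  invoked e.g. via `bin/my_app eval "MyApp.Release.migrate()"`.
  """
  @app :my_app

  # Run all pending migrations for every configured repo.
  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _fun_return, _apps} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  # Roll one repo back to a given migration version.
  def rollback(repo, version) do
    load_app()
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  defp repos, do: Application.fetch_env!(@app, :ecto_repos)
  defp load_app, do: Application.load(@app)
end
```

Ecto.Migrator.with_repo/2 starts the repo (and its dependencies) just long enough to run the function, which is why no running application is needed.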

I've tried several methods of doing migrations, and the best method in production environments is to run the migration via a remote console; you can do this by starting a shell inside your container.

Another method is to create something like a GenServer for the migration and add it to your application's supervision tree; this way your migrations will run automatically every time the server is started.
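If you like that approach, note that newer ecto_sql (3.9+, as used by the Phoenix 1.7 generators) ships a child spec for exactly this, so no hand-rolled GenServer is needed. A sketch with placeholder names:

```elixir
# In lib/my_app/application.ex, inside start/2 -- requires ecto_sql >= 3.9.
children = [
  MyApp.Repo,
  # Runs pending migrations right after the repo starts, then terminates.
  {Ecto.Migrator,
   repos: Application.fetch_env!(:my_app, :ecto_repos),
   skip: System.get_env("SKIP_MIGRATIONS") == "true"},
  MyAppWeb.Endpoint
]
```

The skip: option is handy for environments (like tests or one-off tasks) where you don't want boot-time migrations.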

If you don't want your system to run while the migration is in progress, your best bet is to create a script that executes an RPC call to your migration function and then starts the server.


The script /app/bin/migrate actually uses Ecto.Migrator, so that is not the issue. What I'm struggling with is calling it from the Dockerfile.

Instead of calling /app/bin/server, call your script, and on its last line start the server.

I tried this (the last two lines of my Dockerfile):

RUN /app/bin/migrate
CMD ["/app/bin/server"]

But there’s an error saying:
environment variable DATABASE_URL is missing

But it's set in the docker-compose.yml:

version: "3"

networks:
  internal:
    external: false

services:
  app:
    image: server
    build:
      context: .
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    environment:
      - SECRET_KEY_BASE=fsfsdfsdfsFSFSDFDSFdsfeudUVXUcvtroz2AXRzHsX9u3ZGCmR15gvLY8d
      - DATABASE_URL=ecto://postgres:sdfsdftRETERT@db/ivv_server_prod
    ports:
      - 443:443
    networks:
      - internal
    depends_on:
      - db

  db:
    image: postgres
    volumes:
      - /var/lib/postgresql/ivv-server/:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=sdfsdftRETERT
      - POSTGRES_DB=ivv_server_prod
    networks:
      - internal

This will not work: you are trying to execute the migrations at the moment you build the image, but you have to execute them when the container is started. This means your migration script has to go into the final CMD.

I haven't figured out how to change my Dockerfile to run migrations automatically after firing up the containers with "docker-compose up". Anyway, this isn't so important, as I can log into the container and start the "migrate" script manually.

But I still haven't found a solution for running seeds.exs, which contains all the master data. I have found threads where people talk about using Code.eval_file, but the script stops while building the path to the seeds.exs file, with an error saying that the Mix module is not available. I have this in my release module:

def seed do
  load_app()
  priv_dir = "#{:code.priv_dir(:ivv_server)}"
  seed_file = Path.join([priv_dir, "repo/seeds.exs"])
  Code.eval_file(seed_file)
end

If I don't find a solution I will have to create an SQL loader script. Does anybody have similar requirements?
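One hedged possibility: since the posted seed function never calls Mix itself, the "Mix is not available" error may come from code inside seeds.exs (seed scripts are often written assuming mix run), and the repo also has to be started before any inserts can run. A sketch of a seed task that starts the repo first, assuming the app is :ivv_server and seeds.exs uses only Repo calls and no Mix functions:

```elixir
defmodule IvvServer.Release do
  @app :ivv_server

  # Evaluates priv/repo/seeds.exs with the repo started.
  # Note: seeds.exs itself must not call any Mix.* functions,
  # because Mix is not shipped inside a release.
  def seed do
    load_app()

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, fn _repo ->
          priv_dir = "#{:code.priv_dir(@app)}"
          seed_file = Path.join([priv_dir, "repo", "seeds.exs"])
          Code.eval_file(seed_file)
        end)
    end
  end

  defp load_app, do: Application.load(@app)
end
```

Then `bin/app eval "IvvServer.Release.seed()"` should work the same way the migrate script does.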

I haven't checked, but if you want docker run ... to run a command "automatically", then you do it e.g. like this:

# ...

ENV MIX_ENV=prod
RUN mix deps.get
RUN mix deps.compile
RUN mix compile
RUN mix release

CMD _build/prod/rel/<project>/bin/<project> eval "YourMigratorToolsModule.migrate()"

docker build ... will build everything except the final CMD whereas docker run ... will only run the last CMD, if memory serves.

But I am not sure if that even relates to doing docker-compose up. Maybe it doesn’t, haven’t checked.


I’m using Elixir releases with a Dockerfile similar to the one in the Phoenix guides. Here’s the last line:

CMD ["sh", "-c", "bin/app eval MyApp.Release.migrate && bin/app start"]

If the migrations fail, the app is not started and the whole deployment fails (which is what I want).


Thanks, this is what I was looking for!


In addition, I would recommend putting exec before the command that starts the app:

CMD ["sh", "-c", "bin/app eval MyApp.Release.migrate && exec bin/app start"]

Without this, you won't be able to use Ctrl+C to kill the process when you start the app with docker run, because the shell, not the app, ends up receiving the signal.
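To see why exec matters here: without it, the sh from CMD stays alive as the container's PID 1 and the app runs as its child, so signals stop at the shell. A tiny sketch (no Docker required) showing that exec replaces the process in place rather than forking, so the exec'd program keeps the same PID:

```shell
#!/bin/sh
# Spawn a shell that prints its own PID, then execs another shell
# that prints *its* PID. Because exec replaces the process image
# instead of forking a child, both lines show the same PID --
# which is why signals sent to a container's PID 1 reach the
# exec'd program directly.
pids=$(sh -c 'echo $$; exec sh -c '\''echo $$'\''')
echo "$pids"
```

The two printed PIDs are identical; drop the exec and the second shell would get a fresh PID as a child process.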

In my case I've created a new shell file in my release folder called migrate_and_server containing this:

#!/bin/sh
cd -P -- "$(dirname -- "$0")"
./app eval MyApp.Release.migrate && PHX_SERVER=true exec ./app start

Or you can just use this way:

#!/bin/sh
cd -P -- "$(dirname -- "$0")"
./app eval MyApp.Release.migrate && exec ./server

For the Googlers ending up here: this is built in for Phoenix 1.7.0 and above.

I ran mix phx.gen.release --docker and it generated the Dockerfile for me.

It also generated a migrate and server file.

in rel/overlays/bin/server...

#!/bin/sh
cd -P -- "$(dirname -- "$0")"
PHX_SERVER=true exec ./my_app start

in rel/overlays/bin/migrate...

#!/bin/sh
cd -P -- "$(dirname -- "$0")"
exec ./my_app eval MyApp.Release.migrate

At the end of the generated Dockerfile all I needed to do was:

# BAD: 
CMD ["/app/bin/server"]

# GOOD:
CMD ["sh", "-c", "/app/bin/migrate && /app/bin/server"]

Now if my migrations fail, the server doesn't start, which is what I want. Hope this helps.

This is a good default, and if you want you can always manage migrations on the server itself.

BTW the Docker release generator is amazing; finally I don't have to copy Docker scripts from other projects. It's interesting how projects with JS assets are handled, though.