Run migrations on Docker Production

I managed to deploy my Phoenix app on an external virtual server using Docker. The only thing missing is a call to “migrate” to run the migrations before the server starts.

I started from the official Phoenix Dockerfile and added a docker-compose.yml with an additional Postgres container. The server starts without migrations because of the CMD ["/app/bin/server"] statement at the end of the Dockerfile.

I tried to add RUN /app/bin/migrate just before it, but then I get an error that the environment variable DATABASE_URL is missing. But it is set in the docker-compose.yml file for the Phoenix container, and it's obviously set correctly, since the server itself starts with no issues.

What is the correct way to run the migrations?

By the way, I would also like to run seeds.exs, but I guess I have to add this to the Release module.


You should remember that you don't have Mix after you've created a release; instead you can use Ecto.Migrator for those tasks.

I've tried several methods of doing migrations, and the best method in production environments is to run migrations via a remote console; you can do this by starting a bash console inside your container.

Another method is to create something like a GenServer for migration and add it to your application supervision tree; this way your migrations will run automatically every time the server is started.
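A minimal sketch of that idea (app and module names here are placeholders, not from the thread): a worker placed early in the supervision tree so migrations complete before the rest of the app starts serving traffic.

```elixir
defmodule MyApp.Migrator do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts) do
    for repo <- Application.fetch_env!(:my_app, :ecto_repos) do
      # with_repo/2 starts the repo if it isn't running yet, runs the
      # function, and stops the repo again if it was the one to start it
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end

    # nothing left to do after migrating; don't keep a process around
    :ignore
  end
end
```

In application.ex you would list MyApp.Migrator in the children before anything that needs the migrated schema (e.g. the endpoint).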

If you don't want your system to run while a migration is in progress, the best bet is to create a script that executes an RPC call to your migration function and then starts the server.


The script /app/bin/migrate actually uses Ecto.Migrator, so this is not an issue. I struggle with the call of this in the Dockerfile.

Instead of calling /app/bin/server, call your script, and at its last line start the server.

I tried this (the last two lines of my Dockerfile):

RUN /app/bin/migrate
CMD ["/app/bin/server"]

But there’s an error saying:
environment variable DATABASE_URL is missing

But it's set in the docker-compose.yml:

version: "3"

networks:
  internal:
    external: false

services:
  server:
    image: server
    build:
      context: .
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    environment:
      - SECRET_KEY_BASE=fsfsdfsdfsFSFSDFDSFdsfeudUVXUcvtroz2AXRzHsX9u3ZGCmR15gvLY8d
      - DATABASE_URL=ecto://postgres:sdfsdftRETERT@db/ivv_server_prod
    ports:
      - 443:443
    networks:
      - internal
    depends_on:
      - db

  db:
    image: postgres
    volumes:
      - /var/lib/postgresql/ivv-server/:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=ivv_server_prod
    networks:
      - internal

This will not work; you are trying to execute migrations at the moment you build the image, but you have to execute them when the container is started. This means that your migration script should be in the final CMD.
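In Dockerfile terms, the build-time vs. run-time difference looks like this (a sketch; the paths match the Dockerfile above):

```dockerfile
# RUN executes during `docker build`, before docker-compose's `environment:`
# values exist, so DATABASE_URL is undefined here and the migration fails:
# RUN /app/bin/migrate

# CMD executes when the container starts, after the environment is injected:
CMD ["sh", "-c", "/app/bin/migrate && /app/bin/server"]
```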

I haven't figured out how to change my Dockerfile to run migrations automatically after firing up the containers with “docker-compose up”. Anyway, this isn't so important, as I can log into the container and start the “migrate” script manually.

But I still haven't found a solution for running seeds.exs, which contains all the master data. I have found threads where people talk about using Code.eval_file. But the script stops when trying to build the path to the seeds.exs file, with an error saying that the Mix module is not available. I have this in my release module:

def seed do
  priv_dir = "#{:code.priv_dir(:ivv_server)}"
  seed_file = Path.join([priv_dir, "repo/seeds.exs"])
  Code.eval_file(seed_file)
end

If I don't find a solution I will have to create an SQL loader script. Does anybody have similar requirements?

I haven't checked, but if you want docker run ... to “automatically” run a command, you can do it e.g. like this:

# ...

RUN mix deps.get
RUN mix deps.compile
RUN mix compile
RUN mix release

CMD _build/prod/rel/<project>/bin/<project> eval "YourMigratorToolsModule.migrate()"

docker build ... will build everything except the final CMD, whereas docker run ... will only run the last CMD, if memory serves.

But I am not sure if that even relates to doing docker-compose up. Maybe it doesn’t, haven’t checked.


I’m using Elixir releases with a Dockerfile similar to the one in the Phoenix guides. Here’s the last line:

CMD ["sh", "-c", "bin/app eval MyApp.Release.migrate && bin/app start"]

If the migrations fail, the app is not started and the whole deployment fails (which is what I want).


Thanks, this is what I was looking for!


In addition, I would recommend putting an exec before starting the app.

CMD ["sh", "-c", "bin/app eval MyApp.Release.migrate && exec bin/app start"]

Without it, you won't be able to use Ctrl+C to kill the process when you start the app with docker run.

In my case I've created a new shell file in my release folder called migrate_and_server with this:

cd -P -- "$(dirname -- "$0")"
./app eval MyApp.Release.migrate && PHX_SERVER=true exec ./app start

Or you can just use this way:

cd -P -- "$(dirname -- "$0")"
./app eval MyApp.Release.migrate && exec ./server
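The effect of exec can be checked without Docker: with exec, the app replaces the shell and inherits its PID (so signals addressed to the container's PID 1 reach it directly), while without exec the app runs as a forked child with a different PID. A quick sketch:

```shell
#!/bin/sh
# Without exec: the inner sh is a forked child, so it gets a NEW pid;
# signals sent to the outer shell are not forwarded to it.
# (the trailing `true` prevents dash's implicit tail-call exec optimization)
no_exec=$(sh -c 'echo $$; sh -c "echo \$\$"; true')
pid1=$(printf '%s\n' "$no_exec" | head -n1)
pid2=$(printf '%s\n' "$no_exec" | tail -n1)
[ "$pid1" != "$pid2" ] && echo "without exec: different PIDs"

# With exec: the inner sh REPLACES the outer one and keeps the SAME pid,
# so it receives SIGTERM / Ctrl+C directly.
with_exec=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
pid3=$(printf '%s\n' "$with_exec" | head -n1)
pid4=$(printf '%s\n' "$with_exec" | tail -n1)
[ "$pid3" = "$pid4" ] && echo "with exec: same PID"
```

This is why the `&& exec bin/app start` form lets Docker's stop signal reach the BEAM instead of dying at the wrapping shell.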

For Googlers arriving here, this applies to Phoenix 1.7.0 and above.

I ran mix phx.gen.release --docker and it generated the Dockerfile for me.

It also generated a migrate and server file.

in rel/overlays/bin/server...

cd -P -- "$(dirname -- "$0")"
PHX_SERVER=true exec ./my_app start

in rel/overlays/bin/migrate...

cd -P -- "$(dirname -- "$0")"
exec ./my_app eval MyApp.Release.migrate

At the end of the generated Dockerfile all I needed to do was:

# BAD:
CMD ["/app/bin/server"]

# GOOD:
CMD ["sh", "-c", "/app/bin/migrate && /app/bin/server"]

Now if my migrations don't run, the server doesn't start, which is what I want. Hope this helps.


This is a good default, and if you want you can always manage migrations on the server itself.

BTW the release generator for Docker is amazing; finally I don't have to copy Docker scripts from other projects. It's interesting how projects with JS are handled, though.

Note that if you do it like this, sh will receive the SIGTERM signal when the docker container is shut down, but it will not pass it to the application, which means after a grace period, a SIGKILL signal is issued and all children will be terminated abruptly. Your application will not be able to shut down gracefully.

It's better to define a separate script that runs both commands, or add another overlay, as mentioned, that runs the migrations before it starts the server.


To test whether the SIGTERM is received by the application:

# build Docker image
docker build . --tag shutdowntest

# run container
docker run shutdowntest

# find container ID
docker ps | grep shutdowntest

# kill container with SIGTERM
docker kill --signal=SIGTERM <container-id>

If the signal is received, you should see the log message [notice] SIGTERM received - shutting down.


In a previous project I used something like this:

# Docker entrypoint script.

# Wait until Postgres is ready
# (pg_isready picks up the host/port from the usual PG* env vars)
while ! pg_isready -q
do
  echo "$(date) - waiting for database to start"
  sleep 2
done

bin/my_app eval "MyApp.Repo.migrate"
bin/my_app start

where the MyApp.Repo.migrate function is basically identical to what mix phx.gen.release --docker creates.

Then, in the Dockerfile's runner container (Alpine in my case):

# Add pg tools to be able to poll DB state from script
RUN apk add postgresql-client
COPY /app/
RUN chmod +x /app/
CMD ["sh", "/app/"]

You can decide not to check for the DB to be ready, then no postgres tools are required in the runner container of your app, but you might see some ugly error messages and crashes/restart until the database is ready.
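If you do want to wait, a generic readiness loop with a retry cap avoids hanging forever. This is a hedged sketch: check_ready is a placeholder, simulated here by a marker file; in a real entrypoint you would substitute e.g. `pg_isready -q -h db` (needs postgresql-client) or `nc -z db 5432`.

```shell
#!/bin/sh
# Simulate the database coming up after ~1 second
rm -f /tmp/db_ready
( sleep 1 && touch /tmp/db_ready ) &

check_ready() {
  # placeholder readiness probe; replace with a real DB check
  [ -f /tmp/db_ready ]
}

tries=0
until check_ready; do
  tries=$((tries + 1))
  if [ "$tries" -ge 30 ]; then
    echo "database never became ready" >&2
    exit 1
  fi
  echo "$(date) - waiting for database to start"
  sleep 1
done
echo "database ready"
```

The cap means a misconfigured DATABASE_URL fails the deployment quickly instead of looping silently.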

The COPY command has a --chmod option btw.


Really helpful thread. Thanks, everyone.

I have a slight issue/annoyance…

The AC.Release module was generated by mix phx.gen.release --docker to facilitate running migrations in the production Docker environment.

AC.Release module
defmodule AC.Release do
  @moduledoc """
  Used for executing DB release tasks when run in production without Mix
  installed.
  """
  @app :ac

  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def rollback(repo, version) do
    load_app()
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
The Dockerfile CMD points to a shell script in rel/overlays:

cd -P -- "$(dirname -- "$0")"
./ac eval AC.Release.migrate && PHX_SERVER=true exec ./ac start

I’ve found that unless I add queue_target to the Repo options (currently at 2_000), I get a connection error:

** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2505ms. This means requests are coming in and your connection pool cannot serve them fast enough.

When queue_target is not set in config, I don't get the error when running the app, and I was able to run the migrations just fine when connected to the app via ./ac remote, calling AC.Release.migrate().

I suppose I can simply leave queue_target there, but does anyone have any idea why it's necessary when running ./ac eval AC.Release.migrate but not when running the app?

I’m running at Northflank, with a 512MB RAM PostgreSQL service, with no real app data or load. Apparently, the maximum number of concurrent connections is 64. I’ve played with the pool_size option for both the Repo config and Ecto.Migrator.with_repo/3, giving them both 20 at one point, seemingly making no difference.

I can successfully connect to the database via psql using the same database URL as that in use by the Repo config.
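For reference, these pool options live in the Repo config (2_000 is the value mentioned above; queue_interval is added here for completeness, and the value is illustrative):

```elixir
# config/runtime.exs (sketch): how long a checkout may queue before
# DBConnection starts dropping requests (defaults are 50ms / 1000ms)
config :ac, AC.Repo,
  queue_target: 2_000,
  queue_interval: 5_000
```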


Thank you for this!!! This has been bothering me for a while…

As a separate approach, we use ecto_boot_migration. It does not require any supporting commands, avoids the possibility of a different environment state, and means that once the application has booted, all migrations have completed.

{:ok, _} = EctoBootMigration.migrate(:otp_app)

Though, according to EctoBootMigration's documentation, it only works with Postgres databases.