I managed to deploy my Phoenix app on an external virtual server using Docker. The only thing missing is a call to `migrate` to run the migrations before the server starts.
I started from the official Phoenix Dockerfile and added a docker-compose.yml with an additional Postgres container. The server starts without migrations because of the `CMD ["/app/bin/server"]` statement at the end of the Dockerfile.
I tried adding `RUN /app/bin/migrate` just before it, but then I get an error that the `DATABASE_URL` environment variable is missing, even though it is set in the docker-compose.yml file for the Phoenix container. It is obviously set correctly at runtime, as the server starts with no issues.
What is the correct way to run the migrations?
By the way, I would also like to run seeds.exs, but I guess I have to add that to the release.ex module.
Remember that you don’t have Mix available after you’ve created a release; use `Ecto.Migrator` for those tasks instead.
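For reference, a minimal release module along the lines of what `mix phx.gen.release` generates — module and app names are placeholders for your project:

```elixir
defmodule MyApp.Release do
  @app :my_app

  def migrate do
    load_app()

    for repo <- repos() do
      # with_repo starts the repo (and its dependencies) just long enough
      # to run the given function, then shuts it down again
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def rollback(repo, version) do
    load_app()
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end
```

You can then call it from the release binary with `bin/my_app eval "MyApp.Release.migrate"`.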
I’ve tried several ways of doing migrations, and in production the method that has worked best for me is running them via a remote console: start a shell inside your container and call the migration function from there.
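For example (container and module names here are hypothetical — substitute your own):

```
$ docker exec -it my_app_container sh
/app $ bin/my_app remote
iex> MyApp.Release.migrate()
```

`remote` attaches an IEx session to the already-running node, so the migration runs inside the live system.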
Another method is to create something like a GenServer (or a Task) for the migration and add it to your application’s supervision tree; that way your migrations run automatically every time the server is started.
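A sketch of that approach, using a supervised `Task` rather than a full GenServer (all names are placeholders):

```elixir
defmodule MyApp.Migrator do
  # Runs pending migrations when the supervision tree starts, then exits.
  # :transient means it is not restarted after a normal exit, only on crash.
  use Task, restart: :transient

  def start_link(_arg) do
    Task.start_link(__MODULE__, :run, [])
  end

  def run do
    Ecto.Migrator.run(MyApp.Repo, :up, all: true)
  end
end
```

In `MyApp.Application`, list it right after the repo so the repo is up before migrations run: `children = [MyApp.Repo, MyApp.Migrator, MyAppWeb.Endpoint]`.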
If you don’t want your system to be running while a migration is in progress, your best bet is a script that executes an RPC call to your migration function and then starts the server.
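Note that `bin/my_app rpc` needs an already-running node, so for a pre-start migration `eval` (which boots a separate, short-lived node) is the usual choice. A sketch, with placeholder names:

```shell
#!/bin/sh
# Run migrations first; only start the server if they succeed.
# exec hands PID 1 to the release so it receives signals directly.
/app/bin/my_app eval "MyApp.Release.migrate()" && exec /app/bin/my_app start
```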
this will not work: `RUN` executes at the moment you build the image, but migrations have to run when the container is started. That means your migration script must be part of the final `CMD` (or entrypoint).
I haven’t figured out how to change my Dockerfile to run migrations automatically after firing up the server using `docker-compose up`. Anyway, this isn’t so important, as I can log into the container and run the `migrate` script manually.
But I still haven’t found a solution for running seeds.exs, which contains all the master data. I’ve found threads where people talk about using `Code.eval_file`, but the script stops while building the path to seeds.exs with an error saying that the Mix module is not available. I have this in my release module:
```elixir
def seed do
  load_app()
  priv_dir = "#{:code.priv_dir(:ivv_server)}"
  seed_file = Path.join([priv_dir, "repo/seeds.exs"])
  Code.eval_file(seed_file)
end
```
If I don’t find a solution, I will have to create an SQL loader script. Does anybody have similar requirements?
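One thing worth checking: `Code.eval_file` itself doesn’t need Mix, so a “module Mix is not available” error in a release usually means seeds.exs itself calls something like `Mix.env()`. A sketch of a release seed task that also starts the repo before evaluating the file — assuming the `load_app`/`repos` helpers of a generated release module, and a Mix-free seeds.exs:

```elixir
def seed do
  load_app()

  for repo <- repos() do
    {:ok, _, _} =
      Ecto.Migrator.with_repo(repo, fn _repo ->
        # :code.priv_dir/1 returns a charlist, so convert before Path.join
        priv_dir = to_string(:code.priv_dir(:ivv_server))
        Code.eval_file(Path.join(priv_dir, "repo/seeds.exs"))
      end)
  end
end
```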
I haven’t checked, but if you want `docker run ...` to “automatically” run a command, you do it e.g. like this:
```dockerfile
# ...
ENV MIX_ENV=prod
RUN mix deps.get
RUN mix deps.compile
RUN mix compile
RUN mix release
CMD _build/prod/rel/<project>/bin/<project> eval "YourMigratorToolsModule.migrate()"
```
`docker build ...` executes every `RUN` step at build time but not the final `CMD`, whereas `docker run ...` executes only that `CMD`, if memory serves.
But I am not sure if that even relates to doing docker-compose up. Maybe it doesn’t, haven’t checked.
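For `docker-compose up` specifically, a healthcheck on the database plus `depends_on` with `condition: service_healthy` can delay the app container until Postgres accepts connections. A sketch with placeholder names and throwaway credentials:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 2s
      retries: 10
  app:
    image: my_phoenix_app
    environment:
      DATABASE_URL: ecto://postgres:postgres@db/my_app
    depends_on:
      db:
        condition: service_healthy
```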
This is a good default, and if you want you can always manage migrations on the server itself.
BTW the release support for Docker is amazing — finally I don’t have to copy Docker scripts from other projects. It’s interesting how projects with JS are handled, though.
Note that if you do it like this, sh will receive the SIGTERM signal when the docker container is shut down, but it will not pass it to the application, which means after a grace period, a SIGKILL signal is issued and all children will be terminated abruptly. Your application will not be able to shut down gracefully.
It’s better to define a separate entrypoint.sh that runs both commands, or, as mentioned, add another overlay that runs the migrations before starting the server.
```bash
#!/bin/sh
# Docker entrypoint script.

# Uncomment these for help with debugging
# echo "POSTGRES_USERNAME ${POSTGRES_USERNAME}"
# echo "POSTGRES_HOST ${POSTGRES_HOST}"
# echo "POSTGRES_PORT ${POSTGRES_PORT}"

# Wait until Postgres is ready
until pg_isready -U "${POSTGRES_USERNAME}" -h "${POSTGRES_HOST}" -p "${POSTGRES_PORT}"
do
  echo "$(date) - waiting for database to start"
  sleep 2
done

bin/my_app eval "MyApp.Repo.migrate"
# exec so the release replaces the shell and receives SIGTERM directly
exec bin/my_app start
```
where the MyApp.Repo.migrate function is basically identical to what mix phx.gen.release --docker creates.
Then, in the Dockerfile’s runner container (Alpine in my case):
```dockerfile
# Add pg tools to be able to poll the DB state from the script
RUN apk add postgresql-client
# ...
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
CMD ["sh", "/app/entrypoint.sh"]
```
You can decide not to wait for the DB to be ready — then no Postgres tools are required in your app’s runner container, but you might see some ugly error messages and crashes/restarts until the database is up.
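An alternative to installing `postgresql-client` is to retry from inside the release itself. A sketch (placeholder names; assumes connection failures surface as `DBConnection.ConnectionError`, which is what `ecto_sql` raises when it cannot reach Postgres):

```elixir
defmodule MyApp.Release do
  @app :my_app

  # Retry pending migrations until the database accepts connections,
  # so no Postgres client tools are needed in the runner image.
  def migrate_with_retry(attempts \\ 10) do
    Application.load(@app)
    do_migrate(attempts)
  end

  defp do_migrate(0), do: raise("database never became ready")

  defp do_migrate(attempts) do
    {:ok, _, _} =
      Ecto.Migrator.with_repo(MyApp.Repo, &Ecto.Migrator.run(&1, :up, all: true))

    :ok
  rescue
    e in DBConnection.ConnectionError ->
      IO.puts("waiting for database: #{Exception.message(e)}")
      Process.sleep(2_000)
      do_migrate(attempts - 1)
  end
end
```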