When we were setting up Edeliver + Docker for a couple of internal tools, we wanted to keep things streamlined: ideally you shouldn't need to know that a Docker instance is involved, whether it's started, and so on. We also commonly run SSH servers on our dev machines, so we can't map the container's SSH port directly.
Our release process is basically:
# Build it and deploy to staging:
./build_release.sh
mix edeliver deploy release to staging --version=THE_VERSION --start-deploy
# Test stuff.. and then:
mix edeliver deploy release to production --version=THE_VERSION --start-deploy
Release builds
For building releases, we have a simple script - build_release.sh - that basically just builds the Docker image if needed, then starts a container, runs the build, and exits:
#!/bin/bash
set -u
set -e
set -o pipefail
if [ -f .build_release.lock ]; then
echo "** CLEANING UP EXISTING BUILD HOST"
docker rm --force $(cat .build_release.lock)
fi
echo "** STARTING UP BUILD HOST"
docker build -q -t edeliver_buildhost .
BUILDHOST_CONTAINER=$(docker run -d -P edeliver_buildhost)
echo $BUILDHOST_CONTAINER > .build_release.lock
export EDELIVER_BUILDHOST=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' $BUILDHOST_CONTAINER)
echo "** SETTING UP AUTHORIZED KEYS"
ssh-keygen -f "$HOME/.ssh/known_hosts" -R $EDELIVER_BUILDHOST
ssh-keyscan -t rsa,dsa $EDELIVER_BUILDHOST >> $HOME/.ssh/known_hosts
chmod 600 deploy/builder_key
cat $HOME/.ssh/id_rsa.pub | ssh -i deploy/builder_key builder@$EDELIVER_BUILDHOST "cat >> .ssh/authorized_keys"
echo "** POPULATING PROD SECRETS"
cat deploy/prod.secret.exs | ssh builder@$EDELIVER_BUILDHOST "cat > prod.secret.exs"
echo "** BUILDING RELEASE"
mix edeliver build release "$@"
echo "** CLEANING UP BUILD HOST"
docker rm --force $BUILDHOST_CONTAINER
rm -f .build_release.lock
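One caveat with this script: since it runs under set -e, any failure between docker run and the final docker rm leaves the container behind, which is exactly what the .build_release.lock check papers over on the next run. An EXIT trap would clean up on both paths; here's a minimal sketch with the docker commands stubbed out as echoes so the mechanics are visible:

```shell
#!/bin/bash
# sketch: replacing the .build_release.lock dance with an EXIT trap
# (docker commands stubbed out as echoes - not the real script)
set -euo pipefail

# stand-in for: BUILDHOST_CONTAINER=$(docker run -d -P edeliver_buildhost)
BUILDHOST_CONTAINER="deadbeef1234"

cleanup() {
  # the real script would run: docker rm --force "$BUILDHOST_CONTAINER"
  echo "removing container $BUILDHOST_CONTAINER"
}
trap cleanup EXIT   # fires on normal exit *and* when set -e aborts

echo "building release"
```

With this in place the lock file becomes unnecessary, since the container is removed even when the build step fails.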
Note that we’re setting EDELIVER_BUILDHOST; this is paired with some Edeliver configuration in .deliver/config that detects whether it’s running against Docker or not, since we’ve had mixed use cases:
# .deliver/config
# ...
# Allow build host override, for Docker setup
if [ -z "${EDELIVER_BUILDHOST:-}" ]; then  # :- keeps this safe under set -u when unset
BUILD_HOST="localhost"
else
BUILD_HOST=$EDELIVER_BUILDHOST
fi
BUILD_USER="builder"
BUILD_AT="/tmp/edeliver/my_app/builds"
# ...
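As an aside, the if/else above can be collapsed with bash's default-value expansion, which also won't trip over an unbound variable if the config is ever sourced under set -u:

```shell
# same effect as the if/else: fall back to localhost when the override
# is unset or empty
BUILD_HOST="${EDELIVER_BUILDHOST:-localhost}"
echo "$BUILD_HOST"
```

Either form works; the expansion is just less to read in the config file.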
As for the Dockerfile, it’s pretty similar to what @net linked to above, but we wanted to be specific about package versions etc. Also, this particular project was built for an Ubuntu 12.04 server:
FROM ubuntu:12.04
# Avoid error messages from apt during image build
ARG DEBIAN_FRONTEND=noninteractive
# Set the locale, otherwise elixir will complain later on
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# install prerequisites
RUN apt-get update
RUN apt-get -y -q install \
apt-transport-https \
curl
# add erlang otp repository
RUN curl -O https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
RUN dpkg -i erlang-solutions_1.0_all.deb
# add nodesource repository; distilled from setup script below..
# https://deb.nodesource.com/setup_6.x
RUN curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add -
RUN echo 'deb https://deb.nodesource.com/node_6.x precise main' > /etc/apt/sources.list.d/nodesource.list
RUN echo 'deb-src https://deb.nodesource.com/node_6.x precise main' >> /etc/apt/sources.list.d/nodesource.list
# install packages
RUN apt-get update && apt-get install -y -q \
build-essential \
elixir=1.3.4-* \
esl-erlang=1:18.3.* \
git \
inotify-tools \
nodejs \
openssh-server
# set up SSH config
RUN mkdir /var/run/sshd
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
# set up 'builder' user
RUN useradd -m -s /bin/bash builder
USER builder
WORKDIR /home/builder/
# enable password-less access using 'builder_key'
RUN mkdir .ssh && chmod 700 .ssh
RUN touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
ADD deploy/builder_key.pub /home/builder/builder_key.pub
RUN cat builder_key.pub >> .ssh/authorized_keys
# start serving SSH connections
USER root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Runtime configuration
Since we wanted to be able to configure the production system using environment variables, but keep a hardcoded config for dev, you’ll note in the build script above that we inject the file deploy/prod.secret.exs, which could look something like this:
use Mix.Config
# ...
config :my_app, MyApp.Repo,
adapter: Ecto.Adapters.Postgres,
username: "${MYAPP_DB_USER}",
password: "${MYAPP_DB_PASS}",
database: "${MYAPP_DB_NAME}",
hostname: "${MYAPP_DB_HOST}"
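It's worth stressing that those "${MYAPP_DB_USER}" strings are plain literals at build time; Distillery only substitutes them when the release boots with REPLACE_OS_VARS=true (more on that below). Conceptually the substitution behaves like this rough shell sketch - not Distillery's actual implementation, just the idea:

```shell
# rough illustration only: each ${VAR} placeholder in the generated
# config is swapped for the corresponding environment value at boot
export MYAPP_DB_USER=my_db_user
echo 'username: "${MYAPP_DB_USER}"' | sed "s/\${MYAPP_DB_USER}/$MYAPP_DB_USER/"
# prints: username: "my_db_user"
```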
This requires some extra setup in .deliver/config so that it’ll actually pick up this file when building the release:
# For *Phoenix* projects, symlink prod.secret.exs to our tmp source
pre_erlang_get_and_update_deps() {
status "Linking to prod.secret.exs replacement config"
local _prod_secret_path="/home/builder/prod.secret.exs"
if [ "$TARGET_MIX_ENV" = "prod" ]; then
__sync_remote "
ln -sfn '$_prod_secret_path' '$BUILD_AT/config/prod.secret.exs'
"
fi
}
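A small detail in that hook: ln -sfn rather than plain ln -s. The hook runs on every build, and ln -s fails once the link already exists; -f replaces the existing destination, and -n avoids dereferencing it if it happens to be a symlink to a directory. A throwaway demonstration:

```shell
# throwaway demo of ln -sfn semantics, in a temp directory
cd "$(mktemp -d)"
echo first  > a.exs
echo second > b.exs
ln -sfn a.exs link.exs   # first build: creates the link
ln -sfn b.exs link.exs   # later build: replaces it instead of failing
cat link.exs             # prints: second
```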
Since we want Distillery to replace the environment variables when our app is started in production, we need to make sure that the user which runs our app sets REPLACE_OS_VARS=true in addition to the env vars above; for example, we could add this to $HOME/.bashrc or whatever file sets the environment in your setup:
# Elixir runtime config
export REPLACE_OS_VARS=true
export PORT=4000
export MYAPP_DB_USER=my_db_user
export MYAPP_DB_PASS=my_db_password
export MYAPP_DB_NAME=my_db_name
export MYAPP_DB_HOST=my.db.host
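One gotcha: if REPLACE_OS_VARS=true is missing, the release boots with the literal "${MYAPP_DB_USER}" strings as its config values, which tends to fail in confusing ways (e.g. auth errors against the database). A cheap sanity check is to grep the release's generated config for leftover placeholders; the path below is hypothetical and depends on your Distillery version and release layout:

```shell
# SYS_CONFIG path is hypothetical; adjust for your release layout
SYS_CONFIG="/home/app/my_app/var/sys.config"
if [ -f "$SYS_CONFIG" ] && grep -q '\${' "$SYS_CONFIG"; then
  echo "warning: unreplaced placeholders in $SYS_CONFIG"
fi
```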