Could use some feedback on this multistage Dockerfile (first Elixir/Phoenix deployment)

Hi,

I’m currently building an app prototype in Phoenix 1.5 and I want to run it in a local Kubernetes cluster (a multi-node k3d cluster with Prometheus/Grafana/Loki) to try some different scenarios. Since this is the first time I’m deploying a Phoenix app, I’d love some feedback.

I have enabled Docker BuildKit. This makes builds a lot faster because Docker then runs a lot of steps in parallel. It also makes it possible to use cache mounts (with the experimental syntax enabled).

I also followed the Phoenix releases guide. I can pass in SECRET_KEY_BASE when I run the image, and I set the release option to only create Unix executables.

I tested this on what is basically a fresh Phoenix 1.5 install. The final image is just 22.9 MB and it seems to run perfectly fine.

Is there anything else I have to pay attention to? Other packages I might need at runtime?

# syntax=docker/dockerfile:experimental
FROM hexpm/elixir:1.10.2-erlang-22.3.2-alpine-3.11.3 AS deps
WORKDIR /app

COPY lib ./lib
COPY config ./config
COPY mix.exs .
COPY mix.lock .

ENV MIX_ENV prod

RUN mix do \
      local.rebar --force,\
      local.hex --force,\
      deps.get --only prod


# Build Phoenix assets
# Using stretch for now because it includes Python
# Otherwise you get errors, could use a smaller image though
FROM node:13.13.0-stretch AS assets
WORKDIR /app

COPY assets/package*.json ./assets/
COPY --from=deps /app/deps ./deps
RUN --mount=type=cache,target=/root/.npm,sharing=locked \
      npm --prefer-offline --no-audit --progress=false \
      --loglevel=error --prefix ./assets ci

COPY assets/ ./assets

RUN npm run --prefix ./assets deploy


# Phoenix digest
FROM deps AS digest
COPY --from=assets /app/priv ./priv
RUN mix phx.digest


# Create release
FROM digest AS release
ENV MIX_ENV prod
RUN mix do compile, release


# Release
FROM alpine:3.11.3 as deploy
WORKDIR /app

RUN apk add --no-cache openssl ncurses-libs

COPY --from=release /app/_build ./_build

ENV SECRET_KEY_BASE=

EXPOSE 4000

ENTRYPOINT ["/app/_build/prod/rel/your_app/bin/your_app"]
CMD ["start"]

You are using the RUN cache mount option for npm but not for mix or apk. You can improve those the same way as npm by using local caches of their packages.

It’s Erlang and not Elixir, but I discuss hex package caching and Alpine caching here: https://adoptingerlang.org/docs/production/docker/#efficient-caching
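
For the Alpine-based images above, a rough sketch of what those extra cache mounts might look like (paths assume the root user; apk only uses the cache dir when /etc/apk/cache points at it):

# Cache hex packages and rebar3 between builds
RUN --mount=type=cache,target=/root/.hex/packages/hexpm,sharing=locked \
    --mount=type=cache,target=/root/.cache/rebar3,sharing=locked \
      mix do local.rebar --force, local.hex --force, deps.get --only prod

# Cache downloaded apk packages; the symlink makes apk use the mounted cache dir
RUN --mount=type=cache,target=/var/cache/apk,sharing=locked \
      ln -s /var/cache/apk /etc/apk/cache && \
      apk add openssl ncurses-libs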


Wow, that’s impressive. You’ve already tried a lot of good ideas in the Dockerfile, e.g. npm ci, BuildKit, etc.

There are some subtle things I think can make it better:

In the deps stage, lib is not necessary for mix deps.get. Leaving it out will save you a lot of time when you only modify your own code without touching any dependencies or configuration.
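
A rough sketch of the idea (based on the stages in your Dockerfile, not your exact file): fetch deps from just the mix files and config, and copy lib only where compilation happens:

FROM hexpm/elixir:1.10.2-erlang-22.3.2-alpine-3.11.3 AS deps
WORKDIR /app
ENV MIX_ENV prod
COPY mix.exs mix.lock ./
COPY config ./config
RUN mix do local.rebar --force, local.hex --force, deps.get --only prod

# lib is only needed once we actually compile
FROM deps AS release
COPY lib ./lib
RUN mix do compile, release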

In my builds, I copy the release files to a directory unrelated to the app name, for example /app. This simplifies your entrypoint path and makes the Dockerfile more reusable.
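
For example (a sketch; your_app stands for whatever release name mix release produces):

FROM alpine:3.11.3 AS deploy
WORKDIR /app
# Copy only the assembled release, not the whole _build tree
COPY --from=release /app/_build/prod/rel/your_app ./
ENTRYPOINT ["bin/your_app"]
CMD ["start"]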

I would put this secret variable in a runtime configuration, e.g. config/releases.exs, without exposing it at build time. For K8s, it’s usually stored in a cluster Secret.
Sorry, please forget that, I made a mistake; it is a runtime variable here.


One minor improvement you could do is:

Replace:

COPY mix.exs .
COPY mix.lock .

With:

COPY mix* ./

That should reduce your total layers by 1 and very slightly improve build speeds (probably won’t be noticeable but it’s still a small win).

@tristan thanks! I totally forgot about doing that for those as well. Unfortunately I’ve had way too much Node/Angular experience lately; you definitely learn how to optimize, though.


@qhwa ah nice, thanks. I’ll look into moving the lib copy to another stage.

You can definitely do a lot with BuildKit. For an Angular build pipeline we use a multistage build as well. If you run tests / linting / compile checks in different stages that don’t depend on each other, they will run in parallel. You just have to make sure another stage copies some output from those stages (even if it’s just empty files), otherwise BuildKit just says “ah, you’re not using any output, I’ll skip these stages completely”.
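
A rough sketch of that pattern (stage names and marker paths here are made up, not from the Dockerfile above):

# Hypothetical check stages that only depend on deps, so BuildKit can run them in parallel
FROM deps AS test
ENV MIX_ENV test
COPY lib ./lib
COPY test ./test
RUN mix deps.get && mix test && touch /tmp/tests-passed

FROM deps AS lint
COPY lib ./lib
RUN mix format --check-formatted && touch /tmp/lint-passed

# Copying the (empty) marker files forces BuildKit to actually run the check stages
FROM deps AS checked
COPY --from=test /tmp/tests-passed /tmp/
COPY --from=lint /tmp/lint-passed /tmp/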

@cnck1387 true. I thought about checking whether those were the only two mix files but forgot. The difference will probably be tiny indeed.

Lots of great stuff here, thanks!

I’d be grateful if you could post the “finished” version here later. :)

When you are done … you could think about sharing it with the community in Adding Solid Docker Compose Recipes. :)

@tmjoen / @JovaniPink sure thing! I found some other helpful info as well and will be trying to implement the changes this weekend. I’ll definitely post the final result.


Here’s the version I ended up with. I tried to document all the steps.

As a reference:

Intel NUC Skull Canyon (quad-core Core i7 mobile processor)

Phoenix 1.5 project with LiveView enabled, no Ecto.
Tailwind CSS and a Prometheus telemetry reporter added.

Initial build time: 1 min 17 sec
Rebuild after a code change with all layers cached: 5 sec (tested by changing one of the metrics in telemetry.ex)

Final image size: 19.7 MB (19.1 MB without openssl)

edit: @JovaniPink you’re free to use this example any way you like. I won’t be using Docker Compose myself. I prefer to use a local Kubernetes cluster to get more Kubernetes experience (trying k3d at the moment). But I’ll try to make notes about the things I run into and share them later on.

# syntax=docker/dockerfile:experimental

# This experimental syntax needs to be enabled for --mount=type=cache to work
#
# It's a buildkit feature (see https://docs.docker.com/develop/develop-images/build_enhancements/)
#
# Buildkit basically creates a dependency tree which enables it to execute quite a few stages
# and other processes in parallel
#
# It will also skip stages completely if buildkit determines they aren't needed
#
# You might wonder why this is useful
#
# Let's say you add a test stage to your dockerfile which depends on your build stage
# By default the last stage in your Dockerfile is the build target (docker build --target <stage> .)
# Now if I run a docker build with --target test, it will ONLY execute the steps needed for that stage
# and others are skipped
#
# We need to be up-to-date with the master branch before we can merge so in CI we run the
# tests in jenkins only for the feature branches
# These tests are skipped in master yet we can still use the same multistage dockerfile,
# only the target is different
#
# Parts of the stage dependencies will still be the same like the deps stage in this file
# This means it will still use the docker cache created when running the test stage when you
# build the actual release
#
# I also recommend creating a .dockerignore file (especially for local use) to make sure
# the docker context stays as small as possible and you don't copy files into your stages
# that you don't need/want
#
# My current .dockerignore contents:
#
# .elixir_ls
# .git
# assets/node_modules
# deps
# _build

# Dependency stage
FROM hexpm/elixir:1.10.2-erlang-22.3.2-alpine-3.11.3 AS deps

# In case you're behind a proxy
ARG http_proxy
ARG https_proxy=$http_proxy

WORKDIR /app

COPY config ./config
COPY mix.exs mix.lock ./

ENV MIX_ENV prod

# Use the hex and rebar cache directories as cache mounts
# (absolute paths, since ~ is not expanded in --mount targets)
RUN --mount=type=cache,target=/root/.hex/packages/hexpm,sharing=locked \
    --mount=type=cache,target=/root/.cache/rebar3,sharing=locked \
      mix do \
      local.rebar --force,\
      local.hex --force,\
      deps.get --only prod


# Build Phoenix assets
# Using stretch for now because it includes Python
# Otherwise you get errors, could use a smaller image though
FROM node:13.13.0-stretch AS assets
WORKDIR /app/assets

COPY --from=deps /app/deps /app/deps/
COPY assets/package.json assets/package-lock.json ./
# Use the npm cache directory as a cache mount (absolute path, since ~ is not expanded in --mount targets)
RUN --mount=type=cache,target=/root/.npm,sharing=locked \
      npm --prefer-offline --no-audit --progress=false \
      --loglevel=error ci

COPY assets/ ./

RUN npm run deploy


# Create Phoenix digest
FROM deps AS digest
COPY --from=assets /app/priv ./priv
RUN mix phx.digest


# Create release
#
# phx.digest also does a partial compile
# I tested doing the "mix do compile, phx.digest, release" in a single stage
# This made things quite a bit worse
# It meant it would do a complete recompile even if just a single line of code changed
# With the stages separated most of the compilation is cached
#
# On my machine (quad-core mobile i7 from a few years ago) it only takes around 5 seconds
# after I change a single line of code to build a new image, because almost everything is cached
# The initial build (including pulling all images, which depends on your network speed) takes
# around 1 minute and 20 seconds
FROM digest AS release
ENV MIX_ENV prod
COPY lib ./lib
RUN mix do compile, release


# Create the actual image that will be deployed
FROM alpine:3.11.3 as deploy

# openssl might not be needed if ssl is handled outside the application (e.g. a kubernetes ingress)
# It adds around 0.6 MB to the image size
# I'm thinking about creating multiple nodes and having them communicate with each other over ssl
# so I'm leaving it in for now
# If anyone knows when to include it or when not to, please share :)
#
# Linking /etc/apk/cache makes apk actually use the cache mount
RUN --mount=type=cache,target=/var/cache/apk,sharing=locked \
      ln -s /var/cache/apk /etc/apk/cache && \
      apk add openssl ncurses-libs

# Don't run the app as root
USER nobody

# Set WORKDIR after setting user to nobody so it automatically gets the right permissions
# When the app starts it will need to be able to create a tmp directory in /app
WORKDIR /app

# Include chown to make sure the files have the correct permissions
# You might think you could do a "RUN chown -R nobody: /app" after the copy
# DON'T do this, it will add an extra layer which adds about 10 MB to the image
# Considering an image for a new phoenix app ends up around 20 MB, that's a huge difference
COPY --from=release --chown=nobody: /app/_build/prod/rel/phoenix ./

# SECRET_KEY_BASE will be provided when running the application
ENV HOME=/app \
    SECRET_KEY_BASE=

EXPOSE 4000

# To test the image locally:
# docker build -t phoenix .
# docker run -p 4000:4000 --env SECRET_KEY_BASE="<your secret key base>" phoenix
ENTRYPOINT ["bin/phoenix"]
CMD ["start"]

You may want to chmod +x the entrypoint binary so it’s guaranteed to be executable, in case you end up in a situation where the filesystem used to build the image loses the execute bit.
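
If you ever do copy a pre-built release in from the build context (rather than building it in an earlier stage), a minimal sketch of that (paths hypothetical):

COPY --chown=nobody: _build/prod/rel/your_app ./
# Restore the execute bit in case the source filesystem dropped it
RUN chmod +x bin/your_app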


@cnck1387 ah ok, thanks! Good to know. I’ve never run into an issue like that before.

I have on occasion. In one of my earlier Docker courses I didn’t do that step and I distributed a zip file to folks who signed up. Certain unzip utilities stripped the executable bit from all files, so a number of people ended up running into that problem.


Annoying when things like that happen. I’ve had my fair share of trouble with Docker and docker-compose, especially on Windows. I only use Linux or Linux VMs now; it saves me a lot of headaches.

Great work. Thank you for sharing.

@cnck1387 I was tweaking the Dockerfile a little more and thinking about what you said about chmod. Did you mean providing people with the source code and having them build the Docker image locally? If so, I understand the issue with losing the execute bit; I’ve had those issues before. In this particular case, though, the release (including the executable) gets built inside the Dockerfile and doesn’t get copied in from outside.

Yeah, that’s what I did in that case.

How does the bin/ directory make its way into the image?

Ah, then I know all about the issue you ran into. I run into it with git at times, where I have to add the execute bit later on. I was confused at first because it’s the mix release command that actually creates/bundles all the files that end up in the app folder. They are created in the release stage inside the Dockerfile and copied into the final image with COPY --from=release, so nothing from outside actually gets copied into the final image.

Good job! Thanks to you, I’m improving my old projects, inspired by this awesome Dockerfile.

I think there is still an opportunity to make it even better: we can add a compile stage where we run mix compile. This stage will run in parallel with npm run deploy and makes the subsequent mix phx.digest and mix release very fast.
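
For instance, a sketch of such a compile stage, reusing the stage names from the Dockerfile above (just the idea, not necessarily the exact file):

# Compiles the app in parallel with the assets/npm stage
FROM deps AS compile
COPY lib ./lib
RUN mix compile

# digest and release then have very little left to do
FROM compile AS digest
COPY --from=assets /app/priv ./priv
RUN mix phx.digest

FROM digest AS release
RUN mix release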

Here’s what I’ve got so far, and it works pretty well~

I learned a lot of new techniques from this thread. Thank you!


It can definitely be improved :). I’m still tweaking it myself. I removed the node image and just added node and such to the Elixir base image, which saves another image download. In other frontend projects I want a bit more control over the Node version used, but here it doesn’t matter as much. I also created a separate RUN command for installing hex/rebar, so they won’t be reinstalled every time a dependency changes.
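
Roughly like this (a sketch; the package names and layer split are assumptions, adjust for your Alpine version):

FROM hexpm/elixir:1.10.2-erlang-22.3.2-alpine-3.11.3 AS deps
WORKDIR /app
ENV MIX_ENV prod

# node/npm on the same base image, so no separate node image to pull
RUN apk add --no-cache nodejs npm

# hex/rebar in their own layer, so a dependency change doesn't reinstall them
RUN mix do local.rebar --force, local.hex --force

COPY mix.exs mix.lock ./
COPY config ./config
RUN mix deps.get --only prod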

I checked your Dockerfile and there are a few things I noticed:

It’s recommended to use COPY instead of ADD unless you need specific functionality that ADD provides.

ADD . . will copy the whole context. This means if a single file changes, even if you don’t care about it in that stage, the layer cache will be invalidated and everything that depends on it will have to run again. Try to copy only the files you need for that stage.

Even though some stages can run in parallel like you mentioned, by doing so you’re invalidating the Docker layer cache more frequently. You’re doing an ADD . . before the compile; even if you only copied the lib/deps folders, it would still do a full compile when you change a single line, so the compilation that’s only needed for phx.digest would run again as well.

The trick with Dockerfiles is not to optimise for a single build but to make sure you invalidate the layer cache as little as possible (and keep the final image as small as possible, of course). Try changing a single Elixir file and running the build again, then do the same after changing a CSS file, and check the output to see what’s being cached and what isn’t. Right now a rebuild after changing a single line of Elixir code takes 5 seconds in my project; I have a feeling it will take quite a bit longer in your case.
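
To make that concrete, a hedged before/after sketch of the same compile step:

# Before: any change anywhere in the build context invalidates this layer
# and everything after it
ADD . .
RUN mix do compile, release

# After: the layer is only invalidated when the copied paths change
COPY mix.exs mix.lock ./
COPY config ./config
COPY lib ./lib
RUN mix do compile, release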
