Should MIX_ENV be supplied as a Docker build-arg?

I’m working on dockerizing a Phoenix app and I’m wondering if the MIX_ENV needs to be set as a Docker build-arg. Because environment variables are read at compile time, I think your Docker image would have to know about which MIX_ENV it should be using, right? In other words, when you build a Dockerfile for an Elixir/Phoenix app, you are building an environment-specific image (e.g. the prod build, the test build, etc).

I’m looking over this blog post, and the Dockerfile it uses hard-codes the MIX_ENV right at the top of the file – but it seems to me that this could be replaced by a build-arg.
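To make the idea concrete, here is a rough sketch of what I have in mind (the base image tag and commands are just illustrative, not taken from the blog post). Note that an `ARG` declared before `FROM` has to be re-declared after it to be visible in the build stage:

```dockerfile
# Choose the environment at build time instead of hard-coding it:
#   docker build --build-arg MIX_ENV=test .
ARG MIX_ENV=prod

FROM elixir:1.14-alpine
# Re-declare so the arg is visible inside this stage
ARG MIX_ENV
ENV MIX_ENV=${MIX_ENV}

WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
# Fetch only the deps for the chosen environment
RUN mix deps.get --only $MIX_ENV
COPY . .
RUN mix compile
```

With this, the same Dockerfile can produce a prod or a test image depending on the `--build-arg` passed.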

How are others handling this? A lot of CI could be simplified if you could build once and choose the environment later…

I have totally different Dockerfiles for prod vs test because they have different dependencies. As such, I can just hard-code the env at the top.

If you don’t have different dependencies and just need different environment-specific settings, you can use `fetch_env` in your `releases.exs` file.
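For example (a sketch; the app name and env var names are placeholders), `config/releases.exs` is evaluated when the release boots rather than at compile time, so the same image can be configured differently per environment:

```elixir
# config/releases.exs (config/runtime.exs on newer Elixir versions)
# Evaluated at release boot, NOT at compile time.
import Config

config :my_app, MyApp.Repo,
  # Raises at startup if the variable is missing
  url: System.fetch_env!("DATABASE_URL")

config :my_app,
  api_token: System.fetch_env!("API_TOKEN")
```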

This is a really good tutorial I found useful:

And a shameless plug, I wrote a post (largely based on the above) for creating a release / docker image for use in a GitLab pipeline:


We hardcode MIX_ENV in our Dockerfiles since prod and test builds are different:

  • the prod image is a multi-stage build: it uses `mix release` and a minimal set of permissions to run the app
  • the test image tries to mimic the prod image, but just compiles the app and adds the write permissions required by `mix test`
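As a rough sketch of the prod side (image tags, package list, and app name are illustrative, not our actual files):

```dockerfile
# Stage 1: build the release using the full SDK image
FROM elixir:1.14-alpine AS build
ENV MIX_ENV=prod
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN mix release

# Stage 2: copy only the release into a minimal runtime image,
# running as an unprivileged user
FROM alpine:3.18
RUN apk add --no-cache libstdc++ ncurses-libs openssl
RUN adduser -D app
USER app
COPY --from=build --chown=app /app/_build/prod/rel/my_app /app
CMD ["/app/bin/my_app", "start"]
```

The test image drops the second stage and keeps the build stage writable.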

Interesting… I’m a bit surprised by how many people I’ve heard from who use different Dockerfiles for prod than for test runs. Aren’t people worried about them specifying different dependencies or compiling different libraries? I can think of a few times where this type of thing bit us, e.g.

  • Some low-level SSL encryption stuff was compiled in one environment, but not in another. All requests to https API endpoints barfed.
  • Some developers wrapped bits of code in conditionals that tested for environment

Having a single Dockerfile for both prod and test would alleviate one of those problems… I guess I would prefer my test runs to test things in a way that mimics prod as closely as possible, and I always felt that a separate Dockerfile could potentially cause problems in that regard…

Well, usually you build the release using the Dockerfile and run it through the resulting image’s entry point. There is no room for running tests in this scenario, as you can’t run the tests from a release.

To run the tests I have a generic elixir image that just runs the tests.
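Roughly like this (the image tag and mount path are placeholders): the project is just bind-mounted into a stock Elixir image, so no app-specific test Dockerfile is needed at all:

```shell
# Run the test suite inside a generic Elixir image
docker run --rm \
  -v "$PWD":/app \
  -w /app \
  -e MIX_ENV=test \
  elixir:1.14-alpine \
  sh -c "mix local.hex --force && mix local.rebar --force \
         && mix deps.get && mix test"
```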

Also, for the test image, I usually do not care if it is perhaps 50 or 100 MB larger. The test runners will probably cache it for a long time anyway.

The deployed image, though, also gets regular updates of its base image, so it will probably be re-downloaded on deploy without any cached layers being reusable. Also, to be able to roll back, we have a policy of keeping at least the last 10 to 20 versions, or sometimes even everything from the last year or more (depending on the customer). So here every byte of the final image matters.

This is why we use different images: they serve different purposes in different environments.

We do not use Elixir in my company, though, but the flow is the same across almost all languages used by us and our customers.

Also, I have to say, Dockerfiles for test usually get hand-waved through review so that testing is up and running quickly again, but those for production require review by certain people for security audits, and to make sure they stay within size boundaries, don’t keep unnecessary libraries, etc…

This is quite often the case for compiled languages, where in production you’ll be running off a minimal runtime, while in development you need a full-blown SDK for doing things the runtime never needs.