Flexible Dockerized Phoenix Deployments (1.2 & 1.3)

Hello everyone, I recently redesigned my entire deployment process for Phoenix apps based on Docker. I really like the strategy that I came up with and it’s working very well for me so far. I have a new VPS which I’ve vowed to only install Docker on, and this strategy is perfect for my goal.

In addition, my strategy is optimized for:

  • Running multiple apps on the same server
  • Being compatible with deployment of any other kind of app (not just Elixir/Phoenix)
  • Compilation completely separate from deployment so you can compile anywhere and deploy anywhere else
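To give a rough idea of that last point, the flow looks something like this. This is only a simplified sketch (the image names and options here are illustrative); the build.sh and Dockerfiles in the Gist handle the real details:

# On any machine with Docker (dev box, CI, a beefier build server):
docker build -t myapp-build -f Dockerfile.build .
# Copy the release tarball the build image produced out of a container
# (e.g. with docker cp) and ship it to the deploy machine.

# On the deploy machine (only needs Docker, no Elixir toolchain):
docker build -t myapp-release -f Dockerfile.run .
docker run -d --name myapp-server -p 5000:5000 myapp-release foreground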

In the interest of helping others who might be struggling with the same thing, I’ve documented my entire process in a Gist. I would love to contribute to this wonderful community on what seems to be one of the most popular difficulties when it comes to Phoenix (deployment).

Please let me know if you have any thoughts. The post is here: Flexible Dockerized Phoenix Deployments.


Hey! Just wanted to drop a quick note saying thanks for the write-up :slight_smile:

I followed the guide last night and everything seemed to work fine. I’ll perform some tests and alterations and will comment back further if I find anything specific to talk about.


Awesome! I’m glad you found the guide useful! I look forward to hearing your comments :slight_smile:

Just added another section on how to (kind of) hot upgrade the server container without touching the database container. This should be pretty useful to people using this kind of Docker configuration because being able to swap out containers individually when your app changes is a huge advantage.
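The short version (simplified, and assuming the container names used elsewhere in this thread plus a shared user-defined network I’m calling myapp-net; the guide’s actual commands may differ):

# Rebuild only the release image from the new tarball
docker build -t myapp-release -f Dockerfile.run .
# Swap out the app container; myapp-db keeps running untouched
docker stop myapp-server
docker rm myapp-server
docker run -d --name myapp-server --network myapp-net -p 5000:5000 myapp-release foreground

If you drive everything through docker-compose instead, docker-compose up -d --no-deps server does roughly the same swap (assuming the app service is called server).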

Hey, I’ve been following this guide but can’t make it work. I’m not using a database (just a GenServer to store data), so I skipped all the DB parts, and I don’t want to use nginx either. I got to this point:

docker run --rm -it --name myapp-server -p 5000:5000 myapp-release foreground

and everything seems to be fine: I can see some logs from the app, but I can’t access it from the browser. I tried 0.0.0.0:5000 but can’t see anything. The only thing I added was ENV PORT=4000 in Dockerfile.run, since I’m using Phoenix 1.3. Is there anything I’m missing?

the only thing I added was: ENV PORT=4000

So you are listening on 4000, but mapping -p 5000:5000 (host port 5000 to container port 5000)? I don’t think that will work.

You can try changing either ENV PORT=4000 to ENV PORT=5000, or -p 5000:5000 to -p 5000:4000 (with -p the order is host:container, so that maps port 5000 on the host to TCP port 4000 in the container).
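Concretely, with the image and container names already used earlier in the thread, either of these should line up:

# Option 1: make Phoenix listen on 5000 inside the container (ENV PORT=5000 in Dockerfile.run)
docker run --rm -it --name myapp-server -p 5000:5000 myapp-release foreground

# Option 2: keep ENV PORT=4000 and map host port 5000 to container port 4000
docker run --rm -it --name myapp-server -p 5000:4000 myapp-release foreground

In both cases you would then browse to http://localhost:5000 on the host.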

I tried that and rebuilt everything, but I still can’t access it. Is it right to try to access it on my Mac at 0.0.0.0:5000? I changed the Phoenix port to 5000 by adding ENV PORT=5000 and passing -p 5000:5000 to docker run. I’m running this with my Mac as the host machine.

I’ve followed the tutorial up to

docker run --rm -it --name myapp-server -p 5000:5000 myapp-release foreground

and it hasn’t yet mentioned that you need to uncomment one of the lines below in order for “Phoenix in a release” to work.

config/prod.exs

# ## Using releases
#
# If you are doing OTP releases, you need to instruct Phoenix
# to start the server for all endpoints:
#
config :phoenix, :serve_endpoints, true # <--- here
#
# Alternatively, you can configure exactly which server to
# start per endpoint:
#
#     config :test, TestWeb.Endpoint, server: true

Otherwise Cowboy is not started. AFAIK it is left commented out by default so that Cowboy isn’t started by other tasks like phx.digest.

Maybe that’ll help you. Don’t forget to re-run ./build.sh to rebuild the release.


EDIT: server: true is mentioned later in section 7. :+1:


Thanks for putting this guide up. I have some questions about other options in the release building process, and the docker-compose setup.

  1. Have you looked at generating the release tarball using multi-stage builds rather than your current build script?

  2. Shouldn’t the db service in the docker-compose be using a volume for the data in case the container dies?


Just a note: if you’re accessing it locally, you want http://127.0.0.1:5000 instead.

Thanks for reading! That should work. If you have ENV PORT=5000 in your Dockerfile.run and you are running the container with -p 5000:5000, you should be able to access it from your host machine at localhost:5000 or 127.0.0.1:5000.

I would make sure that you completely rebuild everything by deleting all of the images first just so you know Docker isn’t using any cached stages. You can do this with docker rmi myapp-build myapp-release.

However, I think at that stage in the guide you still won’t be able to run the server, because you haven’t changed the config, which isn’t explained until step 7. I will try to update the guide so that running things manually comes after that.

Thanks for the comment! I actually looked into both of those things and I’ll try to address both of them.

  1. Multi-stage builds are great when you want the two-image setup shown in this guide (the first image has Elixir and all of those goodies installed for building the release, while the second just has Erlang installed) handled in a single Dockerfile. However, a major goal of this deployment strategy was to be able to compile releases on any machine, whether that is your development machine, a staging machine, or any other machine. This way, you can compile the release in the first stage and then copy it to your deployment machine to actually run it. This is intentional: I have a lot of low-cost servers which can’t handle actually compiling the release, so I have to do that first stage elsewhere. If you are sure that you want to do both steps on your deploy machine, then multi-stage builds would be perfect (there’s a sketch after this list); it just isn’t a goal of my strategy.

  2. Yes. The only reason I didn’t include this in the guide is that most of my apps are seeded with data at startup, so they don’t really “lose” anything if the database container goes away; there just isn’t much data to lose, so I didn’t consider it part of my strategy. However, I’ve been thinking about this, and for the strategy to be applicable to more kinds of apps I definitely need to explain how to do it. So I totally agree with you, and I plan on updating the guide soon with another section on that (a rough sketch is below)!
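For 1, in case anyone does want to build and run on the same machine, a single multi-stage Dockerfile would look roughly like this. It’s only a sketch under my own assumptions (image tags, an app named myapp, distillery’s default output path) rather than something from the guide, and it needs Docker 17.05+:

# --- build stage: has Elixir and the full build toolchain ---
FROM elixir:1.5 AS build
ENV MIX_ENV=prod
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY . .
RUN mix deps.get --only prod && mix compile && mix release --env=prod

# --- run stage: only needs Erlang ---
FROM erlang:20
WORKDIR /app
# Path depends on your distillery version/config
COPY --from=build /app/_build/prod/rel/myapp .
EXPOSE 5000
ENTRYPOINT ["/app/bin/myapp"]
CMD ["foreground"]

For 2, the kind of thing I have in mind for the db volume is roughly this in docker-compose.yml (service and volume names are just examples, not necessarily what will end up in the guide):

version: "2"
services:
  db:
    image: postgres:9.6
    volumes:
      # Named volume so the data survives the container being removed
      - db_data:/var/lib/postgresql/data
  server:
    image: myapp-release
    command: foreground
    ports:
      - "5000:5000"
    depends_on:
      - db
volumes:
  db_data: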

Awesome suggestions :smile:

Volumes can also serve as a cache for dependencies, the compiled Dialyzer PLT (for dependencies), and probably some other things that don’t change often.
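For example, a one-off compile in a container where the deps and _build dirs are named volumes reused across runs (the image tag, volume names, and paths are just examples, independent of the guide’s build.sh):

docker run --rm \
  -v "$PWD":/app -w /app \
  -v myapp_deps:/app/deps \
  -v myapp_build:/app/_build \
  -e MIX_ENV=prod \
  elixir:1.5 \
  sh -c "mix local.hex --force && mix local.rebar --force && mix do deps.get, compile"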


After deleting all the images I had (quite a few) and doing everything from scratch, it worked. Thanks!


Nice work,
Please keep us updated


At which point would it be best to add a phase that generates a TLS cert from Let’s Encrypt? Let’s Encrypt requires serving challenge files from a “.well-known” dir (certbot writes them there and Let’s Encrypt reads them over HTTP) to validate your server.

This is really hard to do with Docker because the file system is not writable.

Hi, I would say that Docker is for developers, not for production.
If you want to run in production, the options are Docker Swarm or K8s.
I saw one blog post about Docker Swarm.

@maz maybe you can put a reverse proxy like HAProxy (or even nginx, though I consider nginx to be more of a web server than a proxy) in front of your containers and terminate TLS / generate certs there? Or, if you decide to stick to Erlang/Elixir, use a post-start hook in Distillery?
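Something bare-bones like this, for example. It’s just a sketch: the domain, the webroot path, and the upstream name/port are made up, and the webroot dir has to be shared with whatever runs certbot (a bind mount or volume works):

server {
    listen 80;
    server_name example.com;

    # certbot (webroot plugin) drops challenge files here; Let's Encrypt fetches them over HTTP
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # TLS ends here; plain HTTP goes on to the app container
        proxy_pass http://myapp-server:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}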

I would also manage the TLS cert stuff at the load balancer / reverse proxy level.

Yeah, it makes a lot of sense to put the cert dependency on the proxy.

For instance, if one decides to use that key to generate JWTs, they can use the proxy as a centrally located source. I’m speaking theoretically here; if anyone sees that as a negative, I would like to hear the rationale.