What is the recommended way to deploy right now? And what are everyone’s thoughts about Docker?

In general I find that using releases (Distillery) is the best bet (I have an email course on it here: http://bit.ly/elixir-release-ecourse). Once you have the release, you can just SCP the tar file up to the server, untar it, and run it. Docker works too if you want it, but it may just be an additional step for no good reason.
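
For example, a rough sketch of that flow, where my_app, the version number, and user@server are placeholders:

    # build the release locally, then ship the tarball to the server
    scp _build/prod/rel/my_app/releases/0.1.0/my_app.tar.gz user@server:/opt/my_app/

    # unpack and start it on the server
    ssh user@server 'cd /opt/my_app && tar -xzf my_app.tar.gz && bin/my_app start'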

3 Likes

Do any Elixir folks have access to GitHub Actions? We could share some common, basic infra configs: just configure credentials and go. It’s still the old Docker-and-CI approach, but I think it’s less involved.

1 Like

The only thing that was said, IIRC, was that they can max out the CPU performance within a container compared to running it directly on something like an Ubuntu server on DO. I am also not that familiar with containers because I’ve for some reason steered away from them until now, and this was also over a few beers, so I could be hazy on the details. :rofl:

At work I’ve used Edeliver and Distillery, but my coworkers and I are in the process of moving our fleet of Elixir apps over to Docker and Kubernetes. We’re roughly 1/2 done.

It seems like people either love or hate Kubernetes. I had the exact opposite conversation with two different people at EMPEX this month: one was migrating to Kubernetes and the other was migrating off it. The more I went around and asked people what they thought, the clearer it became that there was no middle ground.

If you’re deploying to a single Droplet, I’m fairly certain that adding Docker is going to be more pain than gain*. I do not recommend using Docker for that situation unless you want to do it just for the learning process. And even if the learning process is important, I would still recommend getting the OTP release (Distillery) process learned without Docker first. OTP releases are complex and error-prone enough as it is.

Some folks listed some benefits that Docker may provide you, but again I wouldn’t reach for Docker until it’s clear you need those benefits.

*One place where Docker might be an overall benefit is building your OTP release locally. You have to build your OTP release on the same OS distribution and version (e.g. Ubuntu 16.04) that the production machine is running. Docker gives you the ability to build your release locally in that OS environment without having to manage a virtual machine. You would use the Docker container to run mix release and write the release files to your local file system. Then those files can be tarred up, uploaded to your production Droplet, and deployed.
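
As a minimal sketch, assuming a builder image you have put together yourself (hypothetically tagged release-builder) on top of ubuntu:16.04 with Erlang and Elixir installed, Distillery’s mix release task, and my_app as a placeholder for your app name:

    # run the build inside the container; the project is mounted from the host,
    # so the release files end up on your local file system
    docker run --rm \
      -v "$PWD":/app -w /app \
      -e MIX_ENV=prod \
      release-builder \
      sh -c "mix local.hex --force && mix local.rebar --force && \
             mix deps.get && mix release --env=prod"

    # the tarball is now ready to upload to the Droplet
    ls _build/prod/rel/my_app/releases/*/my_app.tar.gz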

I suspect what someone said to you about not being able to run mix commands was actually about OTP releases, not Docker. One feature of OTP releases is that the production machine doesn’t need to have Elixir (or Erlang) installed on it. The Erlang runtime is included in the release. When you use mix on your development machine, it’s a system utility that is associated with your system Elixir. On your production machine running an OTP release, Elixir and mix aren’t there. But, just because you’re doing an OTP release doesn’t mean you can’t install Elixir and mix on the production machine yourself. You can do that. Just watch out for diverging Elixir versions.

It wasn’t until the host of ElixirTalk was telling me how Docker allows their app to perform more efficiently that I started to wonder what everyone else was doing. Before that I was using Distillery with edeliver on a DO droplet.

What they were probably talking about is the situation where you have multiple Docker containers running on a single machine. You can constrain the containers’ resources so that one container doesn’t use more than its fair share of resources.
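
For example, capping one container so it can’t starve its neighbours (image and container names are placeholders):

    # limit this container to 1.5 CPUs and 512 MB of RAM
    docker run -d --name my_app \
      --cpus="1.5" \
      --memory="512m" \
      my_app:latest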

5 Likes

You always have the option of not including ERTS with the release, by putting set include_erts: false in the Distillery rel/config.exs file and simply having Erlang installed on the machine.
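
Roughly, the relevant bit of rel/config.exs looks like this (environment name aside, include_erts is the standard Distillery option):

    environment :prod do
      # don't bundle ERTS; the Erlang installed on the target machine is used instead
      set include_erts: false
    end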

2 Likes

Check out Nanobox.io. You can deploy to DO. You could probably learn a lot by looking at their open source stuff.

I currently use docker with release builds. I’d love, love, love to do hot upgrades, but I don’t.

What I don’t really understand is why docker is so incompatible with elixir and its release style. Inside a docker container, you have a single vm running a single application on it and nothing else. It should be incredibly simple to build an upgrade from that and then apply that upgrade to the running container. It is a highly specific use case and it takes away a ton of assumptions and edge cases that normally would have to be taken into account.

I could imagine an image on Docker Hub designed for this, but it is a little beyond my knowledge.

@mindreader You can hot upgrade a running container, but you have to do it from ‘within’ that running container. In addition, you want to make sure you have an upgraded image for whenever it restarts.
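
As a rough sketch, assuming a Distillery release is already running in the container (the container name, install path, release name, and version are all placeholders):

    # copy the upgrade tarball into the running container's releases directory
    docker cp _build/prod/rel/my_app/releases/0.2.0/my_app.tar.gz \
      my_app_1:/opt/app/releases/0.2.0/my_app.tar.gz

    # apply the hot upgrade from inside the container
    docker exec my_app_1 /opt/app/bin/my_app upgrade 0.2.0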

The way described in this video seems to be more aligned with a container environment. Rather than relying on being able to upgrade the application around the existing state, state is migrated from the old node to the new one. One advantage is that the deployment unit can be resurrected on an entirely different server.

The approach requires design for migrate-ability but even hot upgrades require a certain amount of forethought.

1 Like

They have been silent for months; no one knows what’s going on at Nanobox.io. There is another thread about them here. I used to be a fan, but no longer.

That’s scary, I’m still using them.

I’m using docker and docker-compose with nginx as a load balancer. Within docker-compose I have a definition for two containers for the app (app_a and app_b) and one for postgres.
On the production host, I have a tiny shell script (sketched after the list) which:

  • pulls the new images and restarts the container for app_a
  • when app_a is up and running, it does the same for app_b
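
Roughly (the ports and the /health endpoint are specifics of my setup):

    #!/bin/sh
    docker-compose pull app_a app_b

    # roll app_a first and wait until it answers again
    docker-compose up -d --no-deps app_a
    until curl -sf http://localhost:4001/health > /dev/null; do sleep 1; done

    # then do the same for app_b
    docker-compose up -d --no-deps app_b
    until curl -sf http://localhost:4002/health > /dev/null; do sleep 1; done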

Although I still have to solve some race conditions, this works fine unless the upgrade needs to migrate the database in a way where the old version can’t keep working with the migrated DB schema. In that case, I have to stop both containers before upgrading.

To be honest, I do not expect this approach to hold up under heavy production load, but at the moment our customer doesn’t work at night, so I can live with a minute of downtime if necessary.

I appreciate the methods mentioned in this thread. I will try Distillery once I need to upgrade without any downtime. Anyhow I will stay with Docker because I feel so comfortable testing on my machine (OSX) and can be (99.99…%) sure it will work on the production machine (Linux) as well. I even sent images to the customer for previewing on their Windows machines.

Kubernetes seems to be too much effort for a tiny app like mine.

I go back and forth myself. My current setup is droplets + edeliver at the single-server stage, and k8s once I need more than one app server + LB.

We are deploying all our infrastructure with Docker (Postgres, Elixir backend, frontend SPA apps, some other services). It has never been a pain for me to manage the whole stack myself without a devops team.

For a small / mid-sized project with 1-3 nodes (without scaling) I would recommend giving Docker Swarm a try. (We were managing with Kubernetes from the start, but moved to Swarm because of Kubernetes’ large default resource consumption and the unnecessary over-complication for our stack requirements.)

The whole process is pretty straightforward (rough commands after the list):

  • commit
  • build docker image and push to registry
  • notify swarm about new image
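
In commands, roughly (the registry and service names are placeholders; $CI_COMMIT_SHA is GitLab CI’s commit variable):

    docker build -t registry.example.com/my_app:$CI_COMMIT_SHA .
    docker push registry.example.com/my_app:$CI_COMMIT_SHA

    # point the Swarm service at the new image; Swarm rolls the containers over
    docker service update --image registry.example.com/my_app:$CI_COMMIT_SHA my_app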

The image building process depends on your team, it’s either some proper build with Distillery or just mix phx.server to run the application.

Also, we don’t deploy staging / production servers manually; we just git push to trigger GitLab CI pipelines.

One separate issue with Docker deployments is image build time if you are using a cloud CI, because of the inability to reuse the cache from previous builds and the need for docker-in-docker (dind). We solved it by hosting our own worker for GitLab builds.

This issue doesn’t relate to Docker deployments. Best practice for such situations is to write migrations that don’t break the old code. You can do this in many ways, one of which is splitting a migration into several steps.
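
For example, a hypothetical column rename done in stages (module and field names are made up) might look like:

    # step 1: a migration the old code can live with: add the new column, keep the old one
    defmodule MyApp.Repo.Migrations.AddUsersNickname do
      use Ecto.Migration

      def change do
        alter table(:users) do
          add :nickname, :string
        end
      end
    end

    # step 2: deploy code that writes both columns and backfills the existing rows
    # step 3: once nothing reads the old column anymore, drop it in a later migration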

Good point. Which leads me to the question of whether, and how, a Docker approach can help with such a multi-step upgrade.

Docker has nothing to do with multi-step upgrades; it’s just a tool/platform to unify the process of deploying and running software.

It can help you unify the management process for all your software (backend, frontend, storage) and isolate the environment for a running program. Smooth deployments with proper rollbacks on failure are a matter of the development process (and code logic) rather than of the hosting platform running the program.

A Docker image is built in layers. At the point that the image is complete, all of those layers are write-protected: you can’t change them. If you build your Elixir system into the image, then its code cannot be changed later. You could say it’s baked into the image.

Later, when you create a container based on the image, the runtime adds a single writeable layer on top of the last layer in the image. When you do things that change the file system, those changes are made in this writable layer.

You can do hot upgrades with Elixir if you build an image, then run a container, then install the release into the running container. When you do that, the Elixir/OTP application (and probably the ERTS) are written into the writable layer of the container and can be modified. But setting up your release pipeline to account for the extra step of firing up a container and then installing the application into that container is added complexity. If you are running your application as a cluster of nodes, you would need to handle the install step whenever a node goes down and your orchestration system has to fire it back up. Couple that with the inherent complexity of designing your code to handle hot upgrades, and it all adds up.

2 Likes