Do you usually dockerize your Phoenix?

I came across this article about dockerizing a phoenix installation:

How to Run Your Phoenix Application with Docker

But it is from around 5 years ago. In practice do most of you run Phoenix applications inside docker?

How do you do it?


I am a very big fan of managing dev environments in Docker. I actually don’t even have Elixir or Erlang installed locally. When I am working on a simple app, I will run tests on a local image (previously I used the hexpm images, but my image bakes in some common additions that save me from needing a project Dockerfile as often):

docker run -it -e MIX_HOME=/data -v "$(pwd)":/data -w /data tfwright/elixir-git:1.14 mix test

For more complicated apps, I use compose to build and run everything. Here’s an example: live_admin/docker-compose.yml at main · tfwright/live_admin · GitHub
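For anyone who hasn’t used compose for this, the general shape is a service for the app mounted over your working tree plus a service for Postgres. This is just a sketch, not the file from the linked repo; image tags, credentials, and the database URL are illustrative:

```yaml
# docker-compose.yml (sketch -- versions and credentials are placeholders)
services:
  app:
    image: elixir:1.14
    working_dir: /app
    volumes:
      - .:/app                 # mount the project so code changes are live
    environment:
      MIX_HOME: /app/.mix      # keep hex/rebar artifacts inside the mount
      DATABASE_URL: ecto://postgres:postgres@db/app_dev
    command: mix phx.server
    ports:
      - "4000:4000"
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: postgres
```

With that in place, `docker compose run app mix test` covers the simple case and `docker compose up` runs the whole stack.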


Phoenix has a generator that creates a pretty basic Dockerfile. I use it with some slight modifications.

I’ve also hacked together a version that allows for development but I basically never use it.
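For those who haven’t run `mix phx.gen.release --docker`: it emits a multi-stage Dockerfile, roughly of this shape (a sketch from memory -- the generator pins exact base-image tags, and `my_app` is a placeholder release name):

```dockerfile
# Build stage: fetch deps, compile, assemble a release
FROM elixir:1.14 AS builder

ENV MIX_ENV=prod
WORKDIR /app

RUN mix local.hex --force && mix local.rebar --force

COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY config/config.exs config/prod.exs config/
RUN mix deps.compile

COPY priv priv
COPY lib lib
COPY assets assets
RUN mix assets.deploy
RUN mix compile
COPY config/runtime.exs config/
RUN mix release

# Runtime stage: slim image containing only the release
FROM debian:bullseye-slim
RUN apt-get update -y \
  && apt-get install -y libstdc++6 openssl libncurses5 locales \
  && apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/_build/prod/rel/my_app ./
CMD ["/app/bin/my_app", "start"]
```

The two-stage split keeps the final image small: build toolchain in the first stage, just the ERTS release in the second.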


For everything in the “cloud” yes. On my local raspberry pi I just run it directly :slight_smile:


I like the homogeneity that targeting container orchestration gives, because I do not live in a world where one backend language is the norm. I have been a proponent of the containerized deployment paradigm for 8-9 years now, and I wrote some of the earliest community coverage of the intersection of Kube and Elixir.

Container-wise, I find Kubernetes much preferable to e.g. Ansible, Nomad, ECS, Fargate, etc. for container orchestration. For me, multi-host with horizontal scaling and fungible worker nodes is non-negotiable. Thus far I have never been beholden to a shoestring budget, so given my druthers it’s pretty much always EKS + RDS.

If you have heard opinions that Kube obviates OTP or vice versa, IMO that’s somewhere between a misunderstanding and FUD. If you don’t believe me, maybe take the creator’s word for it.

I find people who lionize “I can run my entire app on a $10 VPS, so why on earth would I need containers” to be almost totally disconnected from my own professional values, requirements, and experiences.

In terms of Elixir-specific strategies, relatively little has changed materially about my tactics from a blog series I wrote in 2018. I write Kubernetes manifests in YAML, I use Kustomize for distinguishing logical environments like prod vs not-prod, I populate in-app config in runtime.exs from env vars or less often from ConfigMaps, and I base the OCI image loosely on the phx.gen.release --docker template someone else also mentioned.
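To make the runtime.exs part concrete, the pattern is just reading the container environment at boot. A minimal sketch (app, repo, and variable names are placeholders):

```elixir
# config/runtime.exs -- evaluated when the release boots, so values can
# come from container env vars (or a Kubernetes ConfigMap exposed as env)
import Config

if config_env() == :prod do
  config :my_app, MyApp.Repo,
    url: System.fetch_env!("DATABASE_URL"),
    pool_size: String.to_integer(System.get_env("POOL_SIZE", "10"))

  config :my_app, MyAppWeb.Endpoint,
    secret_key_base: System.fetch_env!("SECRET_KEY_BASE"),
    http: [port: String.to_integer(System.get_env("PORT", "4000"))]
end
```

Because this runs at boot rather than at compile time, the same OCI image works across every Kustomize environment; only the injected env differs.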


I find it rather difficult to justify paying a VM performance + battery tax for using containers very actively in development, especially on ARM Macs.

K8s gives you a lot of flexibility in terms of infrastructure, but a lot of the time a single-node application scaled vertically is much easier to manage, not to mention that the performance will also be better, at least until you reach that big-application threshold.

I may be wrong, and perhaps you can achieve vertical scalability just as easily with K8s, but from what I saw the idea is to use small, resource-limited pods.
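To illustrate what I mean by resource-limited pods, the typical pattern is to pin small CPU/memory requests and limits per container and rely on replica count for capacity (values here are just an example):

```yaml
# Fragment of a container spec in a Deployment: many small pods,
# each capped well below a full node's capacity
resources:
  requests:
    cpu: 250m        # a quarter of one core reserved
    memory: 512Mi
  limits:
    cpu: 500m        # throttled at half a core
    memory: 1Gi      # OOM-killed beyond this
```

Nothing in K8s forbids one huge pod per node, but the ecosystem defaults and autoscalers all push toward horizontal, not vertical, scaling.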


I understand if your work requires more scale and you are used to k8s, but I have found it overkill for 98% of everything I ever did. Admittedly, I have very rarely worked on stuff that’s seriously huge. That’s obviously a factor.

Distribution and orchestration are solved problems, people say. I say they are only theoretically solved, and only “solved” in a very nitpicky technical sense. In practice they introduce huge overhead: lost raw CPU performance, the need for dedicated and well-paid platform engineering teams, and more.

I’ll always prefer to buy a bare-metal server with 2TB RAM and clusters of enterprise NVMe SSDs and various redundancies compared to reaching for stuff like k8s. :person_shrugging:

I wanted to believe in k8s, but it’s a Frankenstein’s monster that’s destroying people’s sanity. Whoever knows it intimately, good for them; as for me, I want to get the job done this century without babysitting virtual Lovecraftian horrors until the end of time.

And don’t even get me started on YAML… if that’s the best this race can do then I’ll never miss commercial programming after I retire.


I think this problem is solved on the same scale as concurrency is “solved” in 95% of programming languages: primitive, to say the least.


I think that take isn’t fair either. Sure, you can buy all the solutions you can pay for. But to me, solutions are only half of the question; figuring out what’s broken when the solution doesn’t work for some reason is the other half. And that cost only goes up the more tech you stack on top of each other.


I will never be sold on Kubernetes. Containers sure, but Kubernetes and building and operating a cloud on the cloud and the engineering team to support it. No.

Ably provides an extremely scalable realtime messaging platform (built with Erlang and Elixir) with some very impressive SLAs (a 5x9s uptime guarantee), 350M active endpoints, and huge burst headroom. They have an excellent article on why they don’t use Kubernetes. It’s very sound reasoning.

Some interesting points from the article:

Ably is a public cloud customer. Our entire production environment exists on AWS and currently nowhere else. We run on EC2 instances. The total number of machines fluctuates with autoscaling throughout the day, but is always at least many thousands, across ten AWS regions. These machines do run Docker, and most of our software is deployed in containers.

On Kubernetes:

Packing servers has the minor advantage of using spare resources on existing machines instead of additional machines for small-footprint services. It also has the major disadvantage of running heterogeneous services on the same machine, competing for resources. This isn’t a new headache: cloud providers have the same problem – known as “noisy neighbors” – with virtual machines. However, cloud providers have a decade’s worth of secret sauce in their systems to mitigate this issue for their customers as much as possible. On Kubernetes, you get to solve it yourself all over.

Scaling the cluster up is relatively simple for the cluster autoscaler – “when there isn’t as much spare capacity as desired, add nodes”. Scaling down, however, gets complicated: you will likely end up with nodes that are mostly idle, but not empty. Remaining pods need to be migrated to other nodes to create an empty node before terminating that node to shrink the cluster.

The verdict on autoscaling is that it should still work similarly to how it does now, but instead of one autoscaling problem we would be solving two autoscaling problems, and they are both more complicated than the one we have now.

The previous section can be summarized as follows: we would be doing mostly the same things, but in a more complicated way.

Complexity. Oh, the complexity. To move to Kubernetes, an organization needs a full engineering team just to keep the Kubernetes clusters running, and that’s assuming a managed Kubernetes service and that they can rely on additional infrastructure engineers to maintain other supporting services on top of, well, the organization’s actual product or service.

And from another Ably article on cloud scalability and cost:

From the cloud provider’s point of view, they have the problem that when you’re not renting the machine, and even when nobody is renting the machine, it’s still there and they’re still incurring most of the cost of the machine existing. They need to adjust their pricing accordingly, so while the headline might be “You only pay for what you use”, the subtext is: “…at a rate that also pays for everything you’re not using, because we’re a business.”
Can they optimize their capacity planning so that at any point in time, very few machines are unused? Realistically, no; customers really hate it when they request a VM and are told there’s no capacity, so it’s necessary to run with very generous capacity reserves so that almost all requests fulfill instantly. If you market it as “on demand”, customers will have demands.

And this is how you get the biggest cost savings (hint: it’s not Kubernetes):

What can be done instead is to encourage customers to leave their VMs running and make longer term utilization trends more predictable. That’s why the major cloud providers offer very steep discounts for long-term use. 60 to 70 percent discounts off on-demand pricing are easily available, and with very large and very long term contracts, even 90% are possible.

So if you’re at hobby scale, use VPS plans until you outgrow them; and if you’re not Google scale, don’t pretend to be Google and drink the Kubernetes Kool-Aid. Commit to a base capacity and keep the architecture simple, rather than running dual layers of “cloud on cloud” management, and you’ll save a huge margin over anything else you could possibly do on the operations side.