I would like to know the strong reasons to go, or not to go, with Docker for an Elixir/Erlang application in production. This is the first time I have been asked to start with Docker in production. I am an Erlang/Elixir developer and have worked on production systems without Docker, including high-traffic production servers handling millions of transactions per second. I spent a day creating and running an Elixir application image and hit lots of issues with the network; I had to do a lot of configuration for DNS setup and so on. After that I started wondering what the strong reasons are for proceeding further. Are there any strong reasons to go, or not to go, with Docker for Elixir/Erlang applications in production?
I went through some of the reasons in the forums, but I am still not convinced. All the advantages that Docker provides are already there in the Erlang VM. Could any Erlang expert in the forum please help me?
Re-using my own previous answer to this sort of question with minimal changes:
The strongest case in my eyes for containerizing your BEAM apps is if you must integrate gracefully into a larger ecosystem containing multiple other languages, runtimes, etc. within your org. If you are already deploying other code with Docker, it’s probably a foregone conclusion that your BEAM projects would too. If you are greenfield BEAM languages all the way, it’d be harder to make a strong case for Docker, rkt, etc.
I’ll also reiterate from another related thread - the CPU/memory overhead of Docker is usually minimal, to the point of statistical insignificance; it’s the networking implications that can have a much more serious performance impact. Raw disk I/O can be hurt too, if you’re not using volumes where appropriate.
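To make that concrete: the networking cost mostly comes from Docker’s default bridge/NAT setup, and the disk cost from writing into the container’s layered filesystem. A minimal sketch of how you might sidestep both in a compose file - the service and volume names here are placeholders, not from the original posts:

```yaml
# Hypothetical docker-compose.yml sketch.
services:
  app:
    image: my_elixir_app:latest   # placeholder image name
    network_mode: host            # skips the bridge/NAT hop; trades away network isolation
    volumes:
      - app_data:/var/lib/my_app  # named volume: write-heavy data bypasses the layered FS

volumes:
  app_data:
```

Host networking also sidesteps the DNS/port-mapping configuration mentioned above, at the cost of sharing the host’s network namespace, so it is a deliberate tradeoff rather than a default.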
As long as whatever VM you use can easily be packaged up so you can move the image around, then sure, just host it straight. But at that point the VM itself is basically Docker, except with significant overhead compared to Docker, so the tradeoff doesn’t seem worth it - I’d toss the VM and just use Docker.
In my opinion:
Bare Metal is fine if it is a single server.
Docker is best if hosting lots of services to keep them well contained and easier to migrate if you need to replace/expand.
VM is really only useful if you need kernel segmentation, like if you need to run some Windows horror or something, otherwise why bother?!?
Docker guarantees a level of certainty about, and reproducibility of, the system my application runs on. I can’t get those when running on a bare operating system - at least not without learning other tools like Puppet or Chef.
Also, when scaling later on, I probably won’t be able to avoid Docker anyway, since most orchestration tools seem to rely on it.
Sorry, I was not clear about what I meant by “a virtual server”. I was talking about something like a DigitalOcean droplet - it can be backed up, respawned on another node, upgraded/downgraded, etc., and that’s what makes it feel weird to me to keep a Docker instance inside of it.
Well, is the BEAM thing the ‘only’ thing running in it? If so, just run it straight; if not (for instance, if you are running PostgreSQL in it too instead of on another server), then Docker is probably best. Even if it runs by itself, keeping it in Docker still makes it easier to migrate to another service if you don’t want DigitalOcean anymore or so. It’s not like it has any real overhead.
Ah, it’s one of these questions that can spawn debates going on for months :D.
The idea is really cool: you package your application and push it somewhere and it gets deployed. It’s a self-contained thing you push and run on a server, and it just works.
The reality is often less exciting and involves pulling and pushing gigabytes of “layers”, images that are “insecure by default”, are difficult to adjust and configure and - when you’re done with them - are left without being updated for months. Because you don’t want to touch them - if it breaks you’ll waste hours fixing it. I’m waiting for some new container technology to come to the scene soon and solve those problems. Until then, we’re sort of stuck with Docker, as a lot of tools and systems already depend on it as a default.
At Aircloak we’ve been using Docker for a few years now, and I personally find it very helpful in a couple of ways.
First of all, Docker made it simple for us to describe our entire production setup through code. The complete production setup is defined in dockerfiles which are part of our main repo. One great benefit of this is that it’s very easy to start and troubleshoot the production environment locally, regardless of the OS the developer is using. Going further, this paves the way for running tests in a production-like environment. We didn’t use to test through Docker, but more recently we’ve been shifting toward that approach, at least for integration tests.
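For readers who haven’t seen production defined in a dockerfile, a minimal sketch of what this can look like for an Elixir app - the project name `my_app` and the version tags are placeholders, not Aircloak’s actual setup:

```dockerfile
# Build stage: compile a self-contained release with Mix.
FROM elixir:1.16-otp-26 AS build
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN mix release

# Runtime stage: only the release is copied into a slim image,
# so shipping the image ships the whole installed system.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    libstdc++6 openssl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
COPY --from=build /app/_build/prod/rel/my_app /opt/my_app
CMD ["/opt/my_app/bin/my_app", "start"]
```

Because the file lives in the repo, the same definition serves local troubleshooting, integration tests, and the images shipped to customers.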
Relying on Docker images also helped us simplify installation. Our system is hosted by the end users; we ship them a Docker image that already has everything installed and configured, so the clients don’t need to care about our installation details.
Since we deal with multiple clients, we need to be able to easily debug and test different versions of our system. Again, this is where having Docker images really makes things simple for us. Want to quickly verify something on some previous version? Start a container for the desired image version, and you’re good to go. Need to develop and work with the previous version? Check out the related branch, make your changes, build the image, and start the container.
While Docker is certainly not perfect, I still think in most cases it’s a saner choice than bare installation, because it makes many things more predictable, and helps keep a lot of the production setup in code (which is always a good thing IMO). Personally, after working with and without Docker, I definitely prefer a Docker-based production setup.