Docker instead of an Elixir version manager (split thread)

I use a 100% Docker workflow for development, and Elixir is no exception :wink:

Currently my Elixir Docker Stack is a work in progress, but it is already usable.

With this approach I don’t need to install Erlang, Elixir, or any third-party libraries to be able to run different versions of Erlang and Elixir :slight_smile:


I use asdf as my version manager. With this approach I don’t need to install Docker or any other virtualization to be able to run different versions of Erlang and Elixir.

PS. Sorry, couldn’t help myself :blush:


But you need to install asdf :wink:

And I use Docker for other development tasks, and to run Sublime Text 3, VSCode, Postman, MySQL Workbench, and any other tool I may need :slight_smile:

NOTE: Docker is not virtualisation, at least not in the sense of VMs.


How do you use GUI tools using Docker? And the more important question, have you paid for a Sublime Text 3 license?!! :stuck_out_tongue:

Docker needs a VM with a Linux host on macOS and Windows, but other than that you are right.

How does it differ from “But you need to install docker”? :wink:

And I do use asdf for Python and Ruby, and if I need to touch something in Go, it’s installed through asdf too. I only use Docker to create an Erlang release for production/staging servers, because we use Jesse there and it’s not compatible with my dev machine. Though I probably should use a proper VM with an exact copy of the production/staging environment, because Docker once failed me miserably (though I was able to recover fairly quickly). I really fail to see what Docker’s advantages are (in development, or in general) over “native” solutions. Docker doesn’t even give you proper isolation… anyway, this is totally off-topic.


You just expose the X sockets through it. I’ve done it in the past as a test, but I don’t know the steps offhand as I don’t use Docker for that stuff.
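
For the record, a minimal sketch of that X-socket approach (the image name and the command are hypothetical placeholders; on most distros you also have to allow local containers to talk to the X server first):

```shell
# Let local Docker containers connect to the X server
# (this loosens X security a bit; fine for a dev machine).
xhost +local:docker

# Run a GUI app from a container by sharing the host's X11 socket
# and DISPLAY variable. "some/gui-image" and "sublime_text" are
# made-up placeholders, not real image/binary names.
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some/gui-image sublime_text
```

This is the core of it; sound, clipboard sharing, and GPU access each need extra flags on top.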


My operating system is free of any programming language and all its versions, plus databases, plus much other software… So I can play with as many versions of a piece of software as I want without messing with my operating system :wink:

Another thing: switching to another Linux distro, or to a new version of one, is not much overhead for me once all my tooling is inside Docker containers.

The advantages of Docker for development are huge: you can guarantee parity in the development environment across all developers on the team, and if Docker is also used in production, then 100% parity between development and production can be achieved.


Could you tell us more about it? I’m using Docker for building releases for Linux…

The thing with Docker images is that they are built from Dockerfiles, which can in turn be built from other Dockerfiles via the FROM directive, like this: FROM elixir:1.6.1, and it’s recursive. With Ruby or Elixir dependencies you have mix.exs and Gemfile files listing your dependencies, and corresponding mix.lock and Gemfile.lock files that list all dependencies (the ones you explicitly want and all the transitive ones that are dependencies of your dependencies), so you always have full control over what you are really using. Docker does not list anywhere (at least I didn’t find it) what images are going to be used when you build your Docker image.

Someone somewhere down the line changed the dependencies of their Dockerfile to use a base image with newer versions of some libraries I was using during the Docker build process. And although the Elixir release built like it always had, it suddenly stopped working in the production environment due to a mismatch of library versions (and paths, IIRC). It took me some time to find the cause of the failure, but eventually I did. Now, instead of using Docker images like any other sane dependency management tool would allow, I have to vendor all the Dockerfiles and images, just like you do when you want to keep your sanity with the idiotic (no pun intended) dependency non-management in Golang.

Maybe there is a better way to track Docker image dependencies, but I did not find one. Maybe I’m doing it wrong, but I don’t want to waste any more energy on Docker.

Anyway, the lesson is that you can’t trust Dockerfile dependencies to be reproducible.


Yes you can, and you should… that is the whole point of using Docker: to have reproducible builds.

You need to use the Docker Image Digest of an image instead of its Docker Image Tag.


FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

CMD bash
╭─exadra37@laptop ~/test
╰─➤  sudo docker build -t test .
[sudo] password for exadra37:
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2: Pulling from library/ubuntu
5a132a7e7af1: Pull complete
fd2731e4c50c: Pull complete
28a2f68d1120: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
Status: Downloaded newer image for ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
 ---> 07c86167cdc4
Step 2/2 : CMD bash
 ---> Running in f49d4193090b
 ---> 7422164e3384
Removing intermediate container f49d4193090b
Successfully built 7422164e3384
Successfully tagged test:latest
╭─exadra37@laptop ~/test
╰─➤  sudo docker run --rm -it test
root@22f3f656e58e:/# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 14.04.4 LTS
Release:	14.04
Codename:	trusty

As you can see, this build is immutable and reproducible for many years into the future :slight_smile:
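
If you don’t know an image’s digest, Docker itself can show it for any image you have pulled; a quick sketch (the ubuntu tag is just an example):

```shell
# Pull a tag, then ask Docker for the content-addressable digest behind it.
docker pull ubuntu:14.04
docker inspect --format '{{index .RepoDigests 0}}' ubuntu:14.04
# prints something of the form ubuntu@sha256:...

# Or list the digests of all local images at once:
docker images --digests
```

That digest is what you then paste into the FROM line to pin the base image.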

More reading:

For security reasons I don’t trust third-party images, just the official ones, thus I always build my own from official ones, and thus I’ve never had issues like the one you described.

Plus, building images on top of a chain of other images is asking for trouble; it is like using inheritance in OOP languages, which I avoid at all costs.


The problem then of course is, how do you invoke that dockerized IDE? With a very long and complex command line, so the obvious next step - wrapper scripts! Now you “just” install wrapper scripts for everything and that is somehow easier than installing the stuff straight away on your machine ;-).
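
To illustrate, a hypothetical minimal wrapper; everything here (image name, mount points, variable names) is made up, and real wrappers grow far beyond this, which is exactly the point:

```shell
#!/bin/sh
# Hypothetical wrapper around a dockerized editor. By default it only
# prints the docker command it would run; set DCODE_EXEC=1 to execute it.
IMAGE="${DCODE_IMAGE:-example/editor:latest}"

run_editor() {
  # The long invocation the wrapper exists to hide: DISPLAY variable,
  # shared X11 socket, and the current directory mounted as the workspace.
  cmd="docker run --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $PWD:/workspace $IMAGE"
  if [ "${DCODE_EXEC:-0}" = "1" ]; then
    eval "$cmd"
  else
    echo "$cmd"
  fi
}

run_editor
```

Multiply that by every tool you containerize, and the wrapper collection becomes its own little project to maintain.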

I really don’t get it. Especially not with a lot of people still on MacOS, where docker<->host file sharing performance is abysmal. At work, some people tried to work-around by having a separate docker-sync container which somehow is faster, and they managed to wrap this:

cd <source root>; asdf install && npm install && npx ember server

into a Dockerfile, two docker-compose files, and a couple of hundred lines of shell scripting to have a user-friendly wrapper around the various invocations for Ember (with or without SSL, as a test server, …).

I’m just baffled :wink:

(I use Chef to install the basics - git, asdf-vm, Emacs, some dotfiles and a global ~/.tool-versions - it runs asdf install initially which takes ages, especially on a fresh Pi install, but then everything is just there).


The worst thing is that all of that gets statically linked into your final container image.

There used to be a time that when you had a very severe C library bug, all you needed to do was update it on your hosts, reboot, done. People in the '80s and '90s worked really hard to add dynamic linking, etcetera, to Unix. Precisely to enable this level of system management.

Now, twenty years later, people actively promote containerized microservices, which are exactly the sort of statically linked executables that were so hard to manage and that prompted dynamic linking in the first place.

Now, you run your 50-microservice PaaS with a couple of thousand instances, and a CVE lands on glibc: everybody has to rebuild and redeploy everything.

It’s beyond stupid, frankly, and I’m waiting for the day that Docker will sport “dynamic container linking”, so a perfectly fine wheel gets reinvented in the most horrendously complex way imaginable, because that’s how Silicon Valley (where, alas, too much of our tooling gets built) rolls.


GUI tools using docker:


Docker + Compose in development was a game changer for us. It used to take, at minimum, an entire day for new hires to get all the dependencies they needed to get a project working. The more junior the hire the longer it took. Now it only takes 15 min. Most people forget they’re in a container after a while so the experience is mostly seamless. This setup also makes jumping around to different projects a breeze.

I know this is one of those hot button topics for developers. I’ve been called many things for advocating this setup. However, I can’t argue with results. It’s worked wonders for us.


How big is your team? I think that makes a big difference in whether this investment in standardized development infrastructure and tooling makes sense.

It took me zero time in a previous job, where we used Boxen to configure development laptops. After the initial run, all the code was checked out, compiled, and ready to roll. Automating this on Linux, of course, would’ve been even easier, probably just a single apt package.

I do actually get the whole docker-compose thing for development - almost more than for deployment - but there are too many open problems. Like people insisting on using macOS for development, where Docker is super slow (if you want to - which I hope you do - have your code still sit on your host’s filesystem so you can use all your tools against it), and package management varies between non-existent and broken. And then there’s the whole thing of having to wrap very complicated container invocations in scripts, although if you can use the docker-compose hammer, everything becomes a bit simpler.

Where I like docker-compose in development the most is when a system has a ton of dependencies that must be there to do development. Our Rails app wants MySQL, Memcached, and a handful of service stubs, and wrapping those in a single docker-compose file makes sense. Stuffing the Rails app in there as well? Not so much, in my experience; it is either too slow (macOS) or not needed (Linux).
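
For that dependency-only use, the whole setup can be one short file; a sketch (the service versions and password are illustrative, not our actual config):

```yaml
# docker-compose.yml - only the app's dependencies; the app itself stays on the host.
version: "3"
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: dev_only_password
    ports:
      - "3306:3306"
  memcached:
    image: memcached:1.5
    ports:
      - "11211:11211"
```

A `docker-compose up -d` and the Rails app on the host just talks to localhost:3306 and localhost:11211.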

To me, it almost smells like Docker and Compose are seen as silver bullets, but they’re just easier, not necessarily better, tools - and if you think you couldn’t do this before, you probably haven’t looked hard enough. That’s probably my biggest gripe: when people discover the Docker hammer, they refuse to acknowledge that there’s anything but nails.

[And yes, I might be seen as ranting on macOS. Developers choosing macOS seems to me a big reason that things are hard to set up - no package management, a whimsical vendor that constantly breaks things, a reasonable idea - brew - that I think got out of hand and also constantly breaks things, and super-slow Docker integration. If that is your chosen development platform, you do indeed need to reach for work-arounds to have any chance at a decently reproducible development environment. Interestingly enough, the work-around is called “we use Linux for development but we hide it in containers so that we don’t have to admit it” or something like that. I just took it to its logical conclusion and installed Linux on my Macbooks, and suddenly my dev env became stable, predictable, and repeatable… ;-)]


We don’t have a large team, but we do have any number of contractors coming in and out. So getting them up to speed as quickly as possible is really important for us.

It took me a weekend to get the basics of Docker and Compose down and I rolled it out for a project the next Monday. Was able to get everybody on the project productive in a couple hours. I also put some basic images together we use for every project. So there was the time it took me to make those but it was probably 30 min. to an hour each. Of course, there’s the ongoing learning but as long as people are spending their time working on their projects and not fiddling with their environment I’m happy.

I imagine we’ve saved several thousand man hours over the last two years.

Every team is different and what we did may not make sense for everyone. This is just my experience.


I am a huge and strong advocate of not using a Mac to develop software that will run under Linux.

Putting it in another perspective… Always strive to use the native platform for development and production.

Now, why I am a strong advocate of using Docker:

  • Parity between environments, like development, staging, and production.
  • The same exact environment across all developers’ machines.
  • The same environment used in the CI pipeline.
  • Immutable infrastructure, meaning it is reproducible as many times as you want.

Some additional benefits I like:

  • I can play with as many versions of a piece of software as I want without needing to install them on the host.
  • My OS is always in pristine condition, almost as on the day I installed it from scratch, making upgrades a breeze.
  • The same development environment at home and at work is just a few key presses away.
  • Security, since I am reducing the attack surface on my host… Yes, I know some claim that Docker cannot guarantee 100% that container isolation is never violated, but it is still some more layers for an attacker to overcome.

GUI’s I run from Docker during development:

  • Sublime Text 3
  • VSCode
  • Android Studio 3 with the Emulator.
  • Postman
  • MySQL Workbench

Plus Docker Stacks for any programming language I want to code in.

Any new technology always has some people who simply are not open-minded enough to give it a chance and set aside their initial bias, while others just had a bad experience and then became strong advocates against it.

Without open minds and evolution, the wheel would never have been invented, man would never have arrived on the moon, and I could keep going :wink:

The problem then of course is, how do you invoke that dockerized IDE?

No problem at all. Take a look: IntelliJ is running in the browser; it was started via a right-click menu there (Openbox, see config here). The Docker environment must be running, of course.

Plus, you can work from home in a browser with your dev environment running at work.