Deploy Docker to DigitalOcean

I have followed this tutorial successfully, but I have no idea how to deploy the resulting container to DigitalOcean.

Please help.


Try creating a droplet with the CoreOS template in DigitalOcean.

CoreOS is an operating system specialised for running containers, so Docker is supported out of the box.

I have not read the gist, so I cannot tell you the exact docker command, but if the gist has it then you should only need to run it in the droplet shell. This assumes the gist covers how to create the Docker image and how to upload it to a private registry.
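For illustration only, assuming the image is already pushed to a private registry (every name below is a placeholder, not something from the tutorial), running it on a CoreOS droplet would look roughly like this:

```shell
# All names here are placeholders -- substitute your own registry,
# image, user and droplet details.
ssh core@your-droplet-ip

# on the droplet: authenticate against the private registry and pull the image
docker login registry.example.com
docker pull registry.example.com/your-user/your-app:latest

# run the container, mapping the app's listen port to the host
docker run -d --name your_app -p 80:4000 \
  registry.example.com/your-user/your-app:latest
```

The port mapping (`80:4000`) assumes a Phoenix-style app listening on 4000; adjust it to whatever your release actually binds.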

If not, you can always use the GitLab private Container Registry, and if you want to go further you can use their Auto DevOps for automated deployments.


I created an Ubuntu droplet and then added my code via git, and it worked. I asked this question because I am not sure if that is the best approach.

Thanks for your answer.

Hi, author of the tutorial here, glad you enjoyed the guide! I actually use this strategy on my own $20 DigitalOcean droplet. If you have a lower tier droplet, you might not be able to build the release on the server since Distillery needs a fair bit of memory to build. That is why the guide specifically allows building the release on your development (local) machine and then running the resulting release on the actual deployment machine.

So, you have two options:

  1. Build and run the release on your droplet
  2. Build the release tarball on your local machine, then build the runnable container on the droplet and run it there

Either way, I didn’t really write up anything on moving the resulting container/image to your droplet. That’s probably possible, but I’m not well versed in it.

The way I do it is that I just build the release and immediately run it on the droplet itself, because mine can handle it. So I just git pull the code onto the droplet, run the release build (a tarball will be generated in the _build folder), and then run docker-compose up to start the app! Basically, that’s how I do it, and it looks like that’s what you tried also. Really, it’s fine for a basic setup. Again, Docker definitely has ways of pushing the built image up from your dev machine and then pulling it down onto the droplet to deploy it, but I haven’t looked into it much yet.
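A rough sketch of that on-droplet workflow, assuming the setup from the guide (the repository URL is a placeholder, and the exact release command depends on your Distillery version, so treat these as illustrative):

```shell
# placeholders throughout -- adjust repo, paths and release command to your app
git clone https://github.com/your-user/your-app.git && cd your-app
# (on later deploys, just `git pull` instead)

# build the release; the tarball lands under _build/
# (older Distillery used `mix release`, Distillery 2.x uses `mix distillery.release`)
MIX_ENV=prod mix release

# start the app as described in the guide
docker-compose up -d
```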

Let me know if you have any questions! :slight_smile:


Using Ubuntu or any other normal Linux distro is not the best approach; once you are running containers you want to use an operating system built specifically for that… I suggested CoreOS because it is very easy to deploy on DigitalOcean.

Another alternative is SmartOS, but I am not sure if DigitalOcean already has templates to build droplets for it.

Doing this just for fun is OK, but it should be avoided for real production use cases.

I will definitely read your approach and see how it differs from mine, so that I can learn and improve my knowledge about building Elixir releases.

My Current Approach

When developing in Elixir I use a 100% Docker workflow with this Elixir Docker Stack.

When I want to deploy to my server I just build the release with this Docker image on my laptop and then push the Docker image to the GitLab private Container Registry. Afterwards, on my CoreOS server I just use a Docker Compose file to run the Elixir app, with volumes mapped to the host for secrets and persistence.
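As an illustration of that last step, a minimal Compose file with host volume mappings might look like the following. The image name, port and host paths are all assumptions for the sketch, not the actual stack:

```shell
# write a minimal docker-compose.yml; every name below is a placeholder
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: registry.gitlab.com/your-user/your-app:latest
    ports:
      - "80:4000"
    volumes:
      - /home/core/secrets:/run/secrets:ro   # secrets kept on the host
      - /home/core/data:/app/data            # persistent state
    restart: unless-stopped
EOF

# then, on the CoreOS server:
# docker-compose up -d
```

Mapping secrets read-only (`:ro`) is a small safety measure so the container cannot modify them.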

For now this Elixir Docker Stack is a work in progress and my releases are not for real production use cases; they are just for fun and learning purposes.

My Future Approach

My future goal is to build and release from a GitLab CI pipeline using the GitLab Container Registry, and maybe their Auto DevOps workflow, which will include the use of Kubernetes.
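For reference, a minimal `.gitlab-ci.yml` along those lines could look like this. The job names and the deploy step are made up for illustration (Auto DevOps would generate something far more complete); the `CI_REGISTRY_*` variables are predefined by GitLab CI:

```yaml
# illustrative sketch only -- job names, server address and scripts are placeholders
stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

deploy:
  stage: deploy
  script:
    - ssh core@your-server 'docker-compose pull && docker-compose up -d'
  only:
    - master
```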

For now my priority is to learn to code properly in Elixir and leverage the OTP way of doing things, so I will leave this auto-deployment approach for later. But I am keen to learn more about deploying Elixir apps, so I hope more developers jump in and share their way of doing it.


SmartOS is more like a hypervisor than an OS, so there is little chance DigitalOcean would ever be able to support it.

thus I hope that more developers jump in and share their way of doing it.

Currently, I build the release in either docker or vagrant for several os/cpu architectures, then push it to cloud storage, then spin up a vm in any cloud provider via terraform with a shell script which downloads the release and starts it up. Later, I’ll start building the releases in the cloud to avoid running vagrant and docker on my laptop.

I used kubernetes for a project and found it way too complicated for my current use cases, so I tend to just use either erlang’s distribution mechanisms or lasp/partisan over tls. So the life-cycle of my deployment is like this:

  1. Build the release for some specific os / cpu architecture.
  2. Push the release to some cloud storage or cdn
  3. Spin up a cloud vm on any cloud provider (currently, I mostly use scaleway, digitalocean, and google’s preemptible boxes)
  4. Download the release for the os / cpu architecture of the cloud vm.
  5. Start the release.
  6. Once the release is started, it connects to the existing mesh of nodes (of the same app version) over tls.

One big downside of kubernetes for me is that it seems to be designed to work within a single datacenter, whereas I usually deploy over several.

I know about the existence of SmartOS but have not had time to read much about it or even try it… Good to know it cannot be used in the same fashion as CoreOS.

I think I will never go down this path… currently I am doing it on my laptop just for learning purposes, but afterwards I will do it in a CI pipeline.

Definitely, Kubernetes is not tailored for simple use cases and pet apps, except perhaps as a learning playground for later real usage in more complicated deployments where Kubernetes may shine.

It is possible to deploy to multiple zones in the cloud, please see

after will do it in a CI pipeline.

I meant that homogeneous apps like mine (which can use only erlang) don’t really have any need for kubernetes and/or docker. Also, I don’t see the need for a consensus store (kubernetes uses etcd) in my deployments, and I am a bit afraid that it would only slow down the system when I start adding more data centers.

It is possible to deploy to multiple zones in the cloud, please see

Yeah, I’ve seen that, but that’s much more complicated than my current approach (with client-server partisan topology) and wouldn’t actually work for my use-case (this approach would only work for multiple zones, not multiple datacenters). For multiple datacenters, I’d need to use etcd or consul to connect the different kubernetes clusters, which would be much slower than just using partisan over tls.

Don’t get me wrong, kubernetes is fine, it’s just not as good as a tailored solution, which is, thankfully, quite possible and not that difficult with erlang. Setting up my current pipeline (and learning a bit about distributed computing along the way) took me less time than to learn how to properly use and setup a single kubernetes cluster.

No I don’t… Kubernetes is still a goal to achieve in my DevOps aspirations :wink:

I believe that for Elixir/Erlang the use of Kubernetes may not be the best solution for the majority of us who do not have large applications, and that the current solutions in the OTP ecosystem will suffice. After all, the BEAM supports orders of magnitude more workload than other traditional VMs, thus delaying the need for complex deployment systems :slight_smile: