What do you need from a deployment tool?

From @bcardarella’s video it’s pretty obvious the current state of things is a problem for them and their clients, so I would guess their goal would be to address those pain points. Resume-driven development aside, the vast majority of projects do not need Mesos, K8s, Terraform, etc.; to me it often looks like people building crazy big-data setups to deal with 50GB of data. I don’t have stats on hand, but I would bet that most projects are deployed to 1-4 fixed boxes/instances, so making even that simple case dead simple out of the box would go a long way toward improving adoption.


No useful suggestions from me; however, I’m interested in working on something like this, though I’m not available full time. If whoever picks up this project gets in touch, I’ll help with some tasks, as much as I can…


Distillery + a fairly simple (multi-stage) Dockerfile is pretty simple to set up for Kubernetes et al.; it is ‘bare metal’, or at least ‘bare VM’, where the pain point is (BTW I really appreciate the work @bitwalker has done on Distillery).
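For illustration, a multi-stage build for a Distillery release might look roughly like this (the Elixir version, app name `my_app`, and the Distillery 1.x-style `mix release` invocation are all placeholders; adjust for your setup):

```dockerfile
# Build stage: compile the release inside an Elixir image
FROM elixir:1.7-alpine AS build
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN mix release --env=prod

# Runtime stage: copy only the built release into a slim image
FROM alpine:3.9
RUN apk add --no-cache bash openssl
COPY --from=build /app/_build/prod/rel/my_app /app
CMD ["/app/bin/my_app", "foreground"]
```

The key point is that the runtime image doesn’t need Elixir or Erlang installed at all, since the release bundles the runtime.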

I used Distillery + CloudFormation + edeliver + asdf to get some EC2 instances up for building and running (before we adopted K8S), but the need for a separate, target-architecture build machine, and understanding Erlang’s build-time configuration, were problematic for me when going from running everything with mix or iex to stand-alone deployment for the first time.

Personally, I think the run-time configuration thing needs a bit more thought: if we can have straightforward, easy to understand, and standardised support for 12-factor config, that would go some way toward solving the ‘production leap’ that faces an Elixir adopter.

I was hoping that the {:system, var} tuple was going to end up being a standard, since it is simple, and understandable, but there seems to be a reaction against it now - mostly because it isn’t, actually, standard enough! (the other point on there, supporting other types of config providers, is solvable by making the thing that interprets the tuple pluggable).
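To make the “pluggable interpreter” idea concrete, here is a minimal sketch (the `Config.Resolver` module name and shape are hypothetical, not an existing API):

```elixir
defmodule Config.Resolver do
  @moduledoc """
  Illustrative resolver for {:system, var} style config tuples.
  A pluggable design would let this module be swapped for one that
  reads from Vault, etcd, files, etc.
  """

  # Resolve {:system, var} from the OS environment at runtime
  def resolve({:system, var}), do: System.get_env(var)

  # Variant with a fallback default when the variable is unset
  def resolve({:system, var, default}), do: System.get_env(var) || default

  # Plain values pass through untouched
  def resolve(value), do: value
end

# A library would then read config via the resolver instead of raw:
#   Application.get_env(:my_app, :port) |> Config.Resolver.resolve()
```

Making the resolver itself configurable is what would address the “other config providers” objection.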

Mix’s config.exs is a great bridge to the Erlang way of doing config things, but I feel Elixir needs a generally accepted, Elixir-native way of doing config things.



What I’ve always wanted is to be able to just give it an anonymous function that returns whatever you want (optionally cached on first call into the Application, of course). Then you could do whatever: look up the system environment, ask the database, perform a remote call to a Cthulhu dimension, whatever.
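A rough sketch of what that could look like (the `MyApp.Config` helper is hypothetical; note that while the application env can hold functions when running under Mix, a release’s sys.config cannot serialise them, which is part of why this isn’t standard today):

```elixir
# In config.exs, a value can be a zero-arity function:
#
#   config :my_app, :database_url, fn ->
#     System.get_env("DATABASE_URL")
#   end

defmodule MyApp.Config do
  @doc """
  Fetches a config value, calling it if it is a function so the
  actual lookup happens lazily at runtime.
  """
  def get(key) do
    case Application.get_env(:my_app, key) do
      fun when is_function(fun, 0) -> fun.()
      value -> value
    end
  end
end
```

Caching on first call could be layered on by writing the resolved value back with `Application.put_env/3`.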


Hm… For Phoenix, a default Ansible playbook that uses Distillery, with support for PostgreSQL and Let’s Encrypt, on top of raw Cowboy or Cowboy behind Nginx as a reverse proxy. Ansible is not the most portable thing, but it gets the job done. For other applications I don’t know enough to suggest any ideas.


I actually do like the direction that Michał suggested for libraries!

When working on my own deployments and researching my recent blog post, I really learned to appreciate this approach. I think it’s a great idea to have this kind of configuration live where it’s being used.

With Ecto and Phoenix now both offering callback configuration, I really hope this will be adopted as the new standard by more and more applications. I will certainly implement it for the next major release of Belt.
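For reference, the callback style in Ecto looks like this: a repo can define an `init/2` callback that adjusts its configuration at start-up, e.g. pulling values from environment variables (the `MyApp.Repo` module name is illustrative; Phoenix endpoints support an analogous `init/2`):

```elixir
defmodule MyApp.Repo do
  use Ecto.Repo, otp_app: :my_app

  # Invoked when the repo starts; whatever keyword list is returned
  # becomes the effective runtime configuration.
  def init(_context, config) do
    {:ok, Keyword.put(config, :url, System.get_env("DATABASE_URL"))}
  end
end
```

This keeps the runtime lookup next to the thing being configured, rather than in a central config file.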


I have my own personal library of macros that call into a Cthulhu dimension at compile time for some computationally intensive tasks, which makes them useful to detect race conditions and solve the halting problem on my code. The BEAM philosophy of “let it crash” is a boon for those like me who work with unstable and flaky inter-dimensional connections, and the power of Elixir macros allows for easy coding of these zero cost abstractions to check for correctness at compile time. Actually, I had to sign an EULA with Cthulhu, which I didn’t actually read, but it’s probably ok: “Something, something debt something something countless primeval aeons something something repaid in blood”, didn’t really pay attention.

Non-determinism is a problem, though: when working with dimensions that don’t respect the linear flow of time, it’s hard to guarantee a stable order of events, which is a challenge on the BEAM. I’ve heard Javascript would probably shine in this space, but I really don’t want to depend on node.

I’d put them up on GitHub, but then I’d have to sacrifice some goats to Azatoth, or Facebook owns my code and possibly my soul (don’t know, didn’t really read the license). Whatever.


Call-backs are certainly flexible, but they just shift the problem onto the library user: I still have to work out how to load the config from the environment, and now I have to implement the particular call-back for every library I’m using, rather than do it centrally. It’s ‘I surrender: sort it out yourself’ :wink:

When I go to an unfamiliar application, one of the things I ask is “where’s the config?”. If it’s in a config.exs file, even with funny tuples, I can immediately see where it’s at; if it’s spread across several call-back modules I now need to hunt for those, and there’s no particular pattern to search for.

On the other hand, if I can find the call-back, at least I can be sure what config it is actually using, rather than there being a possible disconnect between what’s configured and what’s actually used, which can make the configuration bloated and fragile.

Thus I don’t think we’re there yet with either solution.

Here’s a thought: in a Phoenix project you can run mix phx.routes and see all the routes and the modules they are implemented in; imagine if you could run mix config and see which modules need what config?
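Such a task doesn’t exist, but a crude approximation is possible today by walking the loaded applications and printing their app-env keys (this is purely a sketch of the idea; the hypothetical task only sees what is declared in config, not which modules actually read it):

```elixir
defmodule Mix.Tasks.Config do
  use Mix.Task

  @shortdoc "Lists application-env keys per loaded application"
  def run(_args) do
    # Ensure compiled code paths are loaded, so deps' apps are visible
    Mix.Task.run("loadpaths")

    for {app, _desc, _vsn} <- Application.loaded_applications() do
      case Application.get_all_env(app) do
        [] ->
          :ok

        env ->
          Mix.shell().info("#{app}:")
          for {key, _val} <- env, do: Mix.shell().info("  #{inspect(key)}")
      end
    end
  end
end
```

Mapping keys back to the modules that consume them is the genuinely hard part, and is exactly what the callback style would make discoverable.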


This is good for static configuration. And I’m totally with you that visibility is key in that regard. But there’s also dynamic configuration, like if you want to start a custom Ecto repo based on user input (imagine a DB management tool). That’s where the runtime configuration aspect comes into play. In such a case, callbacks are not shifting the problem to the library user; they actually enable this type of application.
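Hedged sketch of that “DB management tool” scenario, using Ecto 3’s dynamic-repo support (the `DbTool` module and connection details are illustrative):

```elixir
defmodule DbTool do
  @doc """
  Starts another instance of MyApp.Repo against a user-supplied URL
  and routes this process's Repo calls to it.
  """
  def connect(url) do
    # `name: nil` starts an unnamed instance; we get back its pid
    {:ok, pid} = MyApp.Repo.start_link(name: nil, url: url)

    # Subsequent MyApp.Repo calls in this process hit the new instance
    MyApp.Repo.put_dynamic_repo(pid)
    {:ok, pid}
  end
end
```

No amount of compile-time config.exs can express “connect to whatever the user just typed in”, which is the point being made here.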


I fall into the category of people who deploy services to single machines using Ansible. If there is an official “deploy” tool, I would hope/expect that it doesn’t impose any extra burden and just exposes primitives that I can choose to use or not.

Certainly no dependencies, certainly no assumptions about Docker or any other VM being available, certainly no bash scripts (Windows!). It should depend only on pure Elixir/Erlang.

So far, Distillery works fine for me: it creates a release which I can then copy to the remote server. I still have to write a systemd unit, but there is documentation for that and it’s simple to do. It also tries to support Windows (which I don’t care about, but good to know).
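For anyone following the same path, a typical systemd unit for a Distillery release looks something like this (paths, the user, and the app name `my_app` are all placeholders; `foreground` is Distillery’s run-in-foreground command, which is what systemd expects for a simple service):

```ini
[Unit]
Description=my_app release
After=network.target

[Service]
Type=simple
User=my_app
WorkingDirectory=/opt/my_app
ExecStart=/opt/my_app/bin/my_app foreground
Restart=on-failure
Environment=PORT=4000

[Install]
WantedBy=multi-user.target
```

Drop it in `/etc/systemd/system/`, then `systemctl enable --now` it.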


Reminded me of this :smiley:

On the topic, I mostly want a configuration convention that everyone uses.


The Elixir build tool problem? Or something with a broader scope?


I’m in agreement with @DianaOlympos, @talentdeficit, @wmnnd, and @bitwalker here. Creating a new “deployment tool” specific to the Elixir ecosystem seems like it’s a solution in need of a problem.

When I first started using Elixir, I often heard that deployment was an issue, and I started repeating that as if it was a mantra. The company I worked with at the time started by deploying to Heroku, and that worked well for our purposes. I moved companies, and my current company was also deploying Elixir to Heroku. As our needs grew, I learned how to build releases with Distillery. I put together a deployment pipeline using non-Elixir tools and started shipping releases from our build servers through to staging and production.

And now I question what this “deployment is an issue” comment really means.

Right now my team uses AWS CodeDeploy for deployment purposes. A release artifact (built using Distillery) is bundled with rules for CodeDeploy’s engine. The bundle is pushed to S3 from our build server and registered with CodeDeploy. Our current setup requires manual triggering of a deploy, and we currently use the CodeDeploy UI for that. We utilize the same bundle for both staging and production. Our system loads as much of its configuration as possible at runtime via environment variables.

I don’t solve deployment using Elixir focused tooling. I use generic tooling because it’s a generic problem. I could use the same tools now with Go, Java, C#, etc.

I could also build it into an AMI using Packer, seal it into a Docker image, scp it to a fleet of servers using Ansible, bundle it into a .deb…there’s a list of generic tools that solve these issues.

What I think we need is more documentation around best practices for building releases, managing runtime configuration, integrating with existing infrastructure tooling, and incorporating operating concerns into our codebases and executables. That’s what I’m picking up from this conversation and other conversations I’ve had in the community.

From what I can tell, people want/need:

  • A clearer picture of why to use releases and why running in production via mix is not best practice
  • Guidance on runtime configuration management (and particularly helping package authors write configuration-system agnostic packages)
  • More information about taking a release and running it on Ubuntu/Amazon Linux/Docker/K8s/the back of a turtle; this includes how to publish the release to the unit, write startup scripts, redirect logs to files or services, manage starting/stopping the release
  • Expanded guides on building operations logic into the codebase and releases (e.g., performing Ecto migrations without mix, and other things one normally would have made a rake task or the like for)
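On that last point, the commonly documented pattern for running Ecto migrations from a release without mix is a small task module invoked via the release’s command support (module and app names here are placeholders; `Ecto.Migrator.with_repo/2` is available in recent ecto_sql versions):

```elixir
defmodule MyApp.ReleaseTasks do
  @app :my_app

  @doc "Runs all pending migrations; callable from a release, no mix needed."
  def migrate do
    # Load the app so its modules and config are available
    Application.load(@app)
    {:ok, _} = Application.ensure_all_started(:ecto_sql)

    # Start the repo just long enough to run the migrations
    {:ok, _, _} =
      Ecto.Migrator.with_repo(MyApp.Repo, fn repo ->
        Ecto.Migrator.run(repo, :up, all: true)
      end)
  end
end
```

Exactly this kind of recipe is what the expanded guides would collect in one place.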

If someone asks you, “Now that I’ve built it, how do I launch it?” they’re looking for instructions. That doesn’t mean another tool is necessary.


I’m not sure that it’s a solution in need of a problem.

As an example, I have no desire to do any of what you just said to deploy my applications, hence I use Heroku and put up with the cost and other down-sides.

If there was a tool that improved either the costs or the down-sides of Heroku, I’d definitely be interested in that vs rolling my own deployment system. I can’t help but feel that there are others like me with no devops experience or desire to learn more devops who would be interested in something like this.


@benwalks But how would you improve the costs or down-sides that $PaaS might have with a different deploy tool? An Elixir-specific deploy tool wouldn’t administer your Linux server and manage your database services for you.


I guess I was referring to the fact that I can git push heroku master my application up to Heroku on a whim.


I don’t think anyone is arguing that deployment is easy or trivial. I just don’t think an Elixir-native tool is going to be any easier to use than tools like Ansible, Terraform, Chef, etc. These tools aren’t bad because they are written in non-Elixir languages; they are bad because deployment is a particularly hard problem to solve in a general way.


But in this case, git is your deploy tool. Following what @wmnnd was saying, you’re looking for a Platform-as-a-Service that is tailored to the needs of Elixir applications. Something like that won’t necessarily be helped by a new open-source Elixir deploy tool. It sounds like you want @jesse’s Gigalixir or something similar.


A tool that makes a basic case easy would not be a bad thing to have.


What are the specifications of the basic case you are referring to?
