Distillery 2.0 has been released

Here’s a quick overview of the major changes:

  • A solution to the problem of runtime configuration
  • Improved experience around hot upgrades/downgrades, namely better support for custom appups and programmatically modifying them
  • Out of the box support for generating PID files
  • Better and more consistent primitives for custom commands and hooks
  • Improved errors and better feedback from the CLI
  • Major improvements to the documentation: new guides, better organization, searchable docs, and more

I’ll keep an eye on this thread, in case there are any questions :slight_smile:


I’m already using v2.0.2 with edeliver. The config provider is great for loading runtime configuration, a good idea.

Thanks @bitwalker


For someone whose only experience with deployment is git push heroku master, what is your recommendation for how to go from zero to “perfectly comfortable deploying a production app, replete with staging/testing/prod/etc. environments, and understanding the benefits and tradeoffs of terraform/ansible/docker/kubernetes with respect to real-world elixir deployments?” I suppose I’m not alone in trying to find the sweet spot between “saves me time,” “can fit the whole thing in my head,” “powerful,” and “when something goes wrong, setup is standard enough to find help when I need it.” How do y’all do deployments at DockYard?

Thank you so much for your work on Distillery! <3


You may be interested in a guide I wrote as part of the new docs in 2.0, Deploying to AWS. It pretty much walks you through setting up your own “Heroku in AWS” architecture, i.e. you can git push origin master and have changes rolled out to production automatically. With some minor adjustments you can tweak the architecture to support staging + production, with a manual approval step for deploying staging to prod, though I haven’t covered that yet in the guide - but it’s pretty straightforward once you have some familiarity with CodePipeline and CloudFormation.

what is your recommendation for how to go from zero to “perfectly comfortable deploying a production app, replete with staging/testing/prod/etc. environments, and understanding the benefits and tradeoffs of terraform/ansible/docker/kubernetes with respect to real-world elixir deployments?

That’s a huge question haha. It depends entirely on your comfort level with ops tasks, e.g. spinning up new infrastructure, configuring it, etc. If your only experience with operations is deploying to Heroku, there is a lot to learn before you will feel comfortable with owning the whole stack - even ignoring setups using stuff like Kubernetes/Mesos/etc.

Even though it’s a lot to learn, you don’t have to learn it all at once to be able to deploy things. If you can swing it, I would get budget to be able to experiment in AWS with different services - try things out and see what works and what doesn’t. Even if you can’t get budget from your company, there is a lot you can do just with the AWS Free Tier - it is a bit more limited than what you can do otherwise, but you can experiment with a lot of things. I had a lot of ops experience already before I ever touched AWS, but I got proficient with AWS by doing the above - just experimenting with the free tier. The bottom line is that you start with doing things by hand, spinning up servers with the config you need, deploying your app - once you’re comfortable at that level, it’s really all about automation: how do you take the slow or error-prone or repetitive bits, and have code do the work for you? This is ultimately what all of the tools you listed are about.

I’m personally not a fan of Terraform, I use it when I have to, so I won’t say much about that - I would recommend using either the cloud provider’s tools (e.g. CloudFormation), or something like Salt/Ansible/etc. Speaking of Salt/Ansible and so on, as far as moving from doing everything by hand, to automating things, they are a great set of tools to get some easy wins; they are basically automating the exact same things you’d do by hand, so they “fit in your head” fairly easily.

Docker (and more generally containers) and the orchestration tools you use with them, are intended to solve a few problems, one of which is deploying apps with different (sometimes conflicting) software requirements/dependencies to some set of machines, and using the resources of those machines in the most effective way possible. They automate failure recovery and scaling, by spinning up new containers in response to crashes or metrics respectively. By abstracting the resources the containers run on, you can add more container hosts transparently, move containers to other hosts, etc. This level of abstraction has benefits when you are operating at scale, or operating multiple applications/services with different teams and requirements. Docker of course is also useful as a development tool, since you can spin up an application which is identical to how it will run in production (sans config differences), and is one of the reasons why using Docker in production is so popular - ideally, no more “but it worked on my machine!” (of course there are still ways this falls apart, but I digress).

So, all of that to say, my recommendation is to start simple, and work your way into “fancier” setups as you need to. As you gain familiarity with different parts of the infrastructure stack, learn how to automate what you can, and you’ll find that the various tools to assist in that automation will make a lot more sense along the way. The tradeoffs will become much clearer when you have specific goals you are trying to achieve. Trying to start with something like Kubernetes without understanding what goes on underneath it, and what it is trying to solve for you, is going to end in a lot of pain, in my opinion anyway. As far as Elixir goes, it can be deployed anywhere really, no specific infrastructure is going to be “better” for Elixir than another, that comparison can only be made in terms of the needs of your application, or the costs involved, including the time required to maintain the infra.

At DockYard, we use releases, but as far as infrastructure goes, it varies from project to project, depending on client needs and the needs of the specific project. Not very helpful, I know :stuck_out_tongue:


Congrats @bitwalker (and whomever else contributed!). This is a huge release and it looks like a lot of great work.

Those docs look incredible. I’m really happy you took the time to write that up; it’s just as important as the library itself.


Congrats @bitwalker (and whomever else contributed!). This is a huge release and it looks like a lot of great work.

There were several extremely helpful, brave souls living on the edge, who helped me validate the release and gave me some great feedback - to everyone that helped, thank you so much! It’s hard to overstate how difficult it is to test all the different combinations of setups/architectures/targets/etc. that Distillery gets used with, and having people willing to try things out in their environment is a huge help in that regard, especially with some of the changes in 2.0.

Those docs look incredible. I’m really happy you took the time to write that up; it’s just as important as the library itself.

Thanks! I attribute a lot of that to MkDocs being awesome haha. A lot of contributions went into the new docs as well. I finally had time to really rework a lot of the documentation, and the MkDocs feature set makes it a lot easier to organize and call out important information.


Yes, the documentation is unbelievable. It also contains several practical guides. Maintaining such wonderful documentation is hard work.

I’d be curious what made you not use ExDocs though?


Great news! Thanks for all the hard work!

I have a question about this part:

Your application should be designed to receive configuration at boot, read it from the application env, and then pass it down your supervisor tree, rather than reading directly from the application env when needed. There is nothing enforcing this rule, but config providers are specifically designed with this approach in mind, and are not intended to be used to fetch configuration dynamically once the release has booted.

Am I reading this correctly that there should be no Application.fetch_env or anything similar in code other than in the startup of the application?
Can you explain why that is?


Important announcement regarding deployment tools :slight_smile:

Great work Paul! Thank you!

Pinned the thread for a few days to help ensure it’s not missed :023:

Great work @bitwalker!!

Looking at the source and reading the docs, it looks like REPLACE_OS_VARS still works if you want to continue doing that from the system environment… but am I right in assuming the new config providers, which read runtime config from .exs, .json, .toml, .yml, :httpc, etc., are now the recommended way to do it?

Is REPLACE_OS_VARS deprecated or is it staying supported indefinitely for backward compatibility?

It’s fine for module docs, and basic markdown, but I found it insufficient for guides with more “complex” content - namely content with asides/notes/warnings/etc. It was hard to draw attention to items of importance in the docs, so things were often missed by people. The new docs have much cleaner formatting in that regard. I actually do still generate the “normal” ExDocs documentation, which you can access from the sidebar via Module Documentation :slight_smile:

Yes, that is my recommendation, and I believe the core team more or less supports that assertion as well, but I don’t want to speak for them. The reason for that is the application environment is effectively global state - using it directly in code which cares about some piece of configuration means you have tied that code to this global state. This makes testing more difficult, as you can’t test multiple configurations side-by-side, and changing that config in test setup may impact other tests. It also makes code more difficult to reason about, in the sense that something external can change the configuration at any time, so you need to be able to react to those changes correctly, or your code will likely fail.

This is why I recommend fetching all your config in the start/2 function of your application callback module, and then pushing it down your supervisor tree as parameters to the different components in the tree. Supervisors can in turn pass these parameters to workers, and they can store them in their state for use during operation. This is more or less an extension of normal functional programming practices - your processes are just functions acting on the state they are given (this of course ignores the various ways processes are otherwise stateful, but that state is generally local, and therefore easier to reason about). The less you depend on the app env (or global state in general), the easier it is to predict the behavior of the system based on the initial inputs.
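A minimal sketch of that pattern (the app name, the :port key, and MyApp.Server are all illustrative, not from Distillery itself): the env is read exactly once, in start/2, and the server only ever sees plain parameters.

```elixir
defmodule MyApp.Server do
  use GenServer

  # The server receives its config as start options; nothing in this
  # module reads the application env.
  def start_link(opts) do
    GenServer.start_link(__MODULE__, Map.new(opts), name: __MODULE__)
  end

  def port, do: GenServer.call(__MODULE__, :port)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:port, _from, state), do: {:reply, state.port, state}
end

defmodule MyApp.Application do
  use Application

  # Read the app env once at boot, then push it down the tree as parameters.
  @impl true
  def start(_type, _args) do
    port = Application.fetch_env!(:my_app, :port)

    children = [
      {MyApp.Server, port: port}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```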

If your app env is entirely static, then it’s less of a concern - but you still have the issue of testing multiple configurations; that global state will make things a lot more painful. If you can just pass in configuration to each component, it is a lot easier to test a bunch of different scenarios in parallel. Passing in configuration applies even to things like registered names - ideally you should be able to start multiple instances of what would normally be a singleton process in your tests, by giving a name where you would normally default to __MODULE__.
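To make the registered-names point concrete, here is a sketch (module and option names are made up for illustration) of a would-be singleton that accepts a :name option, so tests can run several differently-configured instances at once:

```elixir
defmodule Cache do
  use Agent

  # Accept a :name option instead of hard-coding __MODULE__, so tests can
  # start several instances of this "singleton" side by side.
  def start_link(opts) do
    {name, config} = Keyword.pop(opts, :name, __MODULE__)
    Agent.start_link(fn -> Map.new(config) end, name: name)
  end

  def get(name \\ __MODULE__, key), do: Agent.get(name, &Map.get(&1, key))
end

# Two instances with different config, running in parallel:
{:ok, _} = Cache.start_link(ttl: 60, name: :cache_a)
{:ok, _} = Cache.start_link(ttl: 300, name: :cache_b)
```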

It’s early in the morning for me and I’m wandering a bit, but I hope that clarifies the reasoning - hopefully someone will come along and clarify further :slight_smile:

It is staying for the time being, since it is still needed for vm.args, and there is the question of backwards compatibility of course. I am pursuing a path which would extend config providers to VM flags as well, in which case we could theoretically deprecate REPLACE_OS_VARS, but we’re not there yet (it is likely going to be quite some time before the pieces are all in place for that).
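For context, the vm.args case looks something like this: with REPLACE_OS_VARS=true set in the system environment, ${VAR} references are substituted from the environment at boot (the node name and cookie variable here are just illustrative):

```
## vm.args
-name my_app@${HOSTNAME}
-setcookie ${NODE_COOKIE}
```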

Config providers are the recommended way to do configuration moving forward, but you can still use REPLACE_OS_VARS as before, for the time being. The simplest migration path is to create a config.exs which sets the various config settings you need environment values for, and load it with the Elixir config provider.
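A sketch of that migration path in rel/config.exs (the app name and file paths are illustrative; the provider module is the one shipped with Distillery 2.0):

```elixir
# rel/config.exs
release :my_app do
  set version: current_version(:my_app)

  # Copy a plain config.exs into the release, then evaluate it at boot
  # with the built-in Elixir config provider.
  set overlays: [
    {:copy, "rel/runtime_config.exs", "etc/config.exs"}
  ]

  set config_providers: [
    {Mix.Releases.Config.Providers.Elixir, ["${RELEASE_ROOT_DIR}/etc/config.exs"]}
  ]
end
```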


It’s entirely clear. I agree that it’s a better practice this way. I was just wondering if there was another reason beyond it being better programming practice. Maybe I should have clarified my question better.

What if I have modules that need configuration, but do not need to be servers? I wouldn’t want to guide people towards creating GenServer just because they need configuration data at runtime. Your comments about testing are valid; the only way I’ve dealt with this is having setup and teardown code that sets and clears configuration.


Those should just use parameters: https://michal.muskala.eu/2017/07/30/configuring-elixir-libraries.html#stateless-libraries


Completely agree, it would be nice if Elixir had partial application for things like this though (oh, one can dream :slight_smile:)

That is a guide for designing libraries. In an application, the parameters have to come from somewhere.
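In an application, "somewhere" is usually the boot path described earlier: read the env once at startup and thread the result through as a plain argument. A sketch, with a made-up Mailer module standing in for any stateless, config-needing code:

```elixir
defmodule Mailer do
  # Stateless module: configuration is an explicit parameter rather than
  # something fetched from the application env inside the function.
  def deliver(%{base_url: url} = _config, to, body) do
    # A real implementation would issue an HTTP request here; this sketch
    # just returns the composed request.
    {:ok, %{to: to, body: body, url: url <> "/messages"}}
  end
end

# At boot (e.g. in your application's start/2) you would read the env once,
# then pass the resulting config wherever it is needed:
config = %{base_url: "https://api.example.com"}
{:ok, request} = Mailer.deliver(config, "user@example.com", "hello")
```

In tests, each scenario just builds its own config map and passes it in; no app-env setup or teardown is needed.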