Phoenix deployment like an uberjar

Recently I programmed in Clojure again after some time and it struck me how incredibly simple the deployment process is.

I used Leiningen to bootstrap the project, and all I had to do for the deployment was `lein uberjar`, copy the result to the server, and run it with `java -jar build.jar`. And this works completely independently of the target platform, as long as a Java runtime is installed (e.g. I built it on NixOS and ran it on my macOS machine).
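For reference, the whole workflow is roughly this (the project name and paths are made up for illustration):

```shell
# Build a self-contained jar on the dev machine (any OS):
lein uberjar
# Copy the single artifact to the server:
scp target/uberjar/myapp-standalone.jar server:/opt/myapp/
# Run it there; the only requirement is an installed Java runtime:
ssh server 'java -jar /opt/myapp/myapp-standalone.jar'
```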

So the question is: given a target machine with an Erlang installation, can we package up a Phoenix application in a completely platform-independent way? I don’t necessarily need clustering, a REPL on the deployed machine, or the other fancy things we can do with distillery or mix release.

I’d like a way to deploy from my local machine, or a CI, to any target without having an elaborate Docker setup and without having to deal with mismatching libc or bash paths…

Can I leverage escript to do that?
Is there a way to do it with distillery that I don’t know about?

There are a few parts to that complexity: there are the native dependencies of ERTS (Erlang) itself, and there are the native dependencies you pull in with the libraries you use. Both can have build-time and runtime dependencies.

By installing Erlang on your target system you basically take care of the ERTS part. You can build releases which do not include ERTS; these will then fall back to the one installed on the system the release is started on. Keep in mind, though, that the Erlang versions AFAIK need to match much more closely than e.g. Java versions when running .jars, but on a grand scale it’s what Java does as well. You can also build releases including ERTS for systems other than your local one.
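A minimal sketch of that route, assuming a standard Mix project named my_app (the name and paths are illustrative):

```shell
# In mix.exs, configure the release to omit ERTS, e.g.:
#   releases: [my_app: [include_erts: false]]
MIX_ENV=prod mix release
# Copy _build/prod/rel/my_app to a target whose installed OTP release
# matches the one the release was built against, then start it there:
_build/prod/rel/my_app/bin/my_app start
```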

All this doesn’t help with whatever you pull in as additional dependencies, which might run or even build native code. Common examples include the appsignal library, most of the comeonin hashing libraries, and more. If they have build-time dependencies, those need to be available wherever you build your release. If they only have runtime dependencies, then your target system needs those installed.

The last option is not running a release at all, but using mix to run the project. This basically means all of the above dependencies are shifted to the target machine, and everything needs to be built there. That’s usually exactly what you don’t want happening on machines running your service. Not having that additional (often quite big) load is one of the reasons for CI and building somewhere else in the first place.


Yeah - this is precisely what I’m currently doing for the project at hand because I was out of options…

Meanwhile I honestly think that for the stuff I’m currently doing I simply need to use other technologies because those projects simply don’t benefit from running on the BEAM (since they are rarely in need of running clustered). Hence it is not worth it to pay this deployment tax. I think this is one of those situations where you need to “use the right tool for the job”…

I simply hoped I had forgotten or overlooked something, in order to keep using what I know and love :confused:

Thanks for the input.

To be fair, the biggest part of that “deployment tax” is not inherent to the platform at all. It’s caused by dependencies you choose to use, which in turn depend on native code, which needs to be built at compile time.

The “native code built at compile time” is what causes 95% of the complexity. At that point you basically have an Elixir/Erlang dependency and a C/C++/Rust/… dependency. The latter is more complex than a pure Elixir/Erlang dependency, no matter whether it’s bundled for Elixir or for any other language. E.g. I’m using a PHP library which is implemented in C (as an extension), and I cannot use it on anything other than Linux, because it’s closed source and only distributed compiled for Linux. For Elixir/Erlang libraries, the sources are usually distributed instead of compiled artifacts, so you can build the native dependency yourself for whatever system you need it on.

There are even a few hybrid solutions: e.g. rambo distributes precompiled binaries for common systems (no runtime dependency), but also offers a way to use rustler to build the binary for any other system.

In my experience all that complexity doesn’t come up in other languages, because things get implemented completely in that language instead of via native dependencies. Do the same on the BEAM and ERTS will be the only thing you need to care about – which IMO is not much different from caring about e.g. a Java runtime.

That is certainly true. But still: the ecosystem is structured in a way that makes it hard to forgo all native dependencies (as you noted in the linked thread with the encryption libraries) and hence causes a small tax.


For the stuff I’m doing it’s simply looking less attractive, since I have wildly different deployment targets for different customers. In practice this has caused me a lot of trouble and cost lots of unbillable hours of work, which is why I hoped I was simply doing something stupid :smiley:


Don’t get me wrong: I still think that this is totally worth it for projects where you have control over every part of the stack.

This one is implemented in pure Elixir.

I also hope the upcoming JIT will help in that area. Quite a few native dependencies exist because the BEAM itself is not the fastest in places, but results like jason being as fast as jiffy with the JIT will tip that decision further in favor of skipping the native dependency.


Probably I could also try and build Erlang statically with musl and then package this up as a self contained release. Although that sounds like a lot of work :smiley:

Static compilation can cause problems with NIFs as well.

I have yet to try this, but have you looked into Bakeware? It looks super promising, and they provide phoenix examples.


Bakeware is just for bundling an otherwise functioning release in a “single executable”-like manner. It doesn’t really tackle any of the compilation complexities.


After thinking about this a little longer I think the best option for my use case would be something like building an AppImage or Flatpak, since those are self contained.

If I ever manage to produce something of value, I’ll report back :wink:

I made some very small progress in this direction. Flatpak at least seems to need some additional software on the target system (I’m not sure yet about AppImage…).

But since I’m a NixOS user I thought “maybe there is a way to pack all dependencies together with nix”, and it turns out there is:

I think that, at least in theory, it should work. The idea is to build a tarball with distillery in a nix-shell and then use that tarball to build a nix-bundle. In practice I ran into problems starting the application after bundling, because it would not find the bundled Erlang runtime. This is probably due to some configuration issue on my part, but it seems it would cost me a lot of time to get this going – time which I currently don’t have to spare. Apart from that, I’ve now traded the dependency on Docker for a dependency on Nix, and I haven’t won much in terms of maintenance work.
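For the record, the rough shape of what I tried (the names are hypothetical and the nix expression is project-specific):

```shell
# 1. Build the distillery release inside a nix-shell that pins Erlang/Elixir:
nix-shell --run 'MIX_ENV=prod mix distillery.release'
# 2. Wrap the result with nix-bundle so its runtime dependencies
#    travel along in a single self-extracting file:
nix-bundle '(import ./default.nix {})' /bin/my_app
```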

So in the end I’m back where I started: probably the easiest solution going forward is to have as many Docker base images as I have deployment targets and call it a day :slight_smile:
