How likely are we to ever have built-in cross-platform releases with Elixir?

This isn’t a critique, but rather just a genuine question.

By cross-platform releases, I mean being able to run mix release and have it spit out something that could be run on multiple platforms. As things stand now, we generally need to run mix release on an OS/architecture that matches where it'll be deployed.

I tried searching around for some solid info on whether this is an actual goal of Elixir and whether it's being worked on. The results were pretty vague, but the impression I got was that it's not impossible, but no one is really working on it and there isn't much interest in it. I'm hoping to get a cohesive answer that people can find when they search for this in the future.

I’m interested to know:

  1. What people on the core team expect for the future of cross-platform releases, and how they feel about it in general.

  2. How everyone else in the community feels about it.


In fact, it is currently supported. There are two ways to do so:

  1. Build your release with include_erts: false in the release config. However, this requires the target machine to have ERTS in exactly the same version you built with. Additionally, this can blow up if you use NIFs.
  2. Bundle the release with an ERTS for the target machine (by setting include_erts: path_to_erts_for_target). Then you need to ensure that system libraries (like OpenSSL) are present in compatible versions. Other than that, it should be fine.
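For reference, both options above map to the :include_erts setting in the release configuration in mix.exs. A minimal sketch (the app name and the ERTS path are placeholders):

```elixir
# mix.exs (release configuration sketch; names/paths are placeholders)
def project do
  [
    app: :my_app,
    version: "0.1.0",
    releases: [
      # Option 1: no bundled ERTS. The target machine must provide
      # ERTS in exactly the version the release was built with.
      my_app_slim: [include_erts: false],

      # Option 2: bundle an ERTS compiled for the target OS/architecture.
      # System libraries (e.g. OpenSSL) must still be compatible.
      my_app_cross: [include_erts: "/path/to/erts-for-target"]
    ]
  ]
end
```

Either release is then built with `MIX_ENV=prod mix release my_app_slim` (or `my_app_cross`).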

Compile Elixir applications into single, easily distributed executable binaries

It does not yet support Windows, but it sounds like even that should be doable.


It is worth mentioning that this is in fact just a self-extracting archive with a launcher, not a "single binary" per se.


I've tried this out with ERTS bundled, and I've run into issues every time I've tried it on a different OS/architecture, which is actually what started my search for more info on this. I know it's supposed to be technically possible, but it seems unreliable, in the sense that the deployment system is likely to have differences that will cause issues.


As hauleth mentioned, this doesn't actually cross-compile; you'd need to run that on a matching OS/arch. I had the same thought and asked them on GitHub issues. Their answer was no, at least not yet.

This is one of the reasons why I don't think Elixir is a great choice for products/services that you need to run on not-your-own-infrastructure as an opaque artifact, i.e. software that runs on customer-owned hardware far away from your own release management. (I'm taking that as the implication of your question about cross-platform builds, but I'd be curious if I'm way off the mark here.)

If you can't control the target OS, and you can't/won't assume a simplifying abstraction like Docker that trades the problem for a different set of more palatable prerequisites, you're kind of SOL and need to find another solution/runtime/language. Otherwise you'd need to produce end-user documentation with exacting detail, or perhaps produce more all-encompassing artifacts than a BEAM release, such as Amazon AMIs, firmware bundles to burn to SD cards, VM disk images, etc. It's an unfortunate spot to be in.


Been thinking about this more recently. I was wondering how older Erlang libraries handle this stuff. I know ejabberd can install pretty much anywhere. I'm assuming those binaries and installers are over-and-above anything Erlang offers out of the box, but I'd be curious to know what that flow looks like.

I'm also super curious about the firmware story going all the way back to Ericsson.

For some idea of the complexity involved, look at the Nerves build tooling. And they haven't solved the problem in the general case, IIRC.

I think it really boils down to a value/effort ratio. As @hauleth pointed out, in the case where you have no NIFs in your deps, it's already possible with a single configuration setting. However, if your deps have NIFs that need C, Rust, and Go, the complexity of the different compiler toolchains adds up quickly.
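As a rough way to check whether a project falls into the NIF-free case, you could scan the compiled build directory for native artifacts. A hypothetical helper (not an official Mix task; the module name and path pattern are mine):

```elixir
# Hypothetical helper: lists compiled native artifacts (.so/.dll/.dylib)
# under the build directory. Any hits mean the release is tied to the
# build machine's OS/architecture and cross-building gets much harder.
defmodule NifCheck do
  @native_exts ~w(.so .dll .dylib)

  def native_artifacts(build_path) do
    build_path
    # NIFs conventionally ship in each application's priv/ directory.
    |> Path.join("lib/*/priv/**/*")
    |> Path.wildcard()
    |> Enum.filter(&(Path.extname(&1) in @native_exts))
  end
end
```

Inside a Mix project you'd call it as `NifCheck.native_artifacts(Mix.Project.build_path())`; an empty list suggests the include_erts: false route is viable.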


I had a pretty good experience with having GitHub CI build one package per platform and ship those. It would be great to get some of that into mix release eventually (like macOS building the .dmg, Windows the zip, Linux also a zip or an AppImage). From the Erlang world, Wings3D is to my knowledge the furthest along here and has the build steps figured out (in fact, even for a desktop app!) - that's where I learned a lot from.

For my own work I would love to generalize that and ship the additional build steps as a mix dependency you can include, but I'm unsure how to get started on that, e.g. how to integrate into the existing build steps as a library…

Well, you can do it by generating a package for a given OS. There were projects to help with that (I do not know what their status is now):

You can check that approach. Alternatively, if you want to ship it as a service, you can generate a tarball that can be run by systemd-nspawn and just ignore most of the problems with runtime libraries.


I honestly do like GitHub Actions in a lot of ways, but without using self-hosted runners that cost more to operate, or an intermediary like Docker, you're limited to one distribution of Linux across three versions, one (latest) version of macOS, and two versions of Windows Server. Not a very broad cross section.


I haven't tried GitHub Actions; I'm going to give that a shot. That cross section seems like it would cover 80-ish percent of cases, which is pretty good IMO, but I see your point.

The gist I'm getting is that we shouldn't expect built-in cross-platform compilation, though. It sounds like it would be very technically difficult and brittle?

I also don't really understand how Go, for instance, does this. Do they not run into the same sorts of issues? How do they handle dependencies that need native code, OS features, etc.?

In the words of @sasajuric in his latest video, Elixir is a systems language, not a tooling language like Go. So I would expect cross-compilation to be a non-goal for a language targeting essentially backend servers and infrastructure (which you control more often than not). Go tools, on the other hand, need to run on lots of different architectures for every different user, so I guess it makes a ton of sense that they focused on such a thing.

I wouldn't say it's a non-goal, but I don't think it's a high-prio goal. While I don't have any statistically significant sample, I believe that most companies that build backends control their own environment. If the system is deployed to external infra out of the team's control, shipping Docker images is a fairly simple solution (that's what we did at my previous company). If containers are not an option, you can still use Docker on your own infra to build the image for the target OS (IIRC we did that too once), or manage a dedicated non-containerized build server. It's not perfect, but as long as the system is not shipped to a large number of clients/users with various execution envs, I don't see it as a significant problem in the grand scheme of things.


Go was built with cross-compilation in mind, and to my knowledge as a Go "outsider", cgo is a thing, though it isn't nearly as exotic as something like Rustler.

There is an amazing post on C compilation in Zig that details the work required to support compilation for various targets:


Just rewrite the BEAM in Zig! LOL. On the other hand, the BEAM has what, 11 internal allocators? So it already sort of fits the programming paradigm; they just don't know it yet.