I recently got an ARM laptop to replace my old amd64 one. This poses a problem for me. I run my production service on an amd64 VPS (Hetzner) that runs Ubuntu Server.
Thus far I’ve used a Vagrant box with Ubuntu to build a release in, then uploaded that release to my production server, unpacked it, and restarted (it’s quite manual, yes, but I don’t release often). Now, of course, due to NIFs such as Comeonin and AppSignal, I cannot build an amd64 release on my ARM machine.
How have you typically handled this? I found some earlier threads suggesting the use of Docker or building the release on GitHub Actions (I use GitLab, though).
Is this still the way to go? I have tried Burrito earlier, but I could not get it to build the release correctly (problem with NIFs).
I know nowadays I could run an ARM VM with amd64 binaries inside using Rosetta 2, but I don’t know how big of a hassle it would be.
One thought was GitLab CI; I guess I could use that to build a release from every push to some branch? Could I get it to match the Ubuntu environment closely enough that it would work? Though I’m on a free account, so I don’t know how much processing it allows me to do.
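For what it’s worth, the build commands such a CI job would run inside an Ubuntu-based image are short; this is only a hedged sketch, not a tested pipeline (the image choice and paths are assumptions):

```shell
#!/bin/sh
# Hypothetical build steps for a GitLab CI job running in an
# Ubuntu-based image matching the production server.
set -e

export MIX_ENV=prod

mix local.hex --force
mix local.rebar --force
mix deps.get --only prod
mix release

# The release ends up under _build/prod/rel/<app>/; archive that
# directory and declare it as a CI artifact to download and deploy.
```

Since the CI runner and the VPS would both be amd64 Ubuntu, matching the OTP and glibc versions closely enough is mostly a matter of picking the right base image.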
I think it uses QEMU under the hood, yes; whether it’s terribly slow I have no idea. I think that’s something you need to test for yourself.
Anyway, if it’s just for building the release, I can’t imagine that even a 2-5x slowdown is a deal breaker?
Check out https://getutm.app (which builds on QEMU) for running OSes with different architectures on Apple Silicon. But really, the easiest option is using CI, especially when building for x86, since that’s what those services tend to run on. (I use CI for x86 builds, but I actually also build releases on my Mac to deploy to ARM servers, since that takes minutes instead of 1+ hour on CI, which uses QEMU for it. To be fair, a lot of the slowness may be due to Rust NIFs rather than the Elixir build.)
When building an x86_64 Docker image, I use the Docker service mentioned above.
On my MBP 2015, the build can be done in 4 minutes. But in the emulator on my MBP 2021, the build is maybe 4x slower. (I didn’t measure it precisely; all I know is that my tea will be cold after the build.)
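For reference, forcing an amd64 build from an ARM host with Docker looks roughly like this (image names are placeholders; the slowdown comes from the QEMU emulation Docker falls back to):

```shell
# Build a linux/amd64 image on an ARM machine (emulated via QEMU):
docker buildx build --platform linux/amd64 -t myapp:amd64 .

# Or run an amd64 container directly to check that emulation works;
# uname -m inside the container should report x86_64:
docker run --platform linux/amd64 --rm ubuntu:22.04 uname -m
```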
That’s exactly the reason why I push all my builds to CI from day 1. Usually the free plans are enough for my private stuff, and for non-private stuff a few bucks a month should be doable. No need to worry about where and how I do my edits; it could even be the built-in editor of the service hosting the repo, for what it’s worth. I totally see that being a bit more work upfront, but it’s documented in code and rather flexible.
Not only is it slow, I’ve never gotten it to actually work for me. It always ends up crashing for some reason.
I’m in the camp of just letting your CI build the images (we use GitHub Actions).
Kind of an aside, but related… BuildKit lets you make multiarch images pretty easily, using a Kubernetes cluster to farm out the build jobs to machines of different architectures. My M1 is so fast, though, that I like to do the ARM build locally. I have a little bash script to help with this.
Build ARM locally, then use a BuildKit Kubernetes builder to build AMD64 and mash them together into a multiarch image.
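The actual script isn’t shown, but hypothetically that flow could be sketched like this (the registry, image names, and builder name are assumptions, not the author’s script):

```shell
#!/bin/sh
set -e
IMAGE=registry.example.com/myapp
TAG=latest

# Build the ARM image locally (fast on an M1) and push it...
docker buildx build --platform linux/arm64 \
  -t "$IMAGE:$TAG-arm64" --push .

# ...build the AMD64 image on the remote Kubernetes builder...
docker buildx build --builder k8s-builder --platform linux/amd64 \
  -t "$IMAGE:$TAG-amd64" --push .

# ...then stitch both into a single multiarch manifest.
docker buildx imagetools create -t "$IMAGE:$TAG" \
  "$IMAGE:$TAG-arm64" "$IMAGE:$TAG-amd64"
```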
Thanks everyone for the answers, I hope they are also useful for any future readers who might find this thread. I ended up building the release in GitLab CI, which was surprisingly painless. I haven’t tested deploying the artifact yet, but it looks sensible enough when extracted.
I build releases for ARM systems on an x86 build server.
I take the files for the ARM system’s ERTS, place them on the build server and then reference them in the release I am building.
Here is a snippet from the “releases” section of mix.exs (the include_erts path below is just an example location on my build server):
# Keep the docs
strip_beams: [keep: ["Docs", "Dbgi"]],
# Path to the ERTS copied from ~/.asdf/installs/erlang/24.2.1
include_erts: "/opt/build/arm-erts/24.2.1",
The only proviso is that building any NIFs or port programs becomes much more complicated. The only port programs I have are very stable, never needing changes, so I include compiled versions as binaries in priv.
I was reading how others are doing it, and here’s what I gathered.
We can use GitHub Action-like services.
We can build directly on the server and replace the instance.
Blue/green deployment using two VPSes: spin up a new VPS identical to the running one, deploy to it as if it were staging, then switch over to the newly spawned VPS (or move the artifact across) and shut down the VPS that is no longer in use.
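For the “build elsewhere, replace the instance” variants, the deploy step can be as small as a script like this (the host, user, and paths are made-up examples, not a recommendation):

```shell
#!/bin/sh
set -e
# Copy a CI-built release tarball to the VPS, unpack it into a
# fresh directory, switch a symlink, and restart (hypothetical paths).
scp myapp.tar.gz deploy@new-vps:/opt/myapp/
ssh deploy@new-vps '
  set -e
  cd /opt/myapp
  mkdir -p releases/next
  tar -xzf myapp.tar.gz -C releases/next
  ln -sfn releases/next current
  current/bin/myapp restart
'
```

With two VPSes, the same script points at the standby machine, and the final "switch" is a DNS or load-balancer change instead of a symlink.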