I recently got an ARM laptop to replace my old amd64 one. This poses a problem for me. I run my production service on an amd64 VPS (Hetzner) that runs Ubuntu Server.
Thus far I’ve used a Vagrant box with Ubuntu to build a release in, and then uploaded that release to my production server, unpacked it, and restarted (it’s quite manual, yes, but I don’t release often). Now, of course, due to NIFs such as Comeonin and AppSignal, I cannot build on ARM.
How have you typically handled this? I found some earlier threads suggesting the use of Docker or building the release on GitHub (I use GitLab though):
Is this still the way to go? I have tried Burrito earlier, but I could not get it to build the release correctly (problem with NIFs).
I know nowadays I could run an ARM VM with amd64 binaries inside using Rosetta 2, but I don’t know how big of a hassle it would be.
One thought was GitLab CI, I guess I could use that to build a release out of every push to some branch? Could I get it to match the Ubuntu environment closely enough so that it would work? Though, I’m on a free account so I don’t know how much processing it allows me to do.
I use an integration server that runs x86 and that does the build. But you could also buy a $4 x86 VM, rsync your project tree to it, and do a “mix release” - I also do that sometimes.
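The rsync-and-build flow can be sketched as a small shell function (the "buildbox" SSH alias and "my_app" release name are assumptions, not anything from this thread):

```shell
# Sketch: build on a cheap x86 VM reachable as "buildbox" (assumed SSH alias)
build_release() {
  # Copy the project over, skipping local build artifacts
  rsync -az --exclude _build --exclude deps ./ buildbox:~/app/
  # Build a prod release on the x86 machine
  ssh buildbox 'cd ~/app && mix deps.get --only prod && MIX_ENV=prod mix release'
  # Fetch the assembled release back ("my_app" is a placeholder name)
  rsync -az buildbox:~/app/_build/prod/rel/my_app/ ./release_amd64/
}
```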
Building on another server, GitLab, or GitHub is definitely an option, but what they're probably doing under the hood is building in Docker. That's what I would suggest: Docker seems like a good option.
A $4 VM would increase my monthly costs by 20%. I guess I’ll look into doing it in GitLab CI. Thanks!
I thought Docker was just a virtualization environment, and thus I couldn't run amd64 containers on ARM unless I went through something like QEMU, which would be deadly slow, right?
I think it uses QEMU behind the scenes, yes; whether it's deadly slow I have no idea. I think that's something you need to test for yourself.
Anyway, if it's just for building the release, I can't imagine that even a 2-5x slowdown is a deal breaker?
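If you want to test the QEMU route yourself, Docker can register the emulation handlers via the binfmt image; a minimal sketch:

```shell
# Sketch: enable amd64 emulation on an ARM Docker host (one-time setup)
setup_amd64_emulation() {
  # Registers QEMU binfmt handlers so amd64 images can run on this host
  docker run --privileged --rm tonistiigi/binfmt --install amd64
  # Sanity check: should print "x86_64" even on an ARM machine
  docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
}
```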
We use Hetzner too. With their CLI tool, you can deploy a new VM, install OTP, rsync your program, release, download, and shut down, and you'd incur about 15 minutes of usage total… so just a few euro cents.
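That throwaway-VM flow might look roughly like this with the hcloud CLI (the server type, image, and key name are assumptions):

```shell
# Sketch: spin up a temporary Hetzner build server, build, then delete it
hetzner_build() {
  hcloud server create --name builder --type cx22 --image ubuntu-22.04 --ssh-key my-key
  ip=$(hcloud server ip builder)
  # ...install Erlang/OTP and Elixir on the VM here, then:
  rsync -az --exclude _build ./ "root@${ip}:~/app/"
  ssh "root@${ip}" 'cd ~/app && MIX_ENV=prod mix release'
  rsync -az "root@${ip}:~/app/_build/prod/rel/" ./release_amd64/
  hcloud server delete builder
}
```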
Check out https://getutm.app (which builds on QEMU) for running OSes of different architectures on Apple Silicon. But really, the easiest is using CI for that, especially when building for x86, which those services tend to run on. (I use CI for x86 builds, but I actually also build releases on my Mac to deploy to ARM servers, since that takes minutes instead of the 1+ hour on CI, which uses QEMU for it. To be fair, a lot of the slowness may be due to Rust NIFs rather than the Elixir build.)
From the tags of this post, I guess you are using Apple M1.
I'm using an Apple M1, too, and I have the same requirement of deploying to x86_64 machines.
- use UTM (GitHub: utmapp/UTM - Virtual machines for iOS and macOS)
- create an x86_64 emulator (not a virtualizer)
Because it’s an emulator, the building process is very slow. Before building, prepare some drinks and drink slowly.
EDIT: After I wrote this, I found @mayel had already written something about it.
Use Docker multi-arch containers. Set up an amd64 container and build the app inside it.
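A sketch of that approach on an ARM host (the hexpm/elixir image tag is an assumption; pin one matching your production OTP and Ubuntu versions):

```shell
# Sketch: run an amd64 Elixir container (emulated via QEMU on ARM) and build inside it
build_in_amd64_container() {
  docker run --rm --platform linux/amd64 \
    -v "$PWD":/app -w /app \
    hexpm/elixir:1.14.5-erlang-25.3.2-ubuntu-jammy-20230126 \
    sh -c 'mix local.hex --force && mix local.rebar --force &&
           mix deps.get --only prod && MIX_ENV=prod mix release'
}
```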
@c4710n I’m curious how slow? I’ve seen how slow emulating ARM on x86 is (using github actions specifically) but haven’t actually tested building a release for x86 emulated on ARM.
In my case, I’m using:
- an x86_64 Linux emulator
- a Docker service running on the above emulator
When building an x86_64 Docker image, I use the above Docker service.
On my MBP 2015, the build can be done in 4 min. But in the emulator on the MBP 2021, the build is about 4 times slower, I think. (I didn't measure it precisely. All I know is that my tea will be cold after the build.)
That’s exactly the reason why I push all my builds to CI from day 1. Usually the free plans are enough for my private stuff and for non private stuff a few bucks a month should be doable. No need to worry about where and how I do my edits, could even be the built in editor of the service hosting the repo for what it’s worth. I totally see that being a bit more work upfront, but it’s documented in code and rather flexible.
Not only slow, but I’ve never got it to actually work for me. It always ends up crashing for some reason.
I’m in the camp of just letting your CI build the images (we use GitHub Actions).
Kind of an aside, but related… BuildKit lets you make multi-arch images pretty easily, using a Kubernetes cluster to farm out the build jobs to machines of different architectures. My M1 is so fast, though, that I like to do the ARM build locally. I have a little bash script to help with this.
Build ARM locally, then use a BuildKit Kubernetes builder to build AMD64 and mash them together in to a multiarch image:
# mbuild <dir> <repo> <tag>
# mbuild elixir $AW_REGISTRY/1.14.0-erlang-25.0.4-alpine-3.16.1 latest

# Build and push the ARM image locally (fast on an M1)
docker build $1 -t $2:arm64
docker push $2:arm64

# Build and push the AMD64 image via the BuildKit builder
docker buildx build $1 \
  --platform linux/amd64 \
  -t $2:amd64 \
  --push

# Stitch both images into one multi-arch manifest
docker manifest create $2:$3 \
  --amend $2:arm64 \
  --amend $2:amd64
docker manifest push --purge $2:$3
Thanks everyone for the answers, I hope they are also useful for any future readers that might find this thread. I ended up building the release in GitLab CI which was surprisingly painless. I haven’t tested deploying the artifact yet, though, but it looks sensible enough when extracted.
Oh, and here’s the configuration I ended up with (permalinked to the current version, so maybe check if it has changed if you come from the future): .gitlab-ci.yml · e037b3c8245f2db487a13539062de726749b6156 · CodeStats / code-stats · GitLab
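For future readers who don't want to chase the link, a minimal sketch of such a job (the image tag, branch name, and "my_app" path are assumptions, not the linked config):

```yaml
# Sketch: build a prod release on every push to main
build-release:
  image: elixir:1.14
  stage: build
  only:
    - main
  script:
    - mix local.hex --force && mix local.rebar --force
    - mix deps.get --only prod
    - MIX_ENV=prod mix release
  artifacts:
    paths:
      - _build/prod/rel/my_app/  # "my_app" is a placeholder release name
```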
Let me know if you think I could do something better!
Very very slow
I'm in a similar situation: my cloud infra is x86_64 (Ubuntu) and I cannot just build on my M1 Mac mini. So I also tried UTM on the M1 with x86 emulation… it was pretty much unusable for builds.
So I run my builds on my 2015 Intel MBP, which runs UTM (virtualization, same architecture). The UTM instance is running Lubuntu, and my builds are way faster than on the M1.
I build releases for ARM systems on an x86 build server.
I take the files for the ARM system’s ERTS, place them on the build server and then reference them in the release I am building.
Here is a snippet from the mix.exs, “releases” section:
# Keep the docs
strip_beams: [keep: ["Docs", "Dbgi"]],
# Path to the ERTS copied from ~/.asdf/installs/erlang/24.2.1
# on the ARM system (the exact path below is illustrative):
include_erts: "erts/arm64-24.2.1",
The only proviso is that building any NIFs or port programs becomes much more complicated. The only port programs I have are very stable, never needing changes, so I include compiled versions as binaries in priv.
I was reading how others are doing it, and here’s what I gathered.
- We can use GitHub Actions-like services.
- We can build directly on the server and replace the instance.
Blue/Green deployment using 2 VPSes: spin up a new VPS identical to the running one, deploy to it like it's staging, then replace the running VPS with the newly spawned one (or move the artifact across), and shut down the VPS that is no longer in use.
Blue/Green deployment, but using tools like Packer & Terraform.
We can use tools like SCP, S3, CROC to move our build artifacts across servers.
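For the artifact-moving step, a sketch (the host alias, paths, and version are placeholders):

```shell
# Sketch: move a release tarball to the production server
ship_artifact() {
  tarball=_build/prod/my_app-0.1.0.tar.gz   # placeholder path
  # Plain scp over SSH ("deploy@prod" is an assumed alias)
  scp "$tarball" deploy@prod:/opt/my_app/releases/
  # Or croc, which relays through an intermediary, so no direct SSH access needed:
  croc send "$tarball"
}
```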
P.S. Still learning by experimenting with all the ways to do it.
I always have a PC at home, powered on, reverse-SSH-tunneled to one of my VPSes. So I do not use my MacBook to build the release, even though mine is x86.
I succeeded recently in building a release on an Apple Silicon Mac inside a UTM-emulated x64 system (I used Ubuntu Server, so as not to waste cycles on a GUI when emulation is so slow). But for this particular service we don't care about performance, so it was more convenient to use Docker with Erlang built from source with the JIT disabled. The kerl flag I used in my Dockerfile was:
ENV KERL_CONFIGURE_OPTIONS "--disable-jit"
This avoids the QEMU/Erlang JIT issue.
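In context, that flag might sit in a Dockerfile along these lines (the OTP version and build dependencies are assumptions):

```dockerfile
# Sketch: build Erlang from source via kerl with the JIT disabled
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    build-essential autoconf libssl-dev libncurses-dev curl git
RUN curl -fsSL -o /usr/local/bin/kerl \
      https://raw.githubusercontent.com/kerl/kerl/master/kerl \
    && chmod +x /usr/local/bin/kerl
ENV KERL_CONFIGURE_OPTIONS "--disable-jit"
RUN kerl build 25.3 25.3-nojit && kerl install 25.3-nojit /opt/erlang
ENV PATH "/opt/erlang/bin:$PATH"
```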