Deploying a release from an ARM laptop

Building on another server, with GitLab CI, or with GitHub Actions are definitely options, but those services are probably building in Docker under the hood. That's what I would suggest: Docker seems like a good option.


A $4 VM would increase my monthly costs by 20%. :sweat_smile: I guess I’ll look into doing it in GitLab CI. Thanks!

I thought Docker was just a virtualization environment, and thus I couldn't run amd64 containers on ARM unless I went through something like QEMU, which would be deadly slow, right?

I think it uses QEMU behind the scenes, yes; whether it's deadly slow, I have no idea. I think that's something you need to test for yourself.
Anyway, if it's just for building the release, I can't imagine that even a 2-5x slowdown is a deal breaker?

We use Hetzner too. With their CLI tool, you can deploy a new VM, install OTP, rsync your project over, build the release, download it, and shut the VM down, incurring about 15 minutes of usage in total… so just a few € cents. :grinning:
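A minimal sketch of that workflow with the hcloud CLI. The server name, type, image, and paths here are assumptions, not from the post, and installing Erlang/OTP on the VM is left as a comment:

```shell
#!/bin/sh
set -e

# Assumed names; pick a server type/image that matches your target.
hcloud server create --name build-box --type cx22 --image ubuntu-24.04
IP=$(hcloud server ip build-box)

# ... install Erlang/OTP and Elixir on the VM here (apt, asdf, etc.) ...

# Copy the project over and build the release on the x86_64 VM.
rsync -az --exclude _build --exclude deps ./ root@"$IP":/root/app
ssh root@"$IP" 'cd /root/app && mix deps.get --only prod && MIX_ENV=prod mix release'

# Fetch the artifact (assumes your release config adds the :tar step),
# then delete the VM so you only pay for the minutes used.
scp root@"$IP":/root/app/_build/prod/*.tar.gz ./
hcloud server delete build-box
```

The VM only exists for the duration of the build, which is where the "a few cents" figure comes from.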


Check out https://getutm.app (which builds on QEMU) for running OSes with different architectures on Apple Silicon. But really, the easiest option is using CI, especially if you're building for x86, which those services tend to run on. (I use CI for x86 builds, but I actually also build releases on my Mac to deploy to ARM servers, since that takes minutes instead of 1+ hour on CI, which uses QEMU for it. To be fair, a lot of the slowness may be due to Rust NIFs rather than the Elixir build.)

From the tags of this post, I guess you are using Apple M1.

I'm using an Apple M1, too, and I have the same requirement of deploying to x86_64 machines.

My solution:

  1. use GitHub - utmapp/UTM: Virtual machines for iOS and macOS
  2. create an x86_64 emulator (not a virtualizer)

One con:
Because it’s an emulator, the building process is very slow. Before building, prepare some drinks and drink slowly. :wink:

EDIT: After I wrote this, I found @mayel had already written something about it.


Use Docker multi-arch containers: set up an amd64 container and build the app inside it.
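For example, with QEMU/binfmt emulation available, you can run an amd64 Elixir image on the ARM host and build inside it. The image tag, mount path, and build steps here are assumptions:

```shell
# Run an amd64 container on the ARM host (Docker falls back to QEMU emulation)
# and assemble the release inside it; image tag and paths are assumptions.
docker run --rm --platform linux/amd64 \
  -v "$PWD":/app -w /app \
  hexpm/elixir:1.14.0-erlang-25.0.4-alpine-3.16.1 \
  sh -c 'apk add --no-cache git build-base && \
         mix local.hex --force && mix local.rebar --force && \
         MIX_ENV=prod mix do deps.get --only prod, release'
```

The resulting release under `_build/prod` is then an x86_64 build, at the cost of the emulation slowdown discussed above.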


@c4710n I’m curious how slow? I’ve seen how slow emulating ARM on x86 is (using github actions specifically) but haven’t actually tested building a release for x86 emulated on ARM.

In my case, I’m using:

  1. an x86_64 Linux emulator
  2. a Docker service running on the above emulator

When building the x86_64 Docker image, I use the above Docker service.

On my MBP 2015, the build can be done in 4 minutes. But in the emulator on my MBP 2021, the build is about 4 times slower, I think. (I didn't measure it precisely. All I know is that my tea will be cold after the build. :wink: )


That’s exactly the reason why I push all my builds to CI from day 1. Usually the free plans are enough for my private stuff and for non private stuff a few bucks a month should be doable. No need to worry about where and how I do my edits, could even be the built in editor of the service hosting the repo for what it’s worth. I totally see that being a bit more work upfront, but it’s documented in code and rather flexible.


Not only slow, but I’ve never got it to actually work for me. It always ends up crashing for some reason.

I’m in the camp of just letting your CI build the images (we use GitHub Actions).

Kind of an aside, but related… BuildKit lets you make multi-arch images pretty easily, using a Kubernetes cluster to farm out the build jobs to machines of different architectures. My M1 is so fast, though, that I like to do the ARM build locally. I have a little bash script to help with this.

Build ARM locally, then use a BuildKit Kubernetes builder to build AMD64, and mash them together into a multi-arch image:

#!/bin/bash
# mbuild <dir> <repo> <tag>
# mbuild elixir $AW_REGISTRY/1.14.0-erlang-25.0.4-alpine-3.16.1 latest

set -e

TAG=${3:-latest}

docker build $1 -t $2:arm64
docker push $2:arm64

docker buildx build $1 \
  --push \
  --platform linux/amd64 \
  -t $2:amd64 \
  --builder aw-amd64-builder-

docker manifest create $2:$TAG \
  --amend $2:amd64 \
  --amend $2:arm64

docker manifest push --purge $2:$TAG

Thanks everyone for the answers; I hope they're also useful for any future readers who find this thread. :slight_smile: I ended up building the release in GitLab CI, which was surprisingly painless. I haven't tested deploying the artifact yet, but it looks sensible enough when extracted. :grin:


Oh, and here’s the configuration I ended up with (permalinked to the current version, so maybe check if it has changed if you come from the future): .gitlab-ci.yml · e037b3c8245f2db487a13539062de726749b6156 · CodeStats / code-stats · GitLab

Let me know if you think I could do something better!


Very very slow :wink:

I'm in a similar situation: my cloud infra is x86_64 (Ubuntu) and I cannot just build on my M1 Mac mini. So I also tried UTM on the M1 with x86 emulation… it was pretty much unusable for builds.

So I run my builds on my 2015 Intel MBP, which runs UTM (virtualization) for the same architecture. The UTM instance runs Lubuntu, and my builds are way faster than on the M1.


I build releases for ARM systems on an x86 build server.

I take the files for the ARM system’s ERTS, place them on the build server and then reference them in the release I am building.

Here is a snippet from the "releases" section of mix.exs:

  releases: [                                                                                                               
    sra_rps: [
      overwrite: true,
      include_executables_for: [:unix],

      # Keep the docs
      strip_beams: [keep: ["Docs", "Dbgi"]],

      # Path to the ERTS copied from ~/.asdf/installs/erlang/24.2.1
      include_erts: "/usr/local/share/erlang_rps/erts-12.2.1",
      ...

The only proviso is that building any NIFs or port programs becomes much more complicated. The only port programs I have are very stable and never need changes, so I include compiled versions as binaries in priv.
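The ERTS-copying step described above can be sketched as follows. The host name and source path here are assumptions (the destination path matches the mix.exs snippet); the important part is that the copied ERTS version matches the OTP version you build with:

```shell
# Copy the target (ARM) system's ERTS onto the x86 build server, so
# include_erts: can point at it. Host and source path are assumptions.
rsync -az arm-host:~/.asdf/installs/erlang/24.2.1/erts-12.2.1/ \
      /usr/local/share/erlang_rps/erts-12.2.1/
```

Since BEAM bytecode is architecture-independent, bundling the target's ERTS is all a pure-Elixir release needs to run on the other architecture.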


I was reading how others are doing it, and here’s what I gathered.

  1. We can use GitHub Action-like services.
  2. We can build directly on the server and replace the instance.
  3. Blue/green deployment using 2 VPSes: spin up a new VPS identical to the running one, deploy to it as if it were staging, then either swap it in for the running VPS or move the artifact across, and shut down the VPS that is no longer in use.
  4. Blue/green deployment, but using tools like Packer & Terraform.

We can use tools like scp, S3, or croc to move our build artifacts across servers.
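For illustration, the three transfer options look like this (app name, hosts, bucket, and paths are all hypothetical):

```shell
# Plain scp to the target server:
scp _build/prod/myapp-0.1.0.tar.gz deploy@server:/opt/myapp/

# Via S3 with the AWS CLI (upload here, download on the server):
aws s3 cp _build/prod/myapp-0.1.0.tar.gz s3://my-artifacts/myapp/

# Peer-to-peer with croc (prints a code phrase to enter on the receiver):
croc send _build/prod/myapp-0.1.0.tar.gz
```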

P.S. Still learning by experimenting with all the ways to do it.


I always have a PC at home, powered on and reverse-SSH-tunneled to one of my VPSes, so I don't use my MacBook to build the release, even though mine is x86.


I recently succeeded in building a release on an Apple Silicon Mac inside a UTM-emulated x64 system (I used Ubuntu Server, so as not to waste cycles on a GUI when emulation is already so slow). But for this particular service we don't care about performance, so it was more convenient to use Docker with Erlang built from source with the JIT disabled. The asdf/kerl flag I used in my Dockerfile was:

ENV KERL_CONFIGURE_OPTIONS "--disable-jit"

This avoids the QEMU Erlang JIT issue.
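In context, a minimal Dockerfile sketch (base image, packages, and the elided install steps are assumptions; only the ENV line is from the post):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl git build-essential libssl-dev libncurses-dev

# kerl reads this when configuring the Erlang source build, so the
# resulting runtime skips the JIT and behaves under qemu emulation:
ENV KERL_CONFIGURE_OPTIONS "--disable-jit"

# ... install asdf/kerl and run `asdf install erlang <version>` here ...
```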


Looks like there's a better way. This didn't work for me on Erlang/OTP 25.1.1, but it seems to have worked on 25.3.2.2.

I'm using Docker in buildx mode to create releases for amd64. I wouldn't call it fast, but it's adequate.