I am currently digging into some CI scripts that use GitHub Actions. They seem to install Elixir/Erlang using asdf and cache them afterwards. They work pretty well at the moment, but this makes me question whether it is smart to compile OTP from source in a CI environment instead of using a pre-built image.
There are a few clear benefits to the asdf approach:
You can use .tool-versions to keep the versions in sync between dev and CI environments;
You don’t have to rely on pre-built images being available, which matters both for older OTP versions and for bleeding-edge ones.
My main concern when I saw the usage of asdf is: how controllable is the environment where OTP is compiled? I can see how this can easily backfire when you want to use an older OTP version that doesn’t support OpenSSL 3+.
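For what it’s worth, the asdf Erlang plugin builds through kerl, which honors the KERL_CONFIGURE_OPTIONS environment variable, so the build can at least be pointed at a specific OpenSSL. The path below is purely illustrative:

```yaml
env:
  # picked up by kerl (used by the asdf Erlang plugin) at build time;
  # point an older OTP at OpenSSL 1.1 instead of the system OpenSSL 3
  KERL_CONFIGURE_OPTIONS: "--with-ssl=/usr/local/opt/openssl@1.1"
```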
I’ve taken a look at some OSS projects like Phoenix and it seems they are usually using erlef/setup-beam@v1:
```yaml
- name: Set up Elixir
  uses: erlef/setup-beam@v1
  with:
    elixir-version: ${{ matrix.elixir }}
    otp-version: ${{ matrix.otp }}
```
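As an aside, if I’m reading the action’s README correctly, it can also pick the versions up from .tool-versions directly, which would cover the dev/CI sync benefit mentioned above; a sketch:

```yaml
- name: Set up Elixir
  uses: erlef/setup-beam@v1
  with:
    version-file: .tool-versions
    version-type: strict
```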
Any thoughts on this? My aim is to find the configuration that is the easiest to use and maintain.
Damn, I was thinking about writing a bash script to achieve the same thing; I guess I should have started by reading the action’s documentation. This is great, thanks a lot!
BTW, on the topic of using .tool-versions: are you using .tool-versions to set the Docker image version for building a release too (assuming you are using Docker containers for deploys)?
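What I have in mind is something along these lines (a sketch; the image name and build args are made up for illustration):

```yaml
- name: Read versions from .tool-versions
  id: versions
  run: |
    echo "elixir=$(awk '$1 == "elixir" {print $2}' .tool-versions)" >> "$GITHUB_OUTPUT"
    echo "otp=$(awk '$1 == "erlang" {print $2}' .tool-versions)" >> "$GITHUB_OUTPUT"

- name: Build release image
  run: |
    docker build \
      --build-arg ELIXIR_VERSION="${{ steps.versions.outputs.elixir }}" \
      --build-arg OTP_VERSION="${{ steps.versions.outputs.otp }}" \
      -t my-app:${{ github.sha }} .
```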
You can also use a version matrix to test the project with multiple Erlang/Elixir combinations, although it causes some issues with `mix format --check-formatted`, as different versions format code differently. I still need to fix that by making the check depend on the version, as sketched below.
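Something like this could work: pin the format check to a single matrix entry (the version numbers and the check_formatted flag are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - elixir: "1.14"
            otp: "25"
          - elixir: "1.16"
            otp: "26"
            # only run the format check on one combination
            check_formatted: true
    steps:
      - uses: actions/checkout@v4
      - uses: erlef/setup-beam@v1
        with:
          elixir-version: ${{ matrix.elixir }}
          otp-version: ${{ matrix.otp }}
      - run: mix deps.get
      - run: mix format --check-formatted
        if: ${{ matrix.check_formatted }}
      - run: mix test
```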
Also: instead of asdf, check out mise; I switched last year and never looked back.
It does pretty much the same thing. After compiling deps and source code for a given mix.lock, won’t you always have a key collision when trying to cache _build, meaning that you will always have to re-compile the new application code until mix.lock changes? Or am I missing something?
To be honest, it’s on my todo list to check whether it works. I wrote it years ago, and since having the best CI was not my main goal, I went along with it once it ran reliably.
Feel free to copy the script and check/improve it (and let me know).
You may notice that the new cache is not saved, as that key already exists. I am coming from the GitLab world, where we can update/delete a cache, so I always want to cache the latest version; I don’t really dig this immutable-cache ideology.
Looking at the official documentation for the cache action, I found how I can achieve that:
A cache today is immutable and cannot be updated. But some use cases require the cache to be saved even though there was a “hit” during restore. To do so, use a key which is unique for every run and use restore-keys to restore the nearest cache. For example:
```yaml
- name: update cache on every commit
  uses: actions/cache@v3
  with:
    path: prime-numbers
    key: primes-${{ runner.os }}-${{ github.run_id }} # Can use time based key as well
    restore-keys: |
      primes-${{ runner.os }}
```
Please note that this will create a new cache on every run and hence will consume the cache quota.
This key ensures that a new cache is generated for each run of the workflow. I am using it the following way currently:
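A sketch of what that looks like for a standard Mix project; the cache paths and the key prefix are illustrative:

```yaml
- uses: actions/cache@v3
  with:
    path: |
      deps
      _build
    # run-unique key so the cache is saved on every run;
    # restore-keys fall back to the nearest previous cache
    key: mix-${{ runner.os }}-${{ hashFiles('mix.lock') }}-${{ github.run_id }}
    restore-keys: |
      mix-${{ runner.os }}-${{ hashFiles('mix.lock') }}-
      mix-${{ runner.os }}-
```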
For my current use-case the only thing I care about is that pipelines run as fast as possible; the amount of storage used by the cache is irrelevant, not to mention that you really have to cache a lot to hit the 10 GB quota for free projects. We are also running this stuff on self-hosted runners for this project, so you could say the amount of space for caching is unlimited.
I use the same strategy to cache the PLT for Dialyzer, and the pipeline is able to run in 20s. Well, the project is still empty, but I think it can easily be kept under 1 minute.
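Same pattern for the PLTs; this sketch assumes dialyxir is configured to write them to priv/plts (a common convention, not the default):

```yaml
- uses: actions/cache@v3
  with:
    path: priv/plts
    key: plt-${{ runner.os }}-${{ hashFiles('mix.lock') }}-${{ github.run_id }}
    restore-keys: |
      plt-${{ runner.os }}-${{ hashFiles('mix.lock') }}-
      plt-${{ runner.os }}-

- run: mix dialyzer
```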
erlef/setup-beam works well for Windows & Linux, but sadly it does not support macOS. I used to use asdf for macOS, but it was too slow: it took around 30 minutes to install Erlang & Elixir.
A few months back I switched to a Nix-based pipeline, and it now takes under a minute. Nix has its own caching mechanism, which works pretty well.
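For reference, the workflow side of that can stay tiny; this is only a sketch and assumes a flake exposing a devShell with Erlang and Elixir (the action versions are illustrative):

```yaml
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v22
  with:
    extra_nix_config: "experimental-features = nix-command flakes"
- run: nix develop --command mix test
```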
I have a sample config using asdf that I was provided. The initial compilation is pretty slow here too, but the idea is to rely heavily on the cache. If you do that, you only have to wait long when you are switching between Erlang versions, which is not something you do often for applications; for libraries, where you want to ensure compatibility between different versions, I agree this is still a problem.
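The core of it is caching the whole asdf directory keyed on .tool-versions, so a source build only happens when the pinned versions change; a sketch, assuming the default asdf install location:

```yaml
- uses: actions/cache@v3
  with:
    path: ~/.asdf
    # OTP is compiled from source only when .tool-versions changes
    key: asdf-${{ runner.os }}-${{ hashFiles('.tool-versions') }}
```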
This might be worth investigating, as I hate the immutable cache action that GitHub provides by default; it really turns something that should have been easy into an ordeal.