Gitpod instead of CI?

For a few years now, I have been annoyed at how much effort it takes to tune caching in CI so that it gets close to the rebuild times developers are used to in local development. At one job, I had developers log into a dedicated EC2 instance for their branch and run builds, tests, and preview deploys by SSH'ing in and running commands. It was amazingly fast, and I didn't have to come up with and maintain complex CI caching rules. It was slightly economically inefficient, but the team was small.

Going back to my favorite setup, I wonder if anybody has tried trusting developers, forgoing CI, and simply asking everyone to run the test suite in a Gitpod terminal before merging. I think Gitpod could be a nice compromise between my former EC2 setup and cheaper containerized environments, without having to maintain complex caching rules in a CI workflow language.

Note: I have done most of this thinking at various jobs while waiting for CI to finish before merging my PRs.


At my previous jobs I was using custom GitLab Docker runners. Setting up a cache on them is as simple as adding a few lines to the config, and you can even store the cache in a remote location. I was running tests and Credo style checks on big projects (more than 40 dependencies and 200+ source files) in 30 seconds each; it takes longer for the runner to start the Docker environment than to run the pipeline. You can deploy your runners on any instance running Linux, and since runners use a pull model, you don't have to worry about any proxy configuration.
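For anyone curious, the "few lines in the config" look roughly like this in `.gitlab-ci.yml` (a sketch, not my exact setup; the image tag and cached paths are examples for an Elixir project):

```yaml
# .gitlab-ci.yml — per-branch cache for dependencies and build artifacts
test:
  image: elixir:1.14
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - deps/
      - _build/
  script:
    - mix deps.get
    - mix test
```

Remote cache storage (e.g. S3) is configured separately in the runner's `config.toml` under `[runners.cache]`.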

The advantage of using a full-fledged CI is that you can also run integration tests, which is very important in a lot of projects.


You may be interested in earthly. Several official packages in the Elixir/Phoenix ecosystem use it.

There are also things like dev containers (see "Create a development container" in the Visual Studio Code Remote Development docs), which set up local Docker containers for development that VS Code connects to. It's not CI/CD specific, but you can create local containers that more easily "graduate" to production environments because you're using the same tooling locally.
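To make the idea concrete, a minimal `.devcontainer/devcontainer.json` for an Elixir project might look like this (the image tag, port, and command are assumptions; adjust to your project):

```json
{
  "name": "elixir-dev",
  "image": "elixir:1.14",
  "forwardPorts": [4000],
  "postCreateCommand": "mix deps.get"
}
```

VS Code detects this file on open and offers to reopen the folder inside the container.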

I haven't used earthly or dev containers beyond throwaway projects. Gitpod is absolutely an option, but I'm wondering what exactly you're looking for. I use CI/CD for automated test suites and deployment. Developers can run tests locally, but CI is a means of verifying we haven't broken the build before deployment. If your team is small enough, you can always trust them not to break things, but as a developer working in Laravel at the moment, I have CI in place because I can't even trust myself. My team also primarily uses it for the automated deployment step, since some of our older projects lack meaningful test coverage.

As the person on my team tuning caching for builds, I understand your pain points. The biggest bottleneck for us was the network transfer of OS and framework packages. "Baking" OS updates and common framework packages into the container running on GitLab CI reduced build times from 5 minutes to 1. That alone was a huge win considering how many jobs have executed in the years that change has been in place; we've likely saved days, if not weeks, by now. Using a dedicated runner instead of the shared runners also greatly reduced wait times. We do cache framework packages, but that cache is keyed by branch, so new feature branches take longer because there is no cache for the initial push/pipeline job.
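A sketch of that per-branch keying in `.gitlab-ci.yml` terms, with a fallback so brand-new branches can reuse the default branch's cache instead of starting cold (paths and the fallback name are examples, not our actual config):

```yaml
variables:
  CACHE_FALLBACK_KEY: "main"      # new branches fall back to main's cache

build:
  cache:
    key: "$CI_COMMIT_REF_SLUG"    # per-branch; cold on the first pipeline
    paths:
      - deps/
      - _build/
  script:
    - mix deps.get
    - mix compile
```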

Are you doing things like hosting a network-local instance of Hex with your common packages? That would reduce the time for the initial build and dependency download. There may be more in-depth caching strategies around building the container that could be a little less painful overall. If my team weren't so pleased with GitLab, we'd be looking at earthly or GitHub Actions, now that they're rolling out dedicated runners there as well.
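If anyone wants to try the local-mirror approach, Hex supports pointing dependency fetches at a mirror via the `HEX_MIRROR` environment variable (the URL below is a placeholder for an internal host):

```
# Fetch dependencies from a network-local Hex mirror instead of hex.pm
HEX_MIRROR=https://hex.internal.example.com mix deps.get
```

Setting this in the CI job's variables means every pipeline pulls packages over the local network.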


I didn't implement the caching logic at my current work, but we are starting to devise subtle caching rules for the final steps in the pipeline, like determining whether generated GraphQL or internationalization files should be cached for front-end builds. The cache keys for those items near the end of the build get more complicated and subtle, and they are likely to break as the codebase changes and people forget to update the cache key when the source code is rearranged. Our GHA workflow file is slowly moving from some people understanding it to maybe one person understanding it.
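The fragility comes from keys like the following (a hypothetical sketch; the paths and names are made up for illustration): if the schema files are later moved outside the glob, `hashFiles` silently stops seeing them and the cache never invalidates.

```yaml
# Hypothetical GitHub Actions step caching generated GraphQL artifacts
- uses: actions/cache@v3
  with:
    path: src/generated/graphql
    # If schema files move outside this glob, the hash no longer changes
    # with them and the cache silently goes stale.
    key: ${{ runner.os }}-graphql-${{ hashFiles('schema/**/*.graphql') }}
```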

All of the positives of CI are worth the effort in my experience, but being able to add a bit of Gitpod config to my repo (even just on a branch) and then build the project by appending the repo URL to the Gitpod URL for a quick launch is certainly handy! Especially when spinning up other people's public projects, since I just need to fork, add my Gitpod config files for Elixir to the project, and type in the new URL to launch.
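For anyone who hasn't tried it, the quick-launch URL is just the Gitpod host with the repository URL appended after a `#` (the repository path here is a placeholder):

```
https://gitpod.io/#https://github.com/your-user/your-elixir-project
```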

Previously, I found the build process for Elixir on Gitpod problematic: it took too long to rebuild and install, and the workspace timeout was 5 minutes on the free plan, so it became annoying if I was just reading some documentation in between bug fixing and then had to wait 5 minutes to restart the workspace. The install process I was using also stopped working recently, because Erlang Solutions stopped hosting the latest Elixir 1.14 packages for Ubuntu 22.04, which Gitpod uses as a base image. The main reason this is a headache remover for me is that I work at a post-secondary institution, which means we get Windows computers for local work. While all my deployments are to various Linux-flavoured systems, managing multiple versions of Elixir on Windows is excruciating and to be avoided at all costs. Gitpod becomes an easy-to-use-and-delete sandbox environment that I don't maintain and, more importantly, that I don't need to convince my managers to provision and resource on our systems or in the cloud.

So, through some perseverance over the last couple of days, I was able to write some config scripts that do the lion's share of installing Erlang and Elixir using asdf (which I had never been able to get working before) in a custom Docker image. That saves significant time on workspace restarts, because Gitpod caches the image: as long as you don't update the customized .gitpod.Dockerfile, it will pull it back down in 30 seconds or so. I also build on the "gitpod/workspace-postgres" base image, which persists the database between restarts and keeps it ready for Phoenix's Ecto to use. Since your workspace files (files under the Git repo file structure) are saved between restarts, the compiled dependencies only need to be compiled the first time as well. This setup has allowed me to take a large project that could take 10-15 minutes on the initial build and restart it any time in under 2 minutes with the latest repo commits, which of course is the goal of Gitpod; without that advantage it wastes more time than you gain.
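The rough shape of that Dockerfile, for anyone who wants the idea without the gist (a sketch, not the gist's exact contents; the asdf branch and Erlang/Elixir versions are assumptions):

```dockerfile
# .gitpod.Dockerfile — sketch; pin versions to whatever your project needs
FROM gitpod/workspace-postgres

# Install asdf plus Erlang and Elixir for the gitpod user, so the toolchain
# is baked into the cached image rather than rebuilt on every workspace start
RUN git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.11.3 \
 && echo '. ~/.asdf/asdf.sh' >> ~/.bashrc \
 && bash -c '. ~/.asdf/asdf.sh \
      && asdf plugin add erlang \
      && asdf plugin add elixir \
      && asdf install erlang 25.3 \
      && asdf install elixir 1.14.4-otp-25 \
      && asdf global erlang 25.3 \
      && asdf global elixir 1.14.4-otp-25'
```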

I made a GitHub gist of my three scripts, which work in tandem. You can certainly strip them down to be more streamlined for your purposes, but I wanted them to work with new and old Elixir projects without having to rejig them, so they have some backward-compatible pieces and some redundancies. Please use, comment on, and fork them publicly for wider community usage.

Elixir (for Phoenix w/ Postgres) Deployment on Gitpod using ASDF and Dockerfile
