Gitpod instead of CI?

For a few years now, I have been annoyed at how much effort it takes to tune caching in CI so that it gets close to the rebuild times devs are used to in local development. At one job, I had developers SSH into a dedicated EC2 instance for their branch and run builds, tests, and preview deploys from the command line. It was amazingly fast, and I didn't have to come up with and maintain complex CI caching rules. It was slightly economically inefficient, but the team was small.

Going back to my favorite setup: has anybody tried trusting developers, forgoing CI, and just asking everybody to run the test suite in a Gitpod terminal before merging? I think Gitpod could be a nice compromise between my former EC2 setup and cheaper containerized environments, without having to maintain complex caching rules in a CI workflow language.

Note: I have done most of this thinking at various jobs while waiting for CI to finish before merging my PRs.

2 Likes

At my previous jobs I was using custom GitLab Docker runners. Setting up caching on them is as simple as adding a few lines to the config, and you can even store the cache in a remote location. I was running tests and Credo style checks on big projects (more than 40 dependencies and 200+ source files) in 30 seconds each; it takes longer for the runner to start the Docker environment than to run the pipeline. You can deploy your runners on any instance running Linux, and since the system uses a pull model you don't have to worry about any proxy configuration.
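For reference, a minimal sketch of what that caching config can look like in `.gitlab-ci.yml` for an Elixir project. The image, paths, and commands here are illustrative assumptions, not a drop-in config:

```yaml
# Hypothetical .gitlab-ci.yml fragment: cache deps/ and _build/,
# keyed on the lockfile so a changed mix.lock invalidates the cache.
test:
  image: elixir:1.15
  cache:
    key:
      files:
        - mix.lock
    paths:
      - deps/
      - _build/
  script:
    - mix deps.get
    - mix test
    - mix credo
```

The `cache:key:files` form means you never hand-maintain a key string; GitLab derives it from the lockfile contents.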

The advantage of using a full-fledged CI is that you can also run integration tests, which is very important in a lot of projects.

1 Like

You may be interested in https://earthly.dev/. Several official packages in the Elixir/Phoenix ecosystem use it.

There are also things like dev containers (Create a development container using Visual Studio Code Remote Development) that set up local Docker containers for development, with VS Code connecting into the container. It's not CI/CD-specific, but you can create local containers that more easily "graduate" to production environments because you're using the same tooling locally.

I haven't used Earthly or dev containers beyond throwaway projects. Gitpod is absolutely an option, but I'm wondering what exactly you're looking for. I use CI/CD for automated test suites and deployment. Developers can run tests locally, but CI is a means of verifying we haven't broken the build before deployment. If your team is small enough you can always trust them not to break things, but as a developer working in Laravel at the moment, I have CI in place because I can't even trust myself. My team also primarily uses it for the automated deployment step, as some of our older projects lack meaningful test coverage.

As the person on my team tuning caching for builds, I understand your pain points. The biggest bottleneck for us was the network transfer of OS and framework packages. "Baking in" OS updates and common framework packages to the container running on GitLab CI reduced build times from five minutes to one. That alone was a huge win considering how many jobs have executed in the years that change has been in place; we've likely saved days, weeks, or months by now. Using a dedicated runner instead of the shared runners also greatly reduced wait times. We do cache framework packages, but that's done per branch, so new feature branches take longer since there is no cache for the initial push/pipeline job.
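The "baking in" idea can be sketched as a custom base image for CI jobs. This is a hedged example assuming an Elixir project; the versions and warmed packages are made up:

```dockerfile
# Hypothetical CI base image: pre-install OS packages and Hex/Rebar
# so every pipeline job skips those network transfers.
FROM elixir:1.15

RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential git \
    && rm -rf /var/lib/apt/lists/*

RUN mix local.hex --force && mix local.rebar --force

# Optionally pre-fetch and compile common dependencies so the
# per-branch cache only has to cover the delta.
COPY mix.exs mix.lock ./
RUN mix deps.get && mix deps.compile
```

You rebuild and push this image on a schedule (or when the lockfile changes) instead of paying the download cost in every job.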

Are you doing things like hosting a network-local instance of Hex with your common packages? That will reduce the time for the initial build and dependency download. There may be more in-depth caching strategies around building the container that are a little less painful overall. If my team weren't so pleased with GitLab, we'd be looking at Earthly or GitHub Actions now that GitHub is rolling out dedicated runners as well.
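Pointing Mix at a local mirror is just an environment variable. `HEX_MIRROR` is a real Hex setting, but the hostname below is a hypothetical internal mirror:

```shell
# Assumes a mirror running at a made-up internal hostname.
export HEX_MIRROR=https://hexpm-mirror.internal
mix deps.get   # dependency tarballs now come from the mirror
```

Set the variable in the runner's environment and every job picks it up without touching project code.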

2 Likes

I didn't implement the caching logic at my current work, but we are starting to devise subtle caching rules for the final steps of the pipeline, like determining whether generated GraphQL or internationalization files should be cached for front-end builds. The cache keys for those items near the end of the build get more complicated and subtle, and they are likely to break as the codebase changes and people forget to update the cache key when the source code is rearranged. Our GHA workflow file is slowly moving from some people understanding it to maybe one person understanding it.
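One way to make those late-pipeline keys less fragile is to derive them from the files that actually feed the codegen, rather than hand-writing them. A sketch using GitHub Actions' `actions/cache`; the paths and glob patterns are placeholder assumptions for generated GraphQL/i18n output:

```yaml
# Hypothetical GitHub Actions step: key the cache on the codegen
# inputs, so rearranging source files invalidates it automatically
# instead of relying on someone remembering to update the key.
- uses: actions/cache@v4
  with:
    path: |
      src/generated/graphql
      src/generated/i18n
    key: codegen-${{ hashFiles('schema.graphql', 'locales/**/*.json') }}
```

The cache key then stays correct under refactors as long as the `hashFiles` patterns cover the real inputs.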