I would like to start using a CI provider for a couple of Phoenix projects I have in mind and was wondering if anyone has any experience of using a CI provider with Phoenix that they would be willing to share?
I’ve never done any CI before, and I guess it may be overkill for a single ‘hobbyist’ developer like myself but I have a couple of reasons for wanting to go down this route:
- The projects I have in mind have the potential to grow to require additional developers, so I would like to prepare now and gain a good understanding, and first-hand experience, of the CI/CD landscape in case it's needed if the projects are successful.
- I’m selling my non-IT-related business and would like to look for some dev work at some point. Therefore, I would like to add some CI experience to my toolkit.
I have taken a brief look at CircleCI over the weekend and will be looking at the other options available as my time allows.
I’ve heard good things about CircleCI, but I’ve been through their forums and there seem to be a lot of unanswered questions, so I’m not sure how helpful they and their community are, especially for noobs like myself. Their ‘hello world’ intro doesn’t seem to work as per their docs either, although I will be taking another look. There also seems to be some confusion among their community about the available plans that isn’t being clarified. All in all, I’m a little disappointed with what I’ve seen. Apologies to CircleCI if any of my comments are inaccurate or just plain wrong; this is just my first impression, and I will come back and update this should that impression change.
I will be using Cypress and Percy for E2E testing along with the normal unit and integration tests for Phoenix and Elm.
Any experience/advice that anyone can share regarding this topic is greatly appreciated.
Personally I’ve only used 3 services so far:
(Well, some more, but they were either hosted internally by a company, or even home-brewed by them, or it was so long ago that those services might not be available anymore or have changed fundamentally.)
As most of my personal projects are currently at GitLab, I use its internal CI most of the time, as it integrates very well into the UI and general UX of GitLab. I like that I can run my builds on Docker images, which lets me control many aspects of the environment I build in.
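To illustrate what that looks like in practice, here is a hedged sketch of a minimal `.gitlab-ci.yml` for running a Phoenix test suite on a Docker image. The Elixir image tag, the Postgres service version, and the database URL are all illustrative assumptions you would adjust to your project:

```yaml
# Minimal sketch of a .gitlab-ci.yml for an Elixir/Phoenix app.
# Image tags and the DATABASE_URL are assumptions, not a prescription.
image: elixir:1.9

services:
  - postgres:11   # hostname inside the job is "postgres"

variables:
  MIX_ENV: test
  POSTGRES_PASSWORD: postgres

before_script:
  - mix local.hex --force
  - mix local.rebar --force
  - mix deps.get

test:
  script:
    - mix test
```

The point is mainly that the `image:` key gives you full control over the build environment, which is the advantage described above.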
When I’m at GitHub with a project I try to avoid Travis at all costs, as it provides pre-made environments which you have to bend into the shape you want; this often costs valuable build time if you need to install the proper tooling first (or even compile it from scratch). Also, since there is no pipeline/workflow but only a single script, which might be run in different environments according to a build matrix, things can get pretty involved. And I have not yet found a way to run a deployment only when all builds in the matrix have succeeded.
Because of this I tried Circle a few weeks ago on a fork of an open-source project. I didn’t miss anything in the docs, but I didn’t do the hello world or any other tutorial either… Its approach is pipeline/workflow based, like GitLab’s, but with even finer-grained control and therefore also a considerably more complicated configuration format.
Both Travis and Circle integrate into GitHub as well as the GitHub API allows. Still, you have to leave GitHub for a lot of things, such as viewing the logs.
Having said this, I’m waiting for GitHub Actions, they might actually be the thing that brings me back to GitHub.
For people without CI/CD or sysadmin experience I’d suggest looking at https://buddy.works/. It still has bash scripts in all the places you need them, but quite a lot of the “boilerplate” setup is generalized in a proper interface so you can get started without heaps of bash/yaml/mix of both.
Thanks, I’ll spend some more time with Circle (it has an Orb for Cypress, which should simplify a few things I think) and then take a look at GitLab.
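For anyone else going this route, the Cypress Orb does keep the config very short. A hedged sketch of a `.circleci/config.yml` (the orb version pin is an assumption; check the CircleCI Orb registry for the current one):

```yaml
# Sketch of a .circleci/config.yml using the Cypress Orb.
# The orb version is an illustrative assumption.
version: 2.1

orbs:
  cypress: cypress-io/cypress@1

workflows:
  build:
    jobs:
      # The orb's prebuilt job installs and caches dependencies,
      # then runs the Cypress test suite.
      - cypress/run
```

This only covers the E2E side; the Phoenix/Elm unit and integration tests would be separate jobs in the same workflow.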
Appreciate your time to explain.
Thank you, I will take a look, it may be a good place to start in the short term - and a way to improve my bash scripting.
Thanks for the input.
I have used pretty much all of the CI providers, both commercially and for hobby.
Gitlab CI is one of the best IMHO, since they offer free private git repos and very generous CI credits.
Github now offers free private repos, so if you use github you can connect this to pretty much any CI/CD process now.
Semaphore recently reduced their pricing, and their free plan is very reasonable. ($10 free credits, which is like 1300 minutes). I quite like it.
GitHub is coming out with GitHub Actions soon, which will be a game-changer IMHO.
Buddyworks was pretty great when I used it, but their pricing is pretty steep now.
Azure Devops is a nightmare of a UI.
I also recommend bumping up your CI server’s CPUs, as Elixir compiles much faster with multiple CPUs (though there is a limit, lol). Experiment and see what works.
I highly recommend checking out Docker for CI in the medium term; it’ll make your builds much faster and reproducible. But if you want something simple, just start simple.
I would recommend GitLab CI because I don’t think any competitor matches such an offering. I use the cloud GitLab version with self-hosted workers in order to share the container cache between builds.
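Alongside sharing the container cache on the workers, GitLab CI can also cache build artifacts between runs at the job level. A hedged sketch for a typical Mix project (the cache key and paths are assumptions):

```yaml
# Sketch: caching fetched deps and compiled artifacts between
# GitLab CI runs, keyed per branch. Key and paths are assumptions.
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - deps/
    - _build/
```

This avoids refetching and recompiling dependencies on every pipeline, which is usually the biggest time sink for Elixir builds.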
Also, I wish GitLab were more stable, because not all companies can afford to wait a couple of hours if something is wrong with the service.
GitHub is also actively moving into the CI ecosystem with its Actions offering, which I think will be as great and stable as everything from GitHub.
For my personal/hobby projects I’ve fully embraced GitLab and its CI. At work we are using CircleCI, which I think is good enough. I would also like to explore the CI solution from Codeship.
Can you clarify this point, please? Is there a number of CPU cores that’s overkill for Elixir compilation, beyond which going higher nets no compilation speed increase?
If you do not use macros then you can spin up as many processes as there are modules. In general you need to find the widest layer in `mix xref graph`.
Valuable, thank you. Since I am about to receive a machine with 10 physical / 20 virtual cores, I believe it should get nicely saturated. Will know soon.
By default Erlang starts a number of schedulers equal to the physical cores, not the virtual ones.
That might be, but without me touching anything, I am getting 8 on an i7-4870HQ, which has 4 physical and 8 virtual cores:
Hauleth probably meant “virtual” in the sense of like cgroup cpu limits and not how cpu cores can have multiple threads?
Ah, could be. Not sure it’s relevant for a CI/CD machine though. Maybe it is.
In CI (especially CI as a service) you often don’t get a VM to run in, but containers. Those see the “real” number of CPUs/cores, while they might be limited via cgroup settings to using only 1 or 2 of them.
If, though, you run your tests in a virtualized rather than a containerized environment, this distinction shouldn’t matter much for you.
It could. If your build runs in a container with a CPU limit of 1 but the host has 32 CPUs, then Erlang will still bring up 32 schedulers, which will likely slow down compilation and anything else being done.
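One workaround sketch, assuming your CI service lets you set environment variables: the BEAM reads `ERL_FLAGS`, and the `+S` flag caps the scheduler count, so you can pin it to the container's actual CPU limit yourself. Shown in GitLab-CI-style syntax; the `+S 2:2` (total:online) value is an illustrative assumption matching a hypothetical 2-CPU limit:

```yaml
# Sketch: pin the BEAM scheduler count so a cgroup-limited container
# doesn't start one scheduler per host core. "+S 2:2" is an assumption
# for a container limited to 2 CPUs.
variables:
  ERL_FLAGS: "+S 2:2"
```

You can check what the VM actually picked up by printing `System.schedulers_online/0` at the start of a test run.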
I wonder whether that actually happens in all proprietary CI services, then. Or do they take special care to pass the proper scheduler count on startup? It would be really useful to know, but none of the creators of such services are around AFAIK.
It’ll depend on if the service runs your job in a VM or a container.
I’m not sure what you mean by “they take special care”, are there CI services that have special elixir support beyond like cirrus and github actions which just provide configs that run “mix test”?
No idea myself. What I had in mind is that I wonder if they pass BEAM startup options that modify the scheduler count. I’d really like to know that.