Hi there,
I started building an open-source localization platform heavily inspired by GitHub: projects are Git repositories that a Phoenix app, which users interact with, needs access to. I don't have prior experience designing systems of this kind, so I looked at how GitLab does it, since they solve the exact same problem, and came across their Gitaly project, which they use to set up and scale the infrastructure that stores and provides access to Git repositories. That got me wondering how this would map to the Elixir/Erlang world, where some of the challenges they solved, like client–server communication via RPC, might already be covered by OTP building blocks.
The system I envision, which I'm sharing here for feedback and advice, would consist of a Phoenix web app plus servers with attached storage that act as Elixir nodes the Phoenix app can interact with for UI-driven operations (e.g. presenting a repository's directory structure, creating branches). However, I have some questions that I'm struggling to answer due to my limited experience with OTP, Erlang, and Elixir:
- Can I deploy new versions of the server Elixir apps with no downtime? One of Erlang's appealing traits is hot code upgrades, but I haven't found much literature on this. Is this a recommended path? Would you do it differently?
- I'll need to auto-scale the servers as new repositories get created, tracking the nodes in the database used by the Phoenix app. I was considering Kubernetes as the orchestration tool, but I wonder if I should go for something simpler: for example, use a provider API (e.g. DigitalOcean's) to create new droplets, and publish new versions of the servers over SSH using Elixir releases.
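To make the Phoenix-to-storage-node interaction concrete, here's a minimal sketch of what I imagine Distributed Erlang buys me instead of an explicit RPC layer like Gitaly's gRPC. `RepoServer` and `RepoClient` are hypothetical names, and the canned file listing stands in for what would really be a `git ls-tree` call on the storage node's disk:

```elixir
# Hypothetical module that would run on a storage node. On a real node
# this would read the repository on disk (e.g. by shelling out to git);
# here it returns a canned listing so the sketch is self-contained.
defmodule RepoServer do
  def list_tree(_repo, _path), do: ["README.md", "lib", "mix.exs"]
end

# What the Phoenix app would call after looking up (e.g. in Postgres)
# which node hosts the repository.
defmodule RepoClient do
  def list_tree(node, repo, path) do
    # :erpc.call/5 (OTP 23+) runs the function on the remote node and
    # returns the result; it also executes locally when given node(),
    # which keeps this sketch runnable without a second VM.
    :erpc.call(node, RepoServer, :list_tree, [repo, path], 5_000)
  end
end
```

The appeal is that once the nodes are connected (`Node.connect/1` or a library like libcluster), the "RPC" part is just a function call with a timeout.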
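For the "simpler than Kubernetes" path, the provisioning step could be little more than building the JSON body for DigitalOcean's droplet-create endpoint (`POST /v2/droplets`, authenticated with a bearer token). The field names below come from DigitalOcean's public API; the region, size, and image values are just illustrative, and shipping the Elixir release over SSH would happen after the droplet boots:

```elixir
# Sketch of the request payload for creating a storage-node droplet.
# Only builds the map; the actual HTTP POST and the SSH deploy of the
# release are left out.
defmodule Provisioner do
  def droplet_request(name) do
    %{
      "name" => name,
      "region" => "fra1",
      "size" => "s-2vcpu-4gb",
      "image" => "ubuntu-22-04-x64",
      # Tag so the Phoenix app (or a cleanup job) can find these later.
      "tags" => ["storage-node"]
    }
  end
end
```

After the API confirms the droplet, its node name and IP would be written to the database table the Phoenix app uses to route repository operations.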
Overall, I’m curious to know what your thoughts are on the system proposed above and if you’d do things differently.
Thanks in advance.