I currently run a Phoenix application that might get a lot of traffic in the future: it (currently) works fine with a single server, but I have to make plans for scaling. I was thinking about a simple Blue/Green deployment scheme as a base to improve reliability and to be able to add more nodes if I need to. The “standard” way would be to have a load balancer and independent workers, but since Elixir runs on the BEAM I can use distributed Erlang instead.
Is it worth it? Can I keep a load balancer and multiple endpoints to handle failing nodes?
Yes, this is possible. The main thing that might hold you back is the way you persist your data.
In a distributed system, deciding what your ‘single source of truth’ is becomes a difficult problem. Often, there is no single source of truth at all.
It is perfectly possible (and works great! Features like Phoenix Presence are written exactly for these kinds of setups) to set up a distributed Phoenix application. However, how do you manage your data? The answer to this second question is very app-specific.
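To make the Presence point concrete, here is roughly what the tracker module looks like (this is the shape produced by `mix phx.gen.presence`; the module and app names are placeholders for your own):

```elixir
# Presence tracker backed by Phoenix.PubSub. Because the underlying
# CRDT state is replicated over distributed Erlang, every connected
# node sees the same presence list without any extra coordination.
defmodule MyAppWeb.Presence do
  use Phoenix.Presence,
    otp_app: :my_app,
    pubsub_server: MyApp.PubSub
end
```

Once the nodes are connected, tracking a user on one node makes it visible in `Presence.list/1` on every other node, which is exactly the kind of feature you get “for free” from clustering.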
The data won’t be an issue: uploads will be stored on an NFS share and the content in an external Postgres database. I’m more curious about what is shared in a distributed setup, and what happens when a node fails: what do I gain by connecting the two nodes?
The recommendation is to still deploy those nodes completely independently of each other and have the load balancing work exactly as you would with any other technology. The reason to use distributed Erlang is only if you are relying on a particular library that builds on top of it, such as Phoenix.PubSub with channels and Phoenix Presence, and those libraries are already designed to handle node failures and so on.
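To illustrate what clustering buys you on top of the load balancer, here is a minimal sketch (node names, the cookie, and `MyApp.PubSub` are placeholders; `MyApp.PubSub` is the default PubSub name generated by `mix phx.new`):

```elixir
# Start each release/iex session as a named node with a shared cookie, e.g.:
#   iex --name a@10.0.0.1 --cookie secret -S mix phx.server
#   iex --name b@10.0.0.2 --cookie secret -S mix phx.server

# Connect the nodes (in production you would automate this,
# e.g. with the libcluster library):
Node.connect(:"b@10.0.0.2")
Node.list()
# => [:"b@10.0.0.2"]

# With the nodes connected, a PubSub broadcast on one node reaches
# subscribers on all nodes, so channel messages fan out cluster-wide
# regardless of which node the client's websocket landed on:
Phoenix.PubSub.subscribe(MyApp.PubSub, "room:lobby")
Phoenix.PubSub.broadcast(MyApp.PubSub, "room:lobby", {:new_msg, "hi"})
```

If a node dies, its connections simply drop from `Node.list/0` and clients reconnect through the load balancer to a surviving node; PubSub and Presence handle the membership change for you.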
I think the best learning resource that partially answers your “what do I gain by connecting nodes?” question is the Distribunomicon chapter of ‘Learn You Some Erlang’ (it uses Erlang syntax, obviously, but all the advice in there applies equally to Elixir).