if you want to deploy your a, b and c applications as [a, b] + [c] or [a] + [b] + [c], you will have very little trouble in doing so due to their inherent design and built-in communication.
So each app could be deployed on different servers, and they would communicate through distributed Erlang messaging instead of HTTP. So far so good. The article goes on to describe:
If c requires more instances or is the application with a user-specific concern, then it is reasonable to isolate it and deploy multiple instances of c.
This is actually very interesting, as it is a common use case in microservices: you start multiple instances of one microservice and put a load balancer in front of them. When the microservices communicate over HTTP, this is fairly simple to realize. But I was wondering how this is done in Elixir/Erlang, where all the nodes are connected directly and do not communicate through HTTP.
So we would have one server hosting application A and two servers hosting application C. How are these systems then connected, and how are the “rpc calls” to node C load balanced?
The only thing I could find was the Erlang pool module (Erlang -- pool), which seems to go in this direction. But I could not find a real example where all of this is implemented.
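For reference, a minimal sketch of what the pool module offers, called from Elixir. The pool name and module/function names are illustrative; :pool boots its nodes from the hosts listed in a .hosts.erlang file and picks the node it judges least loaded:

```elixir
# Start the pool; :pool.start/1 boots slave nodes from .hosts.erlang
# and returns the list of started nodes.
nodes = :pool.start(:myapp)

# Ask the pool which node currently has the lowest reported load.
node = :pool.get_node()

# Spawn a process on the least-loaded node in the pool.
# ModuleC/:handle_request are hypothetical names for application C.
pid = :pool.pspawn(ModuleC, :handle_request, [self()])
```

Note that :pool distributes spawns by load, which is closer to a load balancer than to a checkout-style resource pool.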
Picking a random node from Node.list() and issuing an :rpc.call to it is a simple way to distribute load across a set of nodes. If your Node.list() contains nodes that do not have the MFA you want to run, you could write a custom function that returns a random node from the group of nodes that do have your function on them.
There is probably some more official way but this is just an idea to get the conversation flowing.
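The idea above could be sketched like this; the module name is made up, and falling back to a local call when no other nodes are connected is an assumption:

```elixir
defmodule LoadBalance do
  @moduledoc """
  Naive load distribution: run an MFA on a randomly chosen
  connected node via :rpc.call/4.
  """

  def call_random(mod, fun, args) do
    case Node.list() do
      # No remote nodes connected: run locally (assumed fallback).
      [] -> apply(mod, fun, args)
      # Pick any connected node at random and run the MFA there.
      nodes -> :rpc.call(Enum.random(nodes), mod, fun, args)
    end
  end
end
```

Because Node.list() only returns currently connected nodes, a node that goes down simply drops out of the candidate set and rejoins when it reconnects.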
Haha, that is actually a very nice and simple solution, and it can be wrapped in a helper function. It would also handle cases where a node goes down and comes back up.
Of course it does not take the current load of each system into consideration, but a standard round-robin load balancer does not do this either.
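If you want round robin rather than random selection, one way to sketch it is to keep a counter in a small Agent and rotate through the node list (the module name here is illustrative):

```elixir
defmodule RoundRobin do
  use Agent

  # Holds a monotonically increasing counter used to rotate targets.
  def start_link(_opts \\ []) do
    Agent.start_link(fn -> 0 end, name: __MODULE__)
  end

  # Returns the next node in rotation from the given list.
  def next_node(nodes) when nodes != [] do
    i = Agent.get_and_update(__MODULE__, fn i -> {i, i + 1} end)
    Enum.at(nodes, rem(i, length(nodes)))
  end
end
```

Like the random version, this still ignores per-node load; it only guarantees an even distribution of requests.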
I don’t see how a pool manager applies to the problem of load balancing. A pool manager is used over a pool of resources when usage of each resource is exclusive: you check a resource out and hold it until your request completes. With load balancing, by contrast, each node can typically handle many simultaneous requests, and the load balancer just distributes them.
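The distinction can be made concrete with a sketch, using poolboy-style names purely for illustration (the pool name, module, and function are assumptions, not from the thread):

```elixir
defmodule Contrast do
  # Pool manager: one caller gets exclusive use of a worker,
  # which must be checked back in when the request completes.
  def via_pool(request) do
    worker = :poolboy.checkout(:worker_pool)

    try do
      GenServer.call(worker, {:handle, request})
    after
      :poolboy.checkin(:worker_pool, worker)
    end
  end

  # Load balancer: just pick a target node; that node can serve
  # many such calls concurrently, nothing is checked out.
  def via_balancer(request) do
    :rpc.call(Enum.random(Node.list()), ModuleC, :handle_request, [request])
  end
end
```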