Phoenix Channels horizontal scalability

Hi Everyone,

I have a question regarding the horizontal scalability of Phoenix Channels.
We are currently using phoenix channels to deliver realtime information to our app’s frontend.
A large portion of this data is a continuous flow of updates from an external system. Something like GPS updates.
Right now, everything is running smoothly with a few hundred concurrent users on two small servers (using the PG2 adapter).
We are preparing the software to deliver 100 times the data to a few tens of thousands of users.
However, a specific subset of users will only need a specific subset of the data.

If I understand correctly, if we scale out to 10 servers, every server will need to receive and process all of that data, because the users’ sockets are spread across all instances of the Phoenix app.
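
One way to keep each user’s stream small is to scope channel topics to the data subset a client needs. A minimal sketch (the module, topic, and event names here are hypothetical); note that with the PG2 adapter a broadcast still reaches every node running the PubSub, so this limits per-client traffic rather than per-node fan-out:

```elixir
defmodule MyAppWeb.GpsChannel do
  use Phoenix.Channel

  # Each client joins only the topic for its own subset,
  # e.g. "gps:region_42", so it never receives other regions' updates.
  def join("gps:" <> region, _params, socket) do
    {:ok, assign(socket, :region, region)}
  end
end

# Wherever the external feed is ingested, broadcast per region
# instead of on one firehose topic:
#
#   MyAppWeb.Endpoint.broadcast("gps:" <> region, "position", %{lat: lat, lng: lng})
```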

So my question is: isn’t there a point where horizontal scaling stops paying off because the data load is too high? Would it be a good idea to create separate clusters of the Phoenix app, each serving a subset of the data to the subset of users that needs it? Or is there an easier solution I’m missing?


Probably. You can find that point with some load testing. Tsung supports WebSockets, so you can model your users’ interactions with it.
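
For reference, a Tsung WebSocket session looks roughly like this (the socket path and the join payload are assumptions about your endpoint; check them against the Tsung WebSocket documentation before running):

```xml
<session name="phoenix-ws" probability="100" type="ts_websocket">
  <request>
    <!-- Connect to the Phoenix socket endpoint (path is an assumption) -->
    <websocket type="connect" path="/socket/websocket"></websocket>
  </request>
  <request>
    <!-- Join a channel topic; the payload follows the Phoenix channel wire format -->
    <websocket type="message">{"topic":"gps:region_42","event":"phx_join","payload":{},"ref":"1"}</websocket>
  </request>
</session>
```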

Notably, Phoenix also scales vertically extremely well. You’ve probably seen this:

The core concern is sensible: the Erlang distribution scheme does have limits to how far you can scale a cluster horizontally, because by default every node connects to every other node in a full mesh (10 nodes, however, should be fine). That said, the load you can handle on a reasonable number of servers should last you a long time.


I just made a playground for this, take a look:



A proper way to load test, though, would be not Docker on a laptop but a couple of VMs in the cloud … Just in case it comes in handy, I have a demo libcluster + DigitalOcean + Terraform project. It can be modified to support Phoenix, and then something like Tsung can be used to actually load test it.
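
For a small test cluster like that, libcluster can wire the nodes together. A minimal config sketch using the Epmd strategy (node names and IPs are placeholders):

```elixir
# config/config.exs
config :libcluster,
  topologies: [
    test_cluster: [
      strategy: Cluster.Strategy.Epmd,
      config: [hosts: [:"app@10.0.0.1", :"app@10.0.0.2"]]
    ]
  ]
```

With `Cluster.Supervisor` started in your supervision tree using these topologies, the nodes form one Erlang cluster and the PG2 PubSub adapter distributes broadcasts across them.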

Also might still come in handy.