Deploying liveview clustered behind a load balancer

Hi,

Potentially dumb question, I am not an expert on networks and load balancers.

When a LiveView starts there are two network calls: the initial HTTP request to the Phoenix server, and then the WebSocket connection. If I deploy to a 2-node cluster running behind a load balancer, is it possible that the initial HTTP request is routed to Node-1 and the WebSocket connection is routed to Node-2?

  • Have I basically misunderstood how this works?
  • Is it a problem?
  • Is there a standard part of the network protocols that fixes this?
  • Is there a standard load balancer setting that fixes it?
  • Is there a non-standard load balancer hack required?
  • Is there some BEAM clustering magic that routes stuff internally?

Thanks!

Tom

It’s not a problem. I have this set up in a couple of places, and provided your load balancer supports WebSockets, things just work as intended.
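For example, with nginx the main thing the load balancer needs is to forward the WebSocket upgrade headers. A minimal sketch, where the upstream name, hostnames, and ports are all illustrative:

```nginx
upstream phoenix {
  # Both cluster nodes; names and ports are made up for illustration.
  server node1.internal:4000;
  server node2.internal:4000;
}

server {
  listen 80;

  location / {
    proxy_pass http://phoenix;
    # Required for the WebSocket upgrade handshake.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}
```

With a config like this, the HTTP render and the WebSocket connection may land on different upstream nodes, and as discussed above that is fine.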

This is because LiveViews do not preserve state between the initial server-side render and the second initialization after the WebSocket connection is established. In essence, every LiveView gets initialized twice. So unless you do something unusual between these two renders, like starting a locally registered process on one node that is not visible on the other, things just work as expected, and that’s precisely my production experience.
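You can see the double initialization in `mount/3`: it runs once for the static HTTP render and again when the WebSocket connects, possibly on a different node. A minimal sketch; the module name and the `MyApp.News.recent/1` query are assumptions for illustration:

```elixir
defmodule MyAppWeb.RecentNewsLive do
  use MyAppWeb, :live_view
  require Logger

  @impl true
  def mount(_params, _session, socket) do
    # connected?/1 is false during the static HTTP render and true
    # once the WebSocket-connected process mounts. Behind a load
    # balancer, Node.self() may differ between the two calls.
    Logger.info("mount: connected?=#{connected?(socket)} node=#{Node.self()}")

    # Each mount rebuilds its state from scratch, so it does not
    # matter which node served the first request.
    # MyApp.News.recent/1 is a hypothetical query function.
    {:ok, assign(socket, :news, MyApp.News.recent(10))}
  end
end
```

Because neither mount relies on state left over from the other, the two requests are free to hit different nodes.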

5 Likes

@hubertlepicki’s answer is perfect, so I’ll just weigh in to say: we do precisely this (LiveView behind a load balancer) and it works just great. There is no need at all for the static render and the live render to happen on the same server, and no hacks are required to make that OK.

2 Likes

Not load balancer specific, but what if your LiveView page lists the 10 most recent news items, and between the server render and the WebSocket connection initializing, the list of news changes? Does the page content change for the user?

Is there any way to avoid that? Like somehow transmitting the most recent news datetime from the rendered HTML to the LiveView, so the server knows to respond with the “old” list of news?

You could store the datetime of the initial request in the LiveView session and, if the second connection happens within a certain timeframe, use it to limit the results. But this will only help for new items, not e.g. updates to existing ones. Also be aware that a reconnect half an hour down the line will start the exact same way, so you’ll likely want to fall back to fresh data after some threshold.
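One way to sketch the session idea: a plug stamps the request time into the session, both `mount/3` calls read it back, and the query ignores items newer than the stamp until it grows stale. The plug module and `MyApp.News.recent_as_of/2` are assumptions, not a known API:

```elixir
# Run in the router pipeline before live routes, so both mount/3
# calls (static and connected) see the same session value.
defmodule MyAppWeb.Plugs.StampRequestTime do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    put_session(conn, "requested_at", DateTime.utc_now())
  end
end

defmodule MyAppWeb.RecentNewsLive do
  use MyAppWeb, :live_view

  # Fall back to fresh data if the stamp is older than this,
  # e.g. after a reconnect long after the original request.
  @stale_after_seconds 30

  @impl true
  def mount(_params, session, socket) do
    as_of =
      case session["requested_at"] do
        %DateTime{} = t ->
          if DateTime.diff(DateTime.utc_now(), t) <= @stale_after_seconds,
            do: t,
            else: DateTime.utc_now()

        _ ->
          DateTime.utc_now()
      end

    # recent_as_of/2 is a hypothetical query returning items
    # inserted at or before `as_of`.
    {:ok, assign(socket, :news, MyApp.News.recent_as_of(10, as_of))}
  end
end
```

As noted above, this only pins insertions, not updates to items already in the list, and the staleness threshold decides when a reconnect gets fresh data instead.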

1 Like

Only wanted to add that this is part of the consideration around why mount/3 gets called twice: the first call might not be on the same node as the second.