Clustered LiveView app web socket

The state from the initial mount is lost after the render completes, just as it would be for a request to a controller. The JS then initiates a new request, which becomes the websocket connection managed by the LiveView process on the server. For recovery after a disconnect, the client first rebuilds the initial state via mount, and then there is code that resends any form content to update the state on the server.
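For reference, the two-phase lifecycle described above is visible inside `mount/3` via `connected?/1` (both real LiveView APIs); the module name and the expensive-load helper below are hypothetical:

```elixir
defmodule MyAppWeb.DashboardLive do
  use MyAppWeb, :live_view

  # mount/3 runs twice: once for the static HTTP render (disconnected),
  # and again when the client JS establishes the websocket connection.
  def mount(_params, _session, socket) do
    socket =
      if connected?(socket) do
        # Stateful phase: this process stays alive for the session,
        # so state assigned here persists across events.
        assign(socket, :report, load_expensive_report())
      else
        # Dead render: anything built here is discarded once the
        # HTML has been sent to the client.
        assign(socket, :report, :loading)
      end

    {:ok, socket}
  end

  # Placeholder for the costly work the thread is about not repeating.
  defp load_expensive_report, do: :expensive_result
end
```

After a connection drop, the client reconnects and `mount/3` runs again in the connected branch, which is why any expensive work there is repeated today.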


Yes, that state is lost today, but does it have to stay that way?

After dispatching the rendered HTML, the code could tuck the state away somewhere the second call could retrieve and resurrect it instead of rebuilding it from scratch. How the second request is handled proves that it is possible to do extra things after dispatching the rendered HTML, including staying alive for a while longer. I can think of several mechanisms for transferring the state from the process serving the first request to the process serving the second, including ones that will cross machine boundaries if required.
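To make the idea concrete, here is a minimal sketch of one such mechanism (entirely my own illustration, not anything LiveView provides): the process serving the static render stashes its built state under a random token, which would travel to the client in the rendered HTML, and the websocket process claims it instead of rebuilding. This version is single-node only; crossing machine boundaries would additionally need a distributed registry or similar.

```elixir
defmodule StateStash do
  @moduledoc """
  Hypothetical handover store. `stash/1` is called after the static
  render; `claim/1` is called from the connected mount. Entries expire
  after a short TTL so abandoned renders do not leak memory.
  """
  use GenServer

  @ttl :timer.seconds(30)

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  def stash(state) do
    token = Base.url_encode64(:crypto.strong_rand_bytes(16))
    :ok = GenServer.call(__MODULE__, {:stash, token, state})
    token
  end

  # Claiming removes the entry, so state is handed over at most once;
  # returns nil if the token expired or was already claimed.
  def claim(token), do: GenServer.call(__MODULE__, {:claim, token})

  @impl true
  def init(stash), do: {:ok, stash}

  @impl true
  def handle_call({:stash, token, state}, _from, stash) do
    Process.send_after(self(), {:expire, token}, @ttl)
    {:reply, :ok, Map.put(stash, token, state)}
  end

  def handle_call({:claim, token}, _from, stash) do
    {state, rest} = Map.pop(stash, token)
    {:reply, state, rest}
  end

  @impl true
  def handle_info({:expire, token}, stash), do: {:noreply, Map.delete(stash, token)}
end
```

The at-most-once claim and the TTL are deliberate: they bound how long the state is retained and avoid two connections resurrecting the same state.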

I’m happy to skin this cat another way if my use-case is that unique. I simply thought some might find it useful if, instead of assuming mount/3 will be cheap enough to repeat, the platform did what it could to retain the state built for the first request as the initial state for the second request and subsequent stateful websocket processing.

You’re correct that there could be means to retain data, but it comes with all the complexities of caching: where to store it, how long to store it, and so on. It’s also slightly problematic to talk about a first and second render, because it isn’t two connections, it’s n connections. A single static render might connect over websocket multiple times in the case of connection drops. So do you retain the state of the static render up until a potential connection dropout, which could happen at any later time, not just close to the initial render?

You can also read more on the complexities here:

Thank you!

I’ve come at this from such a different angle that I didn’t expect “Allow LiveViews to be adopted” to be related, yet it’s the exact same topic I’ve been pondering. The issue appears to remain open, but with so little activity after the initial discussion, the interest in addressing it may no longer be present or pressing enough.

Shall I jump on there with how I thought it could be done, to have it critiqued and maybe probe current interest levels?

Just because there’s no activity doesn’t mean there’s no interest – it’s rather priorities I’d imagine. If you think you have something to contribute that’s probably the best place to do so.

Unless I’m gravely mistaken, I don’t see how what I’ve proposed suffers from the uncertainties and complications you’ve mentioned.

It remains to be seen whether it’s worth anything, but I went ahead and described my Proposed Session Handovers for LiveView on the issue.

Sorry to say, @LostKobrakai, but that devolved into an outright disaster. I bowed out. That issue and the problem I discovered and solved for myself must have none of the overlap we both assumed there to be.

I don’t think there’s no overlap. You’re just fine with making tradeoffs the LV team is not fine with making – at least that’s my pov from following this from the sidelines :slight_smile:

Could be that; could be we haven’t had enough shared experiences for them to assign any weight to what I reported as actually happening in production clusters right now, while they worry about what could happen in clusters when programmers don’t think. It’s been interesting, though.

Thanks for engaging and encouraging me. Next time don’t just lurk on the sidelines, yank me out of the fire earlier, OK? :grinning_face: