Just beginning to explore Phoenix + LiveView, so apologies ahead of time if this has been answered elsewhere or the terminology is wrong.
We’re looking at building a web page using LiveView, and we’re wondering if it’s possible (and reasonable) to have some of the page updated using LiveView, but also have a sort of “back channel” whereby the client can push raw JSON up to the server and the server can push raw JSON back to the client, outside of the LiveView rendering.
As an example, we might have a page that includes the usual supplemental views like a header, navbar, sidebar, footer, etc… But the main component on the page is a canvas that the user can draw or whiteboard in. We’d like to update the supplemental views using LiveView, but not have LiveView touch the drawing component. For that, we just want to push JSON down the wire at our discretion (representing the state of the canvas) and then receive JSON as necessary from the server (representing state potentially changed by other collaborators.) We’ll do our own reconciliation of state on the client to update the whiteboard accordingly.
One likely implementation, we think, is to explicitly open our own WebSocket back to the server and use Phoenix Channels to marshal data back and forth. But is that separate WebSocket/Channel completely independent of the LiveView socket and its context? (i.e.: Do we lose the ability to receive JSON from the client and then update both the LiveView socket and our back-channel socket at the same time?)
Alternatively, we noticed that the Phoenix.LiveView.Socket page briefly mentions sharing the transport connection between regular Phoenix Channels and LiveView processes. That sounds like exactly what we’re talking about, but the documentation is a bit light there, so we were wondering if there are examples or additional resources that dig deeper into this topic.
TL;DR: On a LiveView page, is there a recommended way to have a sort of “back channel” to send and receive raw JSON between the client and server outside of the normal LiveView rendering flow?
Thanks in advance.
I’m still new, so take most of this with a grain of salt, but what are you looking to do client-side? Are you in control of everything, or are you looking to interface with an existing library that manages the communication? I don’t think that sort of library exists in the Elixir ecosystem, but it may in JS land. It would also make sense if you’re using a frontend or other backend framework and are looking for an agnostic way to share communication.
If you’re looking for more two-way communication of arbitrary data, you would probably get a very long way with JS hooks and push_event. Your server side can push data down to the client (think of when someone joins a channel on a chat server), or a JS component can emit an event that a hook uses to send data back to the server. Technically you’re still leveraging channels and PubSub, but in this scenario you’re not spinning up new channels. It’s not that you can’t take that approach; I’m just wondering if it’s the path of least resistance to start.
The Channels docs (Channels — Phoenix v1.7.3) mention broadcasts, so if it were me I’d reach for a dedicated channel if I needed to broadcast changes among all clients, like a shared whiteboard where everyone needs to see the same result. If your primary use is a single user on a single page who needs to send JSON back and forth to a component, then I’d reuse what you have instead of creating a separate connection. BEAM processes are very lightweight, but if you don’t need the overhead, I’m not sure what channels give you that you couldn’t get with hooks.
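To make the hooks + push_event idea concrete, here’s a minimal server-side sketch. All module, event, and function names here are hypothetical, not from the thread; the persistence call is a placeholder:

```elixir
defmodule MyAppWeb.WhiteboardLive do
  use MyAppWeb, :live_view

  # The JS hook sends the canvas state up with:
  #   this.pushEvent("canvas_changed", stateJson)
  def handle_event("canvas_changed", params, socket) do
    # Persist the raw JSON state however you like, e.g.:
    # MyApp.Whiteboards.save!(params)
    {:noreply, socket}
  end

  # Some other process (PubSub subscription, a collaborator's LiveView,
  # a background computation) sends this LiveView new state; forward it
  # to the client as a raw event without re-rendering the canvas markup.
  def handle_info({:remote_update, state}, socket) do
    {:noreply, push_event(socket, "canvas_updated", state)}
  end
end
```

On the client, the hook would register `this.handleEvent("canvas_updated", callback)` to receive those payloads and do its own reconciliation.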
Exactly right: hooks for sending data to custom JS components of your LiveView, and channels for a more generic interface where you want to send real-time events to the frontend.
Thanks for the commentary.
I use the example of a whiteboard/canvas because it’s the closest example to what we’re building. Basically a component where users can place other components, drag them around, resize them, connect them as if they were nodes, etc. That’s a lot of real-time user interaction that doesn’t seem appropriate for LiveView.
This component has its own state, which is easily represented in JSON. As the user updates the component, we just want to “flush” the state down to the server to persist it. That part is straightforward.
The second part involves sending updates from the server back to the client. This could be the result of a collaboration session, but (in our case) it could also be because the server has computed some values and wants to broadcast them back to the client.
It’s not entirely clear to us where, within the LiveView flow, we can just push JSON back to the client without doing a LiveView page render and having “diffs” sent down the wire (of which there wouldn’t be any, because we don’t want to update the whiteboard’s HTML on the server, only on the client).
I guess what I’m trying to say is: if the user has a LiveView page “open”, how and where does the server push an unsolicited payload back to the client?
In another world, we’d likely just have a dedicated WebSocket open that the client and server communicate back and forth on. We’re just a bit confused, within the Phoenix/LiveView world, about how to go about this using Phoenix idioms rather than opening a second, dedicated WebSocket and using that.
You’d want to use LiveView’s push_event/3 to push JSON to the client.
I’d suggest reading through the Client-server communication section of LiveView’s JS interoperability guide. There’s an example of pushing chart points to a chart component on the client. I also want to point out that phx-update="ignore" might come in handy to avoid overwriting the client-side whiteboard/canvas. See this guide for more on that: DOM patching & temporary assigns — Phoenix LiveView v0.19.1
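For the whiteboard case discussed above, the render side might look roughly like this (the hook and element names are made up for illustration):

```elixir
# phx-update="ignore" tells LiveView never to patch inside this element,
# so the client-side canvas state is left entirely alone; the "Whiteboard"
# JS hook owns everything inside the div.
def render(assigns) do
  ~H"""
  <div id="whiteboard" phx-hook="Whiteboard" phx-update="ignore">
    <canvas id="whiteboard-canvas"></canvas>
  </div>
  """
end
```

With this in place, LiveView still renders and patches the header, navbar, sidebar, etc., while push_event/3 and the hook handle the JSON traffic for the canvas.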
Great, thank you. Not sure how we overlooked that rather relevant “Client-server communication” section in the docs… We’ll go through those docs again. Sometimes things make more sense the second time through. Much appreciated.
It’s also possible to use a Phoenix socket and channels, without the LiveView part, and use a JS component that has its own internal state.
I have built one of these collaborative whiteboards/canvases with React and Phoenix.
Instead of morphing the DOM with LiveView, I pass WebSocket messages that change the state of the client. React does the reactive part of updating the UI on state changes.
I prefer this approach when the JS client is complex. The client has the state, the server knows nothing (or can have state too, like in a multiplayer game) and just passes messages.
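A rough sketch of what that channel might look like on the server (the topic and event names are assumptions, not from the post):

```elixir
defmodule MyAppWeb.WhiteboardChannel do
  use Phoenix.Channel

  # Each board gets its own topic, e.g. "whiteboard:42".
  def join("whiteboard:" <> _board_id, _params, socket) do
    {:ok, socket}
  end

  # A client sends its updated state; rebroadcast it to every other
  # subscriber on the topic (broadcast_from!/3 skips the sender).
  def handle_in("state_update", payload, socket) do
    broadcast_from!(socket, "state_update", payload)
    {:noreply, socket}
  end
end
```

The React client would join the topic with phoenix.js, push "state_update" on local changes, and update its own store when the same event arrives from other collaborators.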