PhoenixLiveView push_event with binary payload

I am trying to use the LiveView JS interop with compressed data. For example:

params = %{test1: "value", test2: "value2"}
{:ok, compressed} = params |> Jason.encode! |> :brotli.encode
socket_event = push_event(socket, "my_hook", compressed)

I get a FunctionClauseError when trying to send this, since push_event only accepts a map as the third argument. Is the only supported way to do this right now to create a wrapper map like:

params = %{test1: "value", test2: "value2"}
{:ok, compressed} = params |> Jason.encode! |> :brotli.encode
payload = %{data: compressed}
socket_event = push_event(socket, "my_hook", payload)

In other words, is there a way to define a custom serializer for push_event, or is creating the wrapper the only way this will work right now?

What exactly do you want to do? If you just want to save bandwidth, you can enable diff compression in LiveView:
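A minimal sketch of what that configuration might look like, assuming a standard Phoenix endpoint (MyApp/MyAppWeb and @session_options are placeholder names); the websocket compress option enables Cowboy's permessage-deflate compression for the LiveView socket:

```elixir
# In lib/my_app_web/endpoint.ex. With `compress: true`, Cowboy
# compresses WebSocket frames transparently, so LiveView diffs
# are smaller on the wire without any application-level changes.
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options], compress: true]
```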

If you want to send arbitrary binary data, you will have to Base64 encode it. LiveView uses JSON for serialization, and JSON does not support binary data that is not a valid UTF-8 string.
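For example, the compressed payload from the original snippet could be wrapped like this (a sketch; "data" and "my_hook" are names carried over from the question, and the push_event call assumes it runs inside a LiveView):

```elixir
params = %{test1: "value", test2: "value2"}
{:ok, compressed} = params |> Jason.encode!() |> :brotli.encode()

# Base64 turns the arbitrary binary into a JSON-safe string,
# so the event payload can pass through LiveView's serializer.
payload = %{data: Base.encode64(compressed)}
push_event(socket, "my_hook", payload)
```

On the client, the hook would then Base64-decode payload.data (e.g. with atob) before decompressing it.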

I’m using hooks to send data that is rendered by a JavaScript library. The rendering output is ignored by LiveView, so it isn’t re-drawn on every UI change and is only modified by push_event. The data I am sending can be large on the first request, and I want to lower the time it takes for the first render. I’ve tried a few things, like Msgpax vs JSON vs Erlang terms, with various compressions against each serialization, and JSON + Brotli is the winner in terms of size, implementation effort, and serialization/compression overhead.

Using the compress option is a good suggestion, thank you for that. However, it looks like Cowboy isn’t planning on supporting Brotli at the WebSocket level: Implement gzip and Brotli in Erlang? or as dirty NIF · Issue #1092 · ninenines/cowboy · GitHub. I’ve done some measurements, Brotli wins for my use case, and I would prefer to use it. compress is now my fallback if the wrapper map becomes too cumbersome. I’d prefer a built-in configuration option similar to compress, or a custom serializer limited to push_events that could implement Phoenix.Transports.Serializer.

I had a similar thought, but then I realized that in my typical environment the transfer time of the packet is dwarfed by the downstream processing time in JavaScript. So I am not sure it will result in a visible net gain.

Good point. It probably makes more sense to use a stream for this instead of a push_event, so rendering can happen as the data comes in rather than waiting for all of the data before starting to render. Thank you for the perspective.
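A rough sketch of that incremental approach using LiveView's stream/3 (assuming LiveView 0.18+; :points, the item shape, and the {:batch, ...} message are hypothetical names, not from the thread):

```elixir
# Initialize an empty stream on mount, then append batches as the
# server produces them, so the client can start rendering before
# the full dataset has arrived.
def mount(_params, _session, socket) do
  {:ok, stream(socket, :points, [])}
end

def handle_info({:batch, points}, socket) do
  # Each inserted item needs an :id; stream/3 appends by default.
  {:noreply, stream(socket, :points, points)}
end
```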