Network optimization: 4x WS message size reduction when using `push_event`

Slow networks are known to be an Achilles’ heel of LiveView’s architecture.

Recently, I was working on a fast-rendering map with 12,000+ points coming from the database. My server sent the data in chunks of 4,000 points, and it worked great on localhost. But once deployed, the network made it painfully slow.

I was able to reduce the WS message size by 4x (from ~400 kB to just ~100 kB) using a compression technique:

On the Elixir side,

  # JSON-encode the payload, zlib-compress it, then base64-encode
  # so it can travel as a string inside the push_event payload.
  defp compress(data) do
    data
    |> Jason.encode!()
    |> :zlib.compress()
    |> Base.encode64()
  end

and on the JS side:

const decode_base64_decompress_to_json = (base64string) => {
    // Decode base64 into a binary string (js-base64's Base64.atob),
    // then turn it into a byte array for pako.
    const binary = Base64.atob(base64string);
    const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
    // Inflate the zlib-compressed bytes back into the original JSON string.
    const inflated = pako.inflate(bytes, { to: 'string' });
    return JSON.parse(inflated);
}

I think this technique can drastically improve communication between the LiveView process and client-side JS whenever they pass medium to large amounts of data. I found the execution time of both functions, plus the increased bundle size from the JS dependencies (pako + Base64), to be negligible compared with the performance boost they provide.

I am happy to work on a PR, though I will probably need some support.

5 Likes

Interesting idea! Maybe this and other properties of the LV data flow between client and server could be represented as a pipeline of plugs.

When you talk about a “fast rendering map”, what is this exactly?

One point of view might be that there is a rough upper limit on the number of elements visible on the screen at any one time (and likely to change), so techniques that take the current client viewport into account might be one way to cut down on the traffic.

(I haven’t seen this done though!)
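As a rough sketch of that viewport idea, a bounding-box filter could trim the payload to what the client can currently see (all names and data below are invented for illustration):

```javascript
// Hypothetical viewport culling: keep only points inside the client's
// current bounding box before sending (names/data invented for illustration).
const inViewport = (point, bounds) =>
  point.lat >= bounds.south && point.lat <= bounds.north &&
  point.lng >= bounds.west && point.lng <= bounds.east;

const points = [
  { lat: 50.45, lng: 30.52 },
  { lat: 48.85, lng: 2.35 },
  { lat: 51.51, lng: -0.13 },
];
const bounds = { south: 50, north: 52, west: 25, east: 35 };

const visible = points.filter((p) => inViewport(p, bounds));
console.log(visible.length); // only the first point falls inside the bounds
```

The client would report its bounds to the server (e.g. on map move), and the server would push only the filtered subset.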

1 Like

Did you try `websocket: [compress: true]`? - Phoenix.Endpoint — Phoenix v1.7.11

not sure what the default is.

3 Likes

Thanks!

By fast-rendering map I meant a Leaflet map that adds 12,000 markers and generates popups for them. I wrote about it in detail here: Performance optimization when adding 12,000+ markers to the map that renders fast with Elixir, LiveView, and Leaflet.js - DEV Community

Yes, your solution makes a lot of sense! That’s how tiles work. But my task was to build an “overview” map where I essentially need to load all of the data from the start.

1 Like

I did not know about this option. Will check it out soon

1 Like

Added the compress option with `socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [session: @session_options], compress: true]`, but did not notice any change in WS message (with data) size.

Did you try without your manual compression? Obviously, double compression yields no benefits - but `compress: true` should do exactly what you are doing manually (at CPU/compression cost).

Yes, I tried measuring with `compress: true` in the endpoint without my compression. There are 3 WS messages weighing 430, 412, and 419 kB.

I did not notice any change in message sizes with `compress: true` enabled. Maybe I am adding it in the wrong place :confused:

I know this may sound silly, but it could be the way it’s being measured. Firefox/Chrome devtools measure both transferred and total bytes for HTTP, but only total bytes for WebSocket on my versions.

My WebSocket payload for a single request is approximately 90 kB as measured by devtools. Measuring network traffic with btop on localhost, the total transfer was 467 kB after roughly 50 requests, so compression is definitely enabled for me.

Endpoint.ex

  socket "/live", Phoenix.LiveView.Socket,
    websocket: [connect_info: [session: @session_options], log: false, compress: true]


2 Likes