Slow networks are known to be an Achilles' heel of LiveView's architecture.
Recently, I was working on a fast-rendering map with 12,000+ points coming from the database. My server sent the data in chunks of 4,000 and it worked great on localhost, but once deployed, the network made it painfully slow.
I was able to reduce the size of the WebSocket message carrying my data by a factor of 4 (from roughly 400 kB to 100 kB) with a compression technique:
On the Elixir side,
defp compress(data) do
  data
  |> Jason.encode!()
  |> :zlib.compress()
  |> Base.encode64()
end
and on the JS side:
// `pako` is named below; `Base64` is assumed to come from the js-base64
// package (any helper exposing an atob equivalent works the same way).
import pako from "pako";
import { Base64 } from "js-base64";

const decode_base64_decompress_to_json = (base64string) => {
  // Base64 -> binary string -> byte array
  const binary = Base64.atob(base64string);
  const bytes = Uint8Array.from(binary, (char) => char.charCodeAt(0));
  // zlib-inflate back to the original JSON string, then parse it
  const inflated = pako.inflate(bytes, { to: "string" });
  return JSON.parse(inflated);
};
I think this technique can drastically improve communication between the LiveView process and client-side JS whenever medium to large amounts of data are passed between them. I found the execution time of both functions and the increased bundle size from the JS dependencies (pako + Base64) to be negligible compared with the performance boost they provide.
I am happy to work on a PR, though I will probably need some support.
Interesting idea! Maybe this and other properties of the LV data flow between client and server could be represented as a pipeline of plugs.
When you talk about a “fast rendering map”, what is this exactly?
One point of view might be that there is a rough upper limit on the number of elements visible on the screen at any one time (and likely to change), so techniques that somehow take the current client viewport into account might be one way to cut down on the traffic.
Yes, your solution makes a lot of sense! That's how tiles work. But my task was to build this "overview" map, where I essentially need to load all of the data from the start.
Added the compress option to `socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [session: @session_options], compress: true]`, but did not notice any change in the size of the WS message carrying the data.
Did you try without your manual compression? Obviously, double compression yields no benefit, but `compress: true` should do exactly what you are doing manually (at a CPU/compression cost).
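For reference, a minimal sketch of what that looks like in the endpoint (the module path is an assumption on my part; the socket line itself is copied from the post above); `compress: true` asks the WebSocket transport to compress frames for you:

# lib/my_app_web/endpoint.ex (path assumed); LiveView socket with the
# transport-level compress option enabled.
socket "/live", Phoenix.LiveView.Socket,
  websocket: [connect_info: [session: @session_options], compress: true]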
I know this may sound silly, but it could be the way it's being measured. Firefox/Chrome devtools measure the transferred and total bytes for HTTP, but only report total bytes for WebSocket on my versions.
My WebSocket payload for a single request is approximately 90 kB as measured by devtools. Measuring network traffic with btop on localhost, the total transfer was 467 kB after roughly 50 requests, so compression is definitely enabled for me.