I don’t believe there is anything stopping you from compressing manually before sending, and decompressing after receiving. The easiest way would be to use compressed_data = :erlang.term_to_binary(yourdata, compressed: 9) to compress and :erlang.binary_to_term(compressed_data) to decompress.
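As a minimal round-trip sketch of that approach (the payload here is made up; any Erlang term works):

```elixir
# Made-up payload with lots of repetitive structure.
data = %{rows: Enum.to_list(1..10_000)}

# `compressed: 9` runs zlib at the highest level inside the term encoding.
compressed = :erlang.term_to_binary(data, compressed: 9)

# No option needed on the way back; the binary format is self-describing.
^data = :erlang.binary_to_term(compressed)

# For repetitive data the compressed form is considerably smaller.
byte_size(compressed) < byte_size(:erlang.term_to_binary(data))
```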
There is also the :zip module, but I believe it works with files rather than in-memory data, which would require extra boilerplate for creating and cleaning up temporary files.
Well, I guess I can do that, but I’d like a way to compress and send the right headers to the browser so that it does the decompression automatically (the same way it does with regular HTTP and the right header).
However, architecturally speaking, it might make more sense to keep the PubSub messages light and simply send notifications that new data is available (possibly including the URL), leaving the heavy lifting to the standard request/response infrastructure (where HTTP caching can be employed if necessary).
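As a sketch of that pattern (the topic name, URL, and channel callback are all made up for illustration), the PubSub message carries only a pointer to the data:

```elixir
# Publisher side: broadcast only a lightweight pointer, not the payload.
Phoenix.PubSub.broadcast(MyApp.PubSub, "reports:42", {:report_ready, "/api/reports/42"})

# Channel side: relay the notification to the browser, which then fetches
# the data over plain HTTP — where caching and compression already work.
def handle_info({:report_ready, url}, socket) do
  push(socket, "report_ready", %{url: url})
  {:noreply, socket}
end
```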
I don’t see how just sending a notification and then fetching the data is better than directly sending all the information the frontend needs. In terms of design it’s simpler: it avoids a whole extra round trip, it doesn’t hit the DB again, and there is no need for additional caching.
So far this design has been a blast for our application, and even sending 2 MB of data fairly frequently isn’t affecting performance much. In fact, sending the same data uncompressed over HTTP takes a lot longer, I guess because of the TCP handshakes and round trips that you don’t have with WebSockets once the connection is established.
A nice thing about compressing all the data of a large PubSub message into a binary is that large binaries (over 64 bytes) are reference-counted and shared across processes rather than copied, unlike everything else but atoms, so you can get a substantial speed boost as well if broadcasting to a LOT of processes. ^.^
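A sketch of that compress-once, broadcast-to-many idea (process count and payload are made up; the point is that the same binary is shared, not copied, when sent to each pid):

```elixir
# Compress the payload once, up front.
payload = :erlang.term_to_binary(Enum.to_list(1..50_000), compressed: 9)

parent = self()

# Spawn many receivers; each decompresses its shared reference on demand.
pids =
  for _ <- 1..100 do
    spawn(fn ->
      receive do
        bin when is_binary(bin) ->
          _term = :erlang.binary_to_term(bin)
          send(parent, :done)
      end
    end)
  end

# Sending the refc binary does not copy its bytes per process.
Enum.each(pids, &send(&1, payload))

# Simple synchronisation so the sketch completes deterministically.
for _ <- pids, do: receive(do: (:done -> :ok))
```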
Cowboy does seem to support permessage-deflate on WebSocket connections, but since Phoenix channels are meant to be transport-agnostic, selective message compression probably wasn’t, for the time being, a high-enough-value feature to justify the effort.
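If whole-socket compression (rather than selective per-message compression) is enough, Phoenix does expose Cowboy’s permessage-deflate support through the transport options. A sketch, assuming a typical generated endpoint and socket module:

```elixir
# In your endpoint module (MyAppWeb.UserSocket is the usual generated name).
# `compress: true` asks the underlying Cowboy transport to negotiate
# permessage-deflate with browsers that support it.
socket "/socket", MyAppWeb.UserSocket,
  websocket: [compress: true],
  longpoll: false
```

With this, the browser handles decompression transparently, much like Content-Encoding does for regular HTTP responses.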
It boggles my mind, though, that the browser WebSocket API doesn’t expose the built-in compression functionality.