How do I reduce RAM usage with Phoenix LiveView socket?

Hey all,

I’m wondering how to reduce some of the RAM usage in my LiveView application (and I’d appreciate tips for diagnosing which processes and function calls are actually hogging RAM). My use case: I have a set of geographical features stored in Postgres with the great geo_postgis library. These features are broken up into sectors, each of which is defined by a rectangular polygon. When the user moves the map on the webpage, a hook sends an event to the LiveView process with the bounding box of the map area in view.

The event handler then calculates which sectors are in view and fetches the features by their associated sector ID from an in-memory Cachex cache (this is obviously a large portion of the RAM usage, but there are only around 20,000 features in total, about 17 MB as CSV). The handler prepends the sector IDs to the :loaded_sector_ids list assign in the socket (so it doesn’t send them multiple times) and pushes the data to the client with push_event/3 inside an Enum.reduce over the socket. The code is below:

  def handle_event("load-data", %{"bounds" => bounds}, socket) do
    socket =
      bounds
      |> MyApp.get_intersecting_features() # loads data in [{sector_id, sector_data}, ...] format
      |> Enum.reject(fn {sector_id, _sector_data} -> sector_id in socket.assigns.loaded_sector_ids end)
      |> Enum.reduce(socket, fn {sector_id, sector_data}, s ->
        s
        |> push_event("data", %{sector_id: sector_id, data: sector_data})
        |> update(:loaded_sector_ids, fn sector_ids -> [sector_id | sector_ids] end)
      end)

    {:noreply, socket}
  end

My guess is that the heavy memory usage is caused by the large immutable lists of features being copied into the socket in the Enum.reduce call, but I’m not sure how to restructure this to reduce that. Would it make sense to run this in a more recursive manner? On receiving the bounds, I could load the list of sector_ids that need to be sent to the client, then send a message to the current process with the remaining list of sector_ids to load and push, repeating until that list is empty.
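For concreteness, here is a minimal sketch of that self-messaging idea. The helper names MyApp.get_intersecting_sector_ids/1 and MyApp.get_sector_data/1 are made up for illustration, standing in for the Cachex lookups described above:

```elixir
# Sketch only: get_intersecting_sector_ids/1 and get_sector_data/1 are
# hypothetical helpers, not real functions from my app.
def handle_event("load-data", %{"bounds" => bounds}, socket) do
  sector_ids =
    bounds
    |> MyApp.get_intersecting_sector_ids()
    |> Enum.reject(&(&1 in socket.assigns.loaded_sector_ids))

  send(self(), {:push_sectors, sector_ids})
  {:noreply, socket}
end

def handle_info({:push_sectors, []}, socket), do: {:noreply, socket}

def handle_info({:push_sectors, [sector_id | rest]}, socket) do
  # Only one sector's feature list is held on this process's heap at a time;
  # the remaining queue is just a small list of IDs.
  data = MyApp.get_sector_data(sector_id)
  send(self(), {:push_sectors, rest})

  {:noreply,
   socket
   |> push_event("data", %{sector_id: sector_id, data: data})
   |> update(:loaded_sector_ids, &[sector_id | &1])}
end
```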

Looking forward to hearing suggestions, and thanks in advance!


Are you looking to reduce peak RAM usage, or long-term RAM usage? Asked another way: is your current problem that after the push_event occurs, you’re still seeing high RAM usage even though the data should no longer be held in memory?

If looking to reduce max RAM, then I’m not sure what you could do besides possibly streaming the data from server->client and never allowing large chunks to be passed around.

If seeing high RAM usage after the push_event occurs, then you could try a very rough :erlang.garbage_collect() to see if it solves your problem. This is the least elegant way to handle it, but it’s useful for diagnostics. If you see that helps with memory usage, then you could evaluate spawning a process or Task to grab/push the data, or you could leverage a more aggressive garbage collection threshold so that it runs GC more often.
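On the diagnostics side, a rough way to find the top memory-consuming processes from an IEx shell attached to the running node might look like the snippet below (in production, the :recon library is a safer choice for this kind of inspection):

```elixir
# List the five processes using the most memory on the node.
Process.list()
|> Enum.map(fn pid -> {pid, Process.info(pid, :memory)} end)
|> Enum.reject(fn {_pid, info} -> is_nil(info) end)  # process may have exited
|> Enum.sort_by(fn {_pid, {:memory, bytes}} -> bytes end, :desc)
|> Enum.take(5)
|> Enum.each(fn {pid, {:memory, bytes}} ->
  IO.puts("#{inspect(pid)}  #{div(bytes, 1024)} KiB  #{inspect(Process.info(pid, :registered_name))}")
end)
```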

I wrote a (now very old) blog post about memory usage with WebSockets / Channels. A good bit of it is now obsolete due to process hibernation, but it covers diagnosis and a few different solutions.


send(socket.transport_pid, :garbage_collect)

The :fullsweep_after option might be of some help; it needs Phoenix >= 1.6.3 and Erlang/OTP 24.


Yea, this is a good point. There are two processes here: the transport and the LiveView Channel. You would want to force GC in both if you wanted to fully ensure cleanup. The snippet above is specifically for the transport.


Hi @kartheek and @sb8244, thanks for the quick replies!

I tried adding send(socket.transport_pid, :garbage_collect) to the event handler last night, and it has reduced some of the RAM usage (down roughly 30%). I was also looking into the :fullsweep_after option, but it’s not obvious to me where to set this option for this LiveView. A quick pointer would be appreciated :smiley:

Because max RAM usage is a slight concern, I will also look into streaming methods. Will update with results after doing a comparison, hopefully later this week.

Many thanks for your suggestions!


Ah I think I found it - in my endpoint.ex file, I added the fullsweep_after: 0 option. Does this look right?

socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [session: @session_options], fullsweep_after: 0]



I typically set fullsweep_after at the VM level because it seems like a universally good thing in the environments I’ve deployed to. I do that by adding -env ERL_FULLSWEEP_AFTER 20 to my vm.args file. This would apply to ALL processes in your VM, such as the transport process and the Channel process.

However, setting it at the WebSocket level like you’ve done is completely valid and a good place to start. I’ve found a value > 0 is a good idea; otherwise you will run a fullsweep after every single message.
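To make that concrete, the endpoint option from earlier in the thread with a nonzero value would look something like this (20 just mirrors the ERL_FULLSWEEP_AFTER value I use, not a tuned number):

```elixir
socket "/live", Phoenix.LiveView.Socket,
  websocket: [
    connect_info: [session: @session_options],
    fullsweep_after: 20
  ]
```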