Phoenix Presence across LiveViews

On the app I’m working on, I need a list of connected users on all screens. Simple enough: I call Presence.track in an on_mount hook that every LiveView uses, then retrieve the list of users with Presence.list.
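Roughly, the setup looks like this (module and topic names simplified; the hook is attached via on_mount in the router’s live_session):

defmodule MyAppWeb.PresenceHook do
  import Phoenix.Component, only: [assign: 3]
  import Phoenix.LiveView, only: [connected?: 1]

  alias MyAppWeb.Presence

  @topic "users:online"

  def on_mount(:track_user, _params, _session, socket) do
    if connected?(socket) do
      # Tracks the LiveView process itself, so every mount creates a
      # presence entry and every unmount removes one.
      {:ok, _ref} =
        Presence.track(self(), @topic, socket.assigns.current_user.id, %{
          online_at: System.system_time(:second)
        })

      Phoenix.PubSub.subscribe(MyApp.PubSub, @topic)
    end

    {:cont, assign(socket, :online_users, Presence.list(@topic))}
  end
end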

Everything works great except…

Every time I switch LiveViews, a presence_diff is triggered, which is understandable but not desirable for my use case. It’s not only wasteful, but sometimes the user list “flickers”, adding and removing users, depending on how fast the unmount/mount is.

My question is then, is there a library that wraps Phoenix.Presence and has some sort of sensible timeout to declare a user as “untracked”, or even better, am I doing something wrong and this can be solved with vanilla Phoenix?

Hmm, that’s an interesting question. Just to clarify, by “every time I switch LiveView”, do you mean a push_navigate or JS.navigate?

I’m not aware of any built-in way to smooth/optimize presence diffs by setting a window in which a leave event followed soon after by a join event for the same id cancels it out. That would be pretty neat, but I imagine it would require closer coupling of Phoenix.Presence and Phoenix.LiveView.

Off the top of my head, things to consider…

  1. Phoenix.LiveView offers an optional terminate/2 lifecycle callback, so it should be possible to inspect the socket before it closes to see whether it was annotated with a navigation to another LiveView.
  2. If so, notify the Presence channel/GenServer to expect this id to leave but join back soon after, and store that navigation information.
  3. Phoenix.Presence offers an optional handle_metas/4 callback that separates out the join and leave diffs, which could then be updated in light of the navigation information sent by LiveView (see the sketch just below).
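For step 3, a rough sketch of where that logic could live, assuming a :my_app otp_app and a MyApp.PubSub server (the bookkeeping from steps 1 and 2 is only hinted at in comments):

defmodule MyAppWeb.Presence do
  use Phoenix.Presence,
    otp_app: :my_app,
    pubsub_server: MyApp.PubSub

  def init(_opts), do: {:ok, %{}}

  # handle_metas/4 hands us joins and leaves already separated, so a
  # leave could be held back here whenever LiveView has announced an
  # imminent re-join for that key.
  def handle_metas(topic, %{joins: joins, leaves: leaves}, _presences, state) do
    for {user_id, %{metas: metas}} <- joins do
      Phoenix.PubSub.local_broadcast(MyApp.PubSub, topic, {:join, user_id, metas})
    end

    for {user_id, %{metas: metas}} <- leaves do
      # A real implementation would consult the stored navigation info
      # before broadcasting this leave.
      Phoenix.PubSub.local_broadcast(MyApp.PubSub, topic, {:leave, user_id, metas})
    end

    {:ok, state}
  end
end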

I’ll stay tuned for any libraries, but here are some ideas too.

Presence uses Tracker under the hood to track the PID of any arbitrary process on a given PubSub topic. If you’re using track/3, it takes the channel pid and the topic from the socket. If you use track/4, however, you can pass any arbitrary PID or topic. The behavior you’re noticing is a channel connection closing when crossing live_session boundaries, which terminates the LiveView process, and then Tracker does its thing because you’ve used track/3 (with the socket) or track/4 (with self()).

A solution might be to spawn a process that can exist independently of the channel’s process, with a sensible expiration time. I don’t know if there are reasons not to (probably, and I’ll look to others for the pitfalls), but a naive implementation might use a DynamicSupervisor to create one lightweight “ping” GenServer per active user. The GenServer could have both a timeout, so it terminates after some period of inactivity, and an explicit way to stop it immediately.

Your LiveView mount callback (or an after-join event handler) could find or create the applicable GenServer and store its PID on the socket. It would then call track/4 using the GenServer’s PID. Apart from this wrinkle, it would be almost indistinguishable from how your code is currently structured, because you’re subscribing to and handling the messages from PubSub as normal.
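A very rough sketch of such a proxy process, assuming a Registry (MyApp.ProxyRegistry) and a DynamicSupervisor (MyApp.ProxySupervisor) already in the supervision tree (all names made up):

defmodule MyApp.PresenceProxy do
  use GenServer

  # How long the process survives without hearing from any LiveView.
  @idle_timeout :timer.seconds(5)

  def start_link(user_id),
    do: GenServer.start_link(__MODULE__, user_id, name: via(user_id))

  # Find-or-create: called from mount; returns the pid to use with track/4.
  def find_or_start(user_id) do
    case DynamicSupervisor.start_child(MyApp.ProxySupervisor, {__MODULE__, user_id}) do
      {:ok, pid} -> pid
      {:error, {:already_started, pid}} -> pid
    end
  end

  # Each mounted LiveView pings periodically to keep the proxy alive.
  def ping(user_id), do: GenServer.cast(via(user_id), :ping)

  # Explicit, immediate shutdown (e.g. on logout).
  def stop(user_id), do: GenServer.stop(via(user_id), :normal)

  defp via(user_id), do: {:via, Registry, {MyApp.ProxyRegistry, user_id}}

  @impl true
  def init(user_id) do
    # Track this long-lived pid instead of the LiveView's pid.
    {:ok, _ref} = MyAppWeb.Presence.track(self(), "users:online", user_id, %{})
    {:ok, user_id, @idle_timeout}
  end

  @impl true
  def handle_cast(:ping, user_id), do: {:noreply, user_id, @idle_timeout}

  @impl true
  def handle_info(:timeout, user_id), do: {:stop, :normal, user_id}
end

When the proxy exits, normally or via stop/1, Tracker removes its presence entry as usual.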

Have I done this before? Not at all. It’s just what comes to mind immediately as a solution, so I’ll be interested to hear other responses too.


Thanks @codeanpeace and @imkleats for your suggestions; they really helped me :wink:

I’m still working out the details, but the path I’m moving forward with is implementing a custom tracker:

defmodule MyTracker do
  # Custom Phoenix.Tracker; start_link/1 and init/1 omitted for brevity.
  # Events.*, PubSub.pub/2 and root_presence/0 are app-specific helpers.
  require Logger

  def handle_diff(diff, state) do
    state = Map.put_new(state, :leave_tasks, %{})

    state =
      Enum.reduce(diff, state, fn {topic, {joins, leaves}}, state ->
        state =
          Enum.reduce(joins, state, fn {user_id, meta}, state ->
            {task, state} = pop_in(state, [:leave_tasks, user_id])

            if is_nil(task) do
              # No pending leave: this is a genuine join, broadcast it.
              event = %Events.UserJoined{user_data: meta}
              PubSub.pub(topic, event)
            else
              # The user re-joined within the grace window: cancel the
              # pending leave broadcast instead of announcing a join.
              Logger.info("Kill leave task for user_id: #{user_id}")
              Task.shutdown(task, :brutal_kill)
            end

            state
          end)

        Enum.reduce(leaves, state, fn {user_id, meta}, state ->
          event = %Events.UserLeft{user_data: meta}

          # I have more than one scope of presence. True means this is
          # the "root" presence.
          if root_presence() do
            task = Task.async(fn -> broadcast_user_left(topic, event) end)
            put_in(state, [:leave_tasks, user_id], task)
          else
            broadcast_user_left(topic, event)
            state
          end
        end)
      end)

    {:ok, state}
  end

  def broadcast_user_left(topic, event) do
    # Grace window: if the same user joins again within 3 seconds, the
    # join branch above kills this task before the leave goes out.
    Process.sleep(3000)
    PubSub.pub(topic, event)
  end
end

This solves the flickering issue, but it added a new one that I’m too tired to tackle today: when you navigate away from a LiveView while there are pending leave tasks, the user has to wait for the task to run (in this case, 3 seconds :P).

I need to somehow capture LiveView’s shutdown event and send that to the tracker. Is that possible? A “before_shutdown” hook?

If OP is not crossing live_session boundaries, maybe there’s a way to identify the current_user and attach Presence to the transport_pid instead of doing it in every LiveView. Something like this (haven’t tested it myself):

In lib/myapp_web/endpoint.ex, replace the default transport implementation with your own module:

socket "/live", MyAppWeb.UserSocket, websocket: [connect_info: [session: @session_options]]

In this module, grab the user_id and attach Phoenix.Presence to the transport_pid:

defmodule MyAppWeb.UserSocket do
  use Phoenix.LiveView.Socket

  alias MyAppWeb.Presence

  @impl true
  def connect(params, socket, _connect_info) do
    current_user = get_user!(params)

    # self() here is the transport process, which stays alive across
    # live navigation, unlike the individual LiveView processes.
    {:ok, _} =
      Presence.track(self(), "live", current_user.id, %{
        online_at: inspect(System.system_time(:second))
      })

    {:ok, socket}
  end

  @impl true
  def id(_socket), do: nil

  defp get_user!(params) do
    # implementation
  end
end

I’d suggest going with a plain old channel connection to track, instead of tracking LV processes. That one will stay open across live navigation.
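Rough sketch of that idea, assuming a "presence:lobby" channel declared on the socket with channel/2, joined once from app.js, and a user_id assign on the socket (all names made up):

defmodule MyAppWeb.PresenceChannel do
  use Phoenix.Channel

  alias MyAppWeb.Presence

  @impl true
  def join("presence:lobby", _params, socket) do
    # Track only after the join reply has been sent.
    send(self(), :after_join)
    {:ok, socket}
  end

  @impl true
  def handle_info(:after_join, socket) do
    # The channel process lives as long as the socket connection, so
    # live navigation no longer produces leave/join churn.
    {:ok, _ref} =
      Presence.track(socket, socket.assigns.user_id, %{
        online_at: System.system_time(:second)
      })

    {:noreply, socket}
  end
end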


You can have multiple LiveViews on the same page.

Personally I use that to have one sticky LiveView that can display (Bootstrap) toasts while navigating inside the “main” LiveView.

Maybe you could do Presence in the sticky one.

Thanks

Thanks! I’m gonna try this out :wink:

Interesting, how can you do that? How does your routing work? Could you elaborate on this idea?

I went digging because I didn’t remember the details :slight_smile:

1/ I use live_render/3 to render my “sticky” LiveView in my layout, above the usual @inner_content:

<%= live_render(@conn, GreatExampleWeb.StickyLive) %>
<%= @inner_content %>

2/ In this LiveView I use the layout option in mount/3:

    {:ok, socket, layout: {GreatExampleWeb.Layouts, :sticky}}

3/ I have this layout, sticky.html.heex:

<%= @inner_content %>

Voilà. I don’t think there’s more to it. There’s no routing or anything; the other LiveView acts as before.
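For completeness, the sticky module itself might look roughly like this (toast and presence markup omitted):

defmodule GreatExampleWeb.StickyLive do
  use GreatExampleWeb, :live_view

  @impl true
  def mount(_params, _session, socket) do
    {:ok, socket, layout: {GreatExampleWeb.Layouts, :sticky}}
  end

  @impl true
  def render(assigns) do
    ~H"""
    <div id="sticky-toasts">
      <%!-- toast (or presence) markup goes here --%>
    </div>
    """
  end
end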

You could just do layout: false in mount.

This should be live_render(@conn, GreatExampleWeb.StickyLive, sticky: true)

You could just do layout: false in mount.

Ahah, thanks! I was just asking myself that upon rereading my code.

This should be live_render(@conn, GreatExampleWeb.StickyLive, sticky: true)

So it seems sticky: true is helpful if the LiveView is nested. I guess I hadn’t considered that it was possible. Is there any advantage to it?

awesome, I got a lot of stuff to play with today :slight_smile:

I ended up using a custom Socket (as you suggested), but calling Tracker directly rather than using Presence.

I didn’t use Presence since I don’t need to receive the changes in the socket process, only in the LiveView. So I track the user in the Socket and then subscribe to events in the LiveView.
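For anyone landing here later, roughly what that final shape looks like, assuming a Phoenix.Tracker named MyApp.Tracker in the supervision tree whose handle_diff/2 broadcasts over PubSub (like MyTracker above), and a "users:online" topic:

# Inside the custom socket's connect/3 (cf. the UserSocket above):
def connect(params, socket, _connect_info) do
  current_user = get_user!(params)

  {:ok, _ref} =
    Phoenix.Tracker.track(MyApp.Tracker, self(), "users:online", current_user.id, %{
      online_at: System.system_time(:second)
    })

  {:ok, socket}
end

# Inside any LiveView that renders the list:
def mount(_params, _session, socket) do
  if connected?(socket) do
    Phoenix.PubSub.subscribe(MyApp.PubSub, "users:online")
  end

  {:ok, assign(socket, :online_users, Phoenix.Tracker.list(MyApp.Tracker, "users:online"))}
end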

Thanks to everyone!
