Is it possible to enforce that a given user can have at most one Phoenix websocket instance open at a time?

So I’m working on a small game and I want to enforce that each logged-in user can only have one websocket instance open at a time. I’d prefer to do the check at the socket level instead of the channel level so that I don’t have to repeat the check in every channel.

So far I’ve tried to create a new unique Registry and register in the connect/3 callback, but that doesn’t work because connect/3 is not called from the user socket process (by that I mean it is not called from the transport_pid process; I’m actually not sure where it is called from, but I do know that the calling process is not persistent, so the registry doesn’t keep the registration).
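For reference, the attempt looked roughly like this (the Registry name and the auth helper are placeholders, not code from my actual app):

# In the application supervision tree:
#   {Registry, keys: :unique, name: MyApp.SocketRegistry}

def connect(%{"token" => token}, socket, _connect_info) do
  # Hypothetical auth helper standing in for however the user is identified
  {:ok, user_id} = MyApp.Auth.verify(token)

  case Registry.register(MyApp.SocketRegistry, user_id, nil) do
    {:ok, _owner} ->
      {:ok, assign(socket, :user_id, user_id)}

    {:error, {:already_registered, _pid}} ->
      :error
  end
end

# The problem: connect/3 runs in a short-lived process rather than the
# socket/transport process, so the registration disappears as soon as that
# process exits and the next connection never sees it.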

Any ideas on how to implement said check?


I would also look to the connect/3 of UserSocket to implement this check…

I tried to use Registry at first, but as you mentioned, Registry is not the right solution here.

So I tried my own…

defmodule Api3d.Demo do
  # Quick demo: a named GenServer that keeps user_id => uuid registrations in its state
  use GenServer

  def start_link(_args), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)
  def get_state(), do: GenServer.call(__MODULE__, :get_state)
  def register(id, value), do: GenServer.call(__MODULE__, {:register, id, value})

  def init(_args) do
    {:ok, %{}}
  end

  def handle_call(:get_state, _from, state), do: {:reply, state, state}

  def handle_call({:register, id, value}, _from, state) do
    state = Map.put(state, id, value)
    {:reply, state, state}
  end
end

And in the socket

  def connect(_params, socket, _connect_info) do
    # Hard-coded user id, just for the demo
    user_id = 1

    uuid = UUID.uuid4()
    socket = assign(socket, :uuid, uuid)

    # Inspect the registry state before and after registering this connection
    IO.puts "---->"
    IO.inspect Api3d.Demo.get_state()
    Api3d.Demo.register(user_id, uuid)
    IO.inspect Api3d.Demo.get_state()

    {:ok, socket}
  end

And here it’s ok, because the GenServer keeps the right state across connections (each new connection sees the previous registrations).

So I would roll my own Registry-like GenServer (with trap_exit, :DOWN handling, etc.). I would probably use an ETS table instead of a Map, and monitor the sockets (to catch the :DOWN message) to ensure only one connection per user_id. Something along the lines of the sketch below.
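A minimal sketch of that idea (module and table names are made up, and it only covers registration and cleanup):

defmodule MyApp.SocketGuard do
  use GenServer

  def start_link(_args), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # Returns :ok if the user_id was free, {:error, :already_connected} otherwise
  def register(user_id, pid), do: GenServer.call(__MODULE__, {:register, user_id, pid})

  @impl GenServer
  def init(_args) do
    Process.flag(:trap_exit, true)
    table = :ets.new(:socket_guard, [:set, :protected, read_concurrency: true])
    {:ok, %{table: table, monitors: %{}}}
  end

  @impl GenServer
  def handle_call({:register, user_id, pid}, _from, %{table: table, monitors: monitors} = state) do
    if :ets.insert_new(table, {user_id, pid}) do
      # First connection for this user_id: monitor the owning process
      ref = Process.monitor(pid)
      {:reply, :ok, %{state | monitors: Map.put(monitors, ref, user_id)}}
    else
      {:reply, {:error, :already_connected}, state}
    end
  end

  @impl GenServer
  def handle_info({:DOWN, ref, :process, _pid, _reason}, %{table: table, monitors: monitors} = state) do
    # Free the slot when the monitored process dies
    {user_id, monitors} = Map.pop(monitors, ref)
    if user_id, do: :ets.delete(table, user_id)
    {:noreply, %{state | monitors: monitors}}
  end
end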


Hmmm, how would you detect that the UserSocket process went down so you can evict that key from the custom Registry?

I monitor the process, so that I get a :DOWN message when the channel/socket goes down…

Here is an example of a channel monitor I am using…

defmodule ChannelMonitor do
  use GenServer
  require Logger

  @name __MODULE__

  # Started under the application's supervision tree
  def start_link(args \\ %{}), do: GenServer.start_link(__MODULE__, args, name: @name)

  def monitor_channel(pid, channel_info), do: GenServer.cast(@name, {:monitor, pid, channel_info})

  @impl GenServer
  def init(args) do
    Process.flag(:trap_exit, true)
    {:ok, args}
  end

  @impl GenServer
  def handle_cast({:monitor, pid, channel_info}, state) do
    Logger.debug(fn -> "Received channel info: #{inspect channel_info}" end)
    Process.monitor(pid)
    state = Map.put(state, pid, channel_info)
    {:noreply, state}
  end

  @impl GenServer
  def handle_info({:DOWN, _ref, :process, pid, status}, state) do
    channel_info = Map.get(state, pid)
    Logger.debug(fn -> "DOWN caught! #{inspect channel_info} #{inspect status}" end)

    # Clean up when the channel dies
    if channel_info do
      {id, uuid} = channel_info

      id
      |> String.to_integer()
      |> World.get_worker()
      |> World.leave(uuid)

      notify(%{type: :game_left, payload: %{id: id, uuid: uuid}})
    end

    state = Map.delete(state, pid)

    {:noreply, state}
  end
end

and in channel join…

ChannelMonitor.monitor_channel(self(), {id, uuid})
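In context, the join callback looks more or less like this (topic and payload shapes are just placeholders):

def join("game:" <> id, %{"uuid" => uuid}, socket) do
  # Hand this channel's pid to the monitor so it can clean up on :DOWN
  ChannelMonitor.monitor_channel(self(), {id, uuid})
  {:ok, assign(socket, :uuid, uuid)}
end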

I don’t need to rely on the terminate callback to clean things up; the cleanup is managed outside the dying process by a dedicated GenServer.

This should work the same when monitoring the socket.

Do note that all of these enforce one connection per server, not one connection, period. One connection, period, is a lot trickier if you’re running more than one server.


Yes, but that would also be the case when using Registry. This is just a simple case with no distribution in mind.

BTW, given that Phoenix.Presence is able to keep a distributed count of users… I would check whether it can do this for me (ensure only one connection for a given id).
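Maybe something like this at channel join (assuming a MyAppWeb.Presence module and a shared topic; keep in mind Presence is eventually consistent across nodes, so this is not a hard guarantee):

def join("game:lobby", _params, socket) do
  user_key = to_string(socket.assigns.user_id)

  # Reject the join if this user is already tracked on the topic
  if Map.has_key?(MyAppWeb.Presence.list("game:lobby"), user_key) do
    {:error, %{reason: "already connected"}}
  else
    send(self(), :after_join)
    {:ok, socket}
  end
end

def handle_info(:after_join, socket) do
  {:ok, _ref} =
    MyAppWeb.Presence.track(socket, to_string(socket.assigns.user_id), %{
      online_at: System.system_time(:second)
    })

  {:noreply, socket}
end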

Also, I would not be happy if I opened a browser somewhere and forgot to close it… it would block any new connections until closed.

But isn’t the channel process a different process than the socket process? My goal is to monitor the socket process cleanly, although it’s starting to feel like it isn’t worth the effort. @benwilson512 brings up a good point that once you add multiple servers it becomes much more complicated (I don’t envision that being a real issue for this particular project, but it does make it a bit less interesting to investigate).


I just put that code up as an example… but it would be the same for a socket; I would just start monitoring at connect/3 instead of channel join.

I too don’t like to make this kind of check, mainly because an open browser can block access. I prefer to check that one player cannot connect twice to the same game.

Monitoring in connect/3 doesn’t work because it is not called from a persistent process; the monitor would receive a :DOWN message immediately.

Anyway, for now I’m registering in the channel instead of the socket. It’ll work well enough for my tech demo; it’s really just meant to be a lightweight workaround for not requiring a password with an account. In that respect it’s similar to IRC.
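Something along these lines, where MyApp.GameRegistry is a placeholder for a unique Registry started in the supervision tree:

def join("game:" <> game_id, _params, socket) do
  # join/3 runs in the channel process itself, so the registration lives
  # exactly as long as the channel and is released automatically when it dies
  case Registry.register(MyApp.GameRegistry, {game_id, socket.assigns.user_id}, nil) do
    {:ok, _owner} ->
      {:ok, socket}

    {:error, {:already_registered, _pid}} ->
      {:error, %{reason: "already connected"}}
  end
end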
