Force reconnect all sockets from a node

We have a multi-node Phoenix setup for PubSub, running on Kubernetes (Phoenix v1.3.0, phoenix_pubsub v1.0.2).

Often stopping a node produces an error:

#PID<0.20353.67> running TransporterWeb.PubSubEndpoint terminated
Server: [retracted]:80 (http)
Request: HEAD /https://[retracted]/
** (exit) an exception was raised:
    ** (Plug.Conn.NotSentError) a response was neither set nor sent from the connection
        (plug) lib/plug/adapters/cowboy/handler.ex:42: Plug.Adapters.Cowboy.Handler.maybe_send/2
        (plug) lib/plug/adapters/cowboy/handler.ex:16: Plug.Adapters.Cowboy.Handler.upgrade/4
        (cowboy) /app/deps/cowboy/src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4

This doesn’t seem to be harmful in itself, but it does generate noise in our logs. I think it happens because some requests have not completed when the node shuts down. I’d like to drain the node before stopping it. For the HTTP endpoint I’m already using

Is there a way I can force a reconnect for all clients in a particular node?

EDIT: The error also sometimes happens even without needing to stop the instance. I currently do not have a way to easily reproduce it. With ~10K connections (mostly websocket, few hundred longpoll) it happens about 10-50 times a day.

Send a “disconnect” message to all of them?

MyApp.Endpoint.broadcast("users_socket:" <> user_id, "disconnect", %{})

Or any other message which would be handled on the client by reconnecting.
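For that broadcast to terminate a user’s channels, the socket needs an `id`. A minimal sketch, assuming a `user_id` assign is set in `connect/2` (module and assign names are illustrative, not from the original post):

```elixir
defmodule MyAppWeb.UserSocket do
  use Phoenix.Socket

  def connect(%{"user_id" => user_id}, socket) do
    {:ok, assign(socket, :user_id, user_id)}
  end

  # Phoenix subscribes each socket to this topic, so broadcasting
  # a "disconnect" event to "users_socket:<user_id>" closes all of
  # that user's channels on that socket.
  def id(socket), do: "users_socket:#{socket.assigns.user_id}"
end
```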

That sends a message to the user on all nodes, but I want to partition by node. If User 1 has two sockets open, one on Node A and one on Node B, and I want to stop Node A, then I want to disconnect only the sockets connected to Node A, not both.

Have you tried Phoenix.PubSub.direct_broadcast/4?
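Something along these lines might work. This is a sketch, not tested against your setup: the node name, PubSub server name, and `user_id` are assumptions, and it relies on the socket transport handling a `%Phoenix.Socket.Broadcast{}` with a `"disconnect"` event, which is what `Endpoint.broadcast/3` sends under the hood:

```elixir
# Target only the node being drained (name is illustrative).
node_to_drain = :"app@node-a"
topic = "users_socket:" <> user_id

Phoenix.PubSub.direct_broadcast(
  node_to_drain,
  MyApp.PubSub,  # the PubSub server name configured in your Endpoint
  topic,
  %Phoenix.Socket.Broadcast{topic: topic, event: "disconnect", payload: %{}}
)
```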

Will look into it. Thanks.

I also noticed that the error is somewhat unrelated. It happens when bots request paths that are not exposed by the Endpoint (we have a custom Endpoint). I added a Plug that returns 404 in those cases, which should help with that.
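For reference, a catch-all plug along these lines can be placed at the end of the Endpoint’s plug pipeline, so unmatched requests get a response instead of raising `Plug.Conn.NotSentError` (module name is illustrative):

```elixir
defmodule MyApp.NotFoundPlug do
  import Plug.Conn

  def init(opts), do: opts

  # Any request that reaches this plug (i.e. nothing earlier in the
  # pipeline sent a response) gets an explicit 404.
  def call(conn, _opts) do
    send_resp(conn, 404, "Not Found")
  end
end
```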