We have a multi-node Phoenix setup for PubSub on Kubernetes (Phoenix v1.3.0, phoenix_pubsub v1.0.2).
Stopping a node often produces an error like this:
#PID<0.20353.67> running TransporterWeb.PubSubEndpoint terminated
Server: [retracted]:80 (http)
Request: HEAD /https://[retracted]/
** (exit) an exception was raised:
** (Plug.Conn.NotSentError) a response was neither set nor sent from the connection
(plug) lib/plug/adapters/cowboy/handler.ex:42: Plug.Adapters.Cowboy.Handler.maybe_send/2
(plug) lib/plug/adapters/cowboy/handler.ex:16: Plug.Adapters.Cowboy.Handler.upgrade/4
(cowboy) /app/deps/cowboy/src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4
This doesn't seem to be harmful in itself, but it does generate noise in our logs. I suspect it happens because some requests are still in flight when the node goes down, so I'd like to drain the node before stopping it. For the HTTP endpoint I'm already using https://github.com/Financial-Times/k8s_traffic_plug
Is there a way I can force a reconnect for all clients in a particular node?
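For Channels specifically, Phoenix has a built-in remote-disconnect mechanism that might help here: if the socket module's `id/1` callback returns a topic string, the endpoint subscribes each transport process to that topic, and broadcasting a `"disconnect"` event on it terminates the socket; the standard Phoenix JS client then reconnects on its own, landing on another node once this one stops receiving traffic. A sketch of what I mean, where the module name is taken from the log above and the `user_id` assign is only an assumed example:

```elixir
defmodule TransporterWeb.UserSocket do
  use Phoenix.Socket

  # channel/transport declarations elided ...

  def connect(_params, socket), do: {:ok, socket}

  # Returning a binary (instead of nil) identifies the socket, so it
  # can be disconnected remotely via a broadcast on this topic.
  # The :user_id assign here is a placeholder for however you identify clients.
  def id(socket), do: "user_socket:#{socket.assigns.user_id}"
end

# From anywhere in the cluster: terminate all sockets with this id.
# Clients using the standard phoenix.js socket will auto-reconnect.
TransporterWeb.PubSubEndpoint.broadcast("user_socket:" <> user_id, "disconnect", %{})
```

One caveat: the broadcast goes through PubSub cluster-wide, so it disconnects that id on every node, not just the one being drained. To drain a single node you'd still need some way to enumerate which socket ids are connected locally before broadcasting.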
EDIT: The error also occurs occasionally even when no instance is being stopped. I don't currently have a way to reproduce it reliably. With ~10K connections (mostly websocket, a few hundred longpoll) it happens about 10-50 times a day.