When using `Phoenix.PubSub.subscribe` with a large number of subscribers to the same topic (3k+), is there a single process pushing all those events out sequentially? If so, what would it take to distribute that work across a pool of publisher processes, with subscribers assigned to them by hash, so that fulfilling subscriptions is better spread over the system cores? Has anyone done this before?
Ultimately we’re trying to significantly reduce latency at scale by letting the BEAM schedulers put multiple cores to work satisfying a large number of subscriptions.
Sounds like something that can’t easily be done without altering the Phoenix PubSub code base.
It’s not a single process globally, but it is a single process per node, which dispatches the message to all the subscribers on that node (implemented using `Registry.dispatch/4`). `Registry.dispatch` actually does have a `:parallel` option, which kicks in once the registry is partitioned, i.e. when it stores registered processes across multiple ETS table shards.
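If it helps to see that mechanism in isolation, here is a minimal sketch of a partitioned `Registry` with parallel dispatch, outside of Phoenix.PubSub entirely. The registry name and topic are made up for illustration:

```elixir
# Start a duplicate-key registry with one partition (ETS shard) per scheduler.
{:ok, _} =
  Registry.start_link(
    keys: :duplicate,
    name: MyApp.DispatchRegistry,
    partitions: System.schedulers_online()
  )

# Subscribers register themselves under a topic key.
Registry.register(MyApp.DispatchRegistry, "room:42", [])

# A publisher dispatches to all subscribers of the key; with `parallel: true`
# the callback may run concurrently across partitions.
Registry.dispatch(
  MyApp.DispatchRegistry,
  "room:42",
  fn entries ->
    for {pid, _value} <- entries, do: send(pid, {:broadcast, "hello"})
  end,
  parallel: true
)
```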
Phoenix PubSub does also support custom dispatchers though, which Phoenix Channels, for example, use to fastlane messages sent to external clients. Fastlaning here means encoding the message only once per serialization format and fanning out the already-encoded message directly to the sockets, instead of fanning out to channel processes, encoding the message individually in each one, and letting each channel process forward the encoded message to its socket.
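As a rough illustration of that encode-once idea (not Phoenix's actual channel code), a custom dispatcher is just a module exposing `dispatch/3`, which receives the registry entries, the sender, and the message. The serializer stored as subscription metadata and its `encode!/1` callback are assumptions for the sketch:

```elixir
defmodule MyApp.FastlaneDispatcher do
  # Hypothetical encode-once dispatcher, in the spirit of Phoenix's channel
  # fastlane. Entries are {pid, metadata} pairs; here the metadata is assumed
  # to be a serializer module chosen at subscribe time.
  def dispatch(entries, from, message) do
    # Encode the message a single time per serializer, not once per subscriber.
    encoded_by_serializer =
      entries
      |> Enum.map(fn {_pid, serializer} -> serializer end)
      |> Enum.uniq()
      |> Map.new(fn serializer -> {serializer, serializer.encode!(message)} end)

    for {pid, serializer} <- entries, pid != from do
      send(pid, {:socket_push, Map.fetch!(encoded_by_serializer, serializer)})
    end

    :ok
  end
end

# Subscribers attach their serializer as metadata (assumes the :metadata
# option of Phoenix.PubSub.subscribe/3):
#   Phoenix.PubSub.subscribe(MyApp.PubSub, "room:42", metadata: MySerializer)
#
# Publishers pass the dispatcher explicitly:
#   Phoenix.PubSub.broadcast(MyApp.PubSub, "room:42", %{event: "msg"}, MyApp.FastlaneDispatcher)
```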
So you might be able to enable parallel sending by `Registry` by configuring a custom dispatcher.
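One hedged sketch of that direction, parallelizing the fan-out inside the dispatcher itself rather than relying on `Registry`'s `:parallel` flag; the module name and chunk size are arbitrary, and whether this beats the plain sequential `send/2` loop depends on subscriber count and message size:

```elixir
defmodule MyApp.ParallelDispatcher do
  # Hypothetical dispatcher that fans out to subscribers in parallel by
  # chunking the entry list across short-lived tasks.
  @chunk_size 500

  def dispatch(entries, from, message) do
    entries
    |> Stream.chunk_every(@chunk_size)
    |> Task.async_stream(
      fn chunk ->
        for {pid, _meta} <- chunk, pid != from, do: send(pid, message)
      end,
      ordered: false
    )
    |> Stream.run()

    :ok
  end
end

# Usage (pubsub name and topic are illustrative):
#   Phoenix.PubSub.broadcast(MyApp.PubSub, "room:42", %{event: "msg"}, MyApp.ParallelDispatcher)
```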