I am writing a Phoenix channel that receives certain events from mobile applications.
When an event is received, it has to be stored in a data store and published to a message bus:
```elixir
def handle_in(message, payload, socket) do
  event = build_event(message, payload, socket)

  store(event)
  publish(event)

  # To be able to test the side-effects.
  {:reply, :ok, socket}
end
```
Those two operations are independent of each other, so I want to run them in parallel. At the same time, I do not want the WebSocket process to crash if either of them crashes (-> nolink). Also, the order in which these events are broadcast has to be preserved, so handle_in can't finish before both operations have finished (-> await).
All in all, my tentative implementation is:
```elixir
def handle_in(message, payload, socket) do
  event = build_event(message, payload, socket)

  [
    task(fn -> store(event) end),
    task(fn -> publish(event) end)
  ]
  |> Enum.each(&Task.await/1)

  # To be able to test the side-effects.
  {:reply, :ok, socket}
end

def task(fun) do
  Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fun)
end
```
However, does this really need a Task.Supervisor? Would Task.start be enough? Note that on shutdown, when the endpoint supervisor takes down its children, the await calls will wait for these two tasks anyway.
Can you justify Task.Supervisor in this use case?
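One concrete difference worth noting: Task.start/1 returns only a bare pid, with no %Task{} struct, so there is nothing to pass to Task.await/1 -- it cannot satisfy the await requirement on its own. Task.async/1 would give an awaitable struct but links the task to the channel, so a crash would take the WebSocket down with it. A minimal standalone sketch of this (using an unnamed supervisor instead of MyApp.TaskSupervisor):

```elixir
# async_nolink returns a %Task{} that can be awaited, without linking
# the task's fate to the caller.
{:ok, sup} = Task.Supervisor.start_link()

task = Task.Supervisor.async_nolink(sup, fn -> :stored end)
result = Task.await(task)
# result == :stored

# Task.start only hands back a pid: fire-and-forget, nothing to await.
{:ok, pid} = Task.start(fn -> :published end)
```

So the combination "awaitable + not linked" is exactly what async_nolink provides, and it requires a Task.Supervisor.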
As for ordering, the WS processes them in order because Phoenix uses one single process per user and topic, and Kafka partitions guarantee order too.
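One caveat about the "no crash" goal: even with async_nolink, Task.await/1 exits the caller if the task itself crashed (the link is gone, but await turns the :DOWN message into an exit). If the channel must survive a failing store or publish, Task.yield/2 plus Task.shutdown/1 is the usual alternative. A hedged sketch, again with an unnamed supervisor:

```elixir
{:ok, sup} = Task.Supervisor.start_link()

task = Task.Supervisor.async_nolink(sup, fn -> raise "boom" end)

result =
  case Task.yield(task, 5_000) || Task.shutdown(task) do
    {:ok, value}    -> {:ok, value}       # task finished normally
    {:exit, reason} -> {:error, reason}   # task crashed; caller survives
    nil             -> {:error, :timeout} # task did not reply in time
  end
```

Here the channel process keeps running and can still return {:reply, :ok, socket} (or an error reply) regardless of what happened inside the task.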