This can be used in various ways. One is that the same GenServer can block multiple processes in handle_call by not replying immediately (i.e. returning {:noreply, state} from handle_call). Those processes then block, waiting for replies that you can formulate later and send with GenServer.reply/2. You do need to remember the from ref of each client that's waiting on the call, however.
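A minimal sketch of what that looks like (module, function, and message names here are made up for illustration):

```elixir
defmodule Gatekeeper do
  use GenServer

  # Client API
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  # Blocks the caller until the server decides to release it.
  def wait(server, timeout \\ :infinity), do: GenServer.call(server, :wait, timeout)

  # Releases all currently blocked callers with the given result.
  def release_all(server, result), do: GenServer.cast(server, {:release_all, result})

  # Server callbacks
  @impl true
  def init(:ok), do: {:ok, %{waiting: []}}

  @impl true
  def handle_call(:wait, from, state) do
    # Don't reply now; remember the caller's `from` ref so we can reply later.
    {:noreply, %{state | waiting: [from | state.waiting]}}
  end

  @impl true
  def handle_cast({:release_all, result}, state) do
    # Reply to every blocked caller at once.
    Enum.each(state.waiting, &GenServer.reply(&1, result))
    {:noreply, %{state | waiting: []}}
  end
end
```

Every process that calls Gatekeeper.wait/2 blocks until something calls Gatekeeper.release_all/2, at which point they all receive the same reply.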
Yep, that would make it better indeed. Combined with good observability (OpenTelemetry + Honeycomb or OpenObserve), it would make troubleshooting and finding bottlenecks relatively easy.
Wasn’t aware that you can actually choose to respond later to a GenServer call, interesting, thanks for that tidbit (you + @D4no0 + @zachallaun). Definitely overlooked that.
Maybe there’s a place for something in-between Elixir’s OTP helpers and Oban. Your scenario sounds like a good candidate.
This is a very common technique and changes how you think about concurrency in Elixir. It serves a function very similar to promises in other languages.
If you search GitHub for GenServer.reply you will find examples from all around the ecosystem, including top projects like LiveView, Phoenix, Ecto, various HTTP clients, database connectors, etc. This technique is all over the place.
What’s super cool about it is that the process that responds to the call doesn’t have to be the GenServer process at all.
A GenServer can act as a “dispatcher” of calls to other processes, and those processes can reply directly to the waiting caller.
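A rough sketch of that shape, with illustrative names, where the GenServer only hands the from ref off to a Task and never replies itself:

```elixir
defmodule Dispatcher do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  def run(server, fun), do: GenServer.call(server, {:run, fun}, :infinity)

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call({:run, fun}, from, state) do
    # Spawn the work elsewhere; the Task (not this GenServer) replies to the caller.
    Task.start(fn ->
      GenServer.reply(from, fun.())
    end)

    {:noreply, state}
  end
end
```

The caller blocks inside GenServer.call as usual, but the reply arrives from the Task's pid, not the Dispatcher's.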
You can improve this by starting the Tasks under a DynamicSupervisor with restart: :temporary, and monitoring the Task’s pid instead of linking, so you get notified when the process finishes or crashes (and with what exit value/reason).
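A hedged sketch of that variant, assuming a DynamicSupervisor registered as MyApp.TaskSup is already running in the supervision tree (all names here are illustrative):

```elixir
# In the supervision tree (e.g. application.ex):
# {DynamicSupervisor, name: MyApp.TaskSup, strategy: :one_for_one}

def handle_call({:run, fun}, from, state) do
  child_spec = %{
    id: Task,
    start: {Task, :start_link, [fn -> GenServer.reply(from, fun.()) end]},
    restart: :temporary
  }

  # The Task is linked to the DynamicSupervisor, not to us; we only monitor it.
  {:ok, pid} = DynamicSupervisor.start_child(MyApp.TaskSup, child_spec)
  ref = Process.monitor(pid)

  # Remember who is waiting on which task so the :DOWN message can be handled.
  {:noreply, Map.put(state, ref, from)}
end

def handle_info({:DOWN, ref, :process, _pid, reason}, state) do
  {from, state} = Map.pop(state, ref)

  # A :normal exit means the Task already replied via GenServer.reply/2.
  # On a crash, unblock the caller with the exit reason instead.
  if from && reason != :normal, do: GenServer.reply(from, {:error, reason})

  {:noreply, state}
end
```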
Further, if you need retries, you can use a different restart strategy on the supervisor and introduce an additional process whose job is to start and monitor the Tasks, and to provide responses in case they finally crash after N restarts and the job can’t be completed.
Then, if you want to limit concurrency, you can set max_children on the DynamicSupervisor and queue up incoming calls somewhere (the new process from the paragraph above can do it) when capacity is exceeded. At that point you’ve built yourself an in-memory queue system with a pool of N workers, retries, and error handling.
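A sketch of the capacity-limiting part, again with made-up names: DynamicSupervisor.start_child/2 returns {:error, :max_children} when the limit is hit, and the dispatcher can park the job in an in-memory queue (here Erlang’s :queue) until a slot frees up:

```elixir
# In the supervision tree: cap the pool at 10 concurrent Tasks.
# {DynamicSupervisor, name: MyApp.TaskSup, strategy: :one_for_one, max_children: 10}

def handle_call({:run, fun}, from, state) do
  spec = {Task, fn -> GenServer.reply(from, fun.()) end}

  case DynamicSupervisor.start_child(MyApp.TaskSup, spec) do
    {:ok, pid} ->
      Process.monitor(pid)
      {:noreply, state}

    {:error, :max_children} ->
      # Capacity exceeded: hold the job in an in-memory queue and start it
      # later, e.g. when a monitored Task goes :DOWN and frees a slot.
      {:noreply, %{state | queue: :queue.in({fun, from}, state.queue)}}
  end
end
```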
So in my example, the call chain is Caller -> DelayedServer -> Task. The Task is linked to DelayedServer, so if it dies, the DelayedServer dies, and Caller is unblocked.
If we introduce the DynamicSupervisor, will the call chain look like this:
Caller -> DelayedServer -> new DynamicSupervisor -> Task?