The right way to design an API that starts processes "under the hood"

I have a shared library function used in several places in my code that starts a process, and I’d like some input on the right way of implementing that for convenience and robustness.

To provide background to the question, I have some code running on Nerves on a Xilinx Zynq UltraScale+ (quad-core ARM in the same package as a large FPGA). This code needs to act on a number of interrupts provided by the programmable logic (FPGA), which are exposed to Linux userspace through generic-uio. Basically you get a file that you attempt to read, and it blocks. When the interrupt goes off, the read returns 4 bytes, and then you try to read again and it blocks until the next one. In Elixir this is joyously easy to implement - a process that sits in an infinite loop attempting to read. When it succeeds it calls a fn that was passed in at initialisation (you would expect the fn to be generating a message to the process that cares). I’ve got that process wrapped in a Task to make it OTP compatible (for supervision trees etc.)
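A minimal sketch of that read loop, assuming a module name like `UioInterrupt` and the standard `/dev/uioN` device files (neither name is from the original post):

```elixir
defmodule UioInterrupt do
  # Sketch only: a Task that blocks on the UIO file and invokes a
  # callback each time the interrupt fires.
  def start_link(dev, callback) do
    Task.start_link(fn ->
      {:ok, fd} = :file.open(dev, [:read, :raw, :binary])
      loop(fd, callback)
    end)
  end

  defp loop(fd, callback) do
    # Blocks until the interrupt fires, then yields the 4-byte count.
    {:ok, <<count::32-native>>} = :file.read(fd, 4)
    callback.(count)
    loop(fd, callback)
  end
end
```

Usage would look like `UioInterrupt.start_link("/dev/uio0", fn _ -> send(some_pid, :interrupt) end)`; note the blocking read ties up a scheduler thread, which is usually acceptable for a handful of interrupt files.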

I have several of these interrupts that relate to specific peripherals, and so I need to set it up for each peripheral. I can see a few different ways of doing this but I’m unclear which (if any) of them is the right option to both get the right error handling in case of process crash and the right ‘ergonomics’:

First option - just do start_link in the init method of the peripheral GenServer. This gives the best ergonomics for the user, and is the easiest to implement. The interrupt process is linked to the GenServer, so if either goes down they’ll both go down - that feels like the behaviour that I want, but is it OK that it isn’t under a supervisor directly?

  def init(_args) do
    me = self()
    # Interrupt is the shared helper described above (name assumed)
    Interrupt.start_link(fn _ -> send(me, :notify_tx_interrupt) end)
    {:ok, %{}}
  end

Second option - explicitly start each interrupt process under one of my supervisors. This is particularly easy in my case because I never need to address the process (I don’t need the pid); it just calls me. It would be more complicated if it was a GenServer I needed to call.
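A sketch of option 2, assuming a `UioInterrupt.start_link/2` helper like the one described earlier; the device paths, child ids, and `Peripheral.notify_*` callbacks are all made-up placeholders:

```elixir
# Each interrupt reader becomes an explicit, named child of an
# existing supervisor; the callbacks just poke the peripheral code.
children = [
  %{
    id: :tx_interrupt,
    start: {UioInterrupt, :start_link,
            ["/dev/uio0", fn _ -> Peripheral.notify_tx() end]}
  },
  %{
    id: :rx_interrupt,
    start: {UioInterrupt, :start_link,
            ["/dev/uio1", fn _ -> Peripheral.notify_rx() end]}
  }
]

Supervisor.start_link(children, strategy: :one_for_one)
```

The cost is that the wiring between an interrupt and its peripheral now lives in the supervision tree rather than next to the peripheral's own code.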

Third option - have an ‘interrupts’ supervisor, and when I ask for an interrupt it starts the process under that tree. It would then also have to link/monitor back to the caller so that it gets restarted under the right circumstances. This is probably the approach I would take if the interrupt code was in an application of its own (i.e. it was in a hex package designed to be used as a dependency). It feels like a lot of code to manage and get right for something relatively simple.
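For what it's worth, option 3 could be sketched with a DynamicSupervisor plus an explicit link back to the caller; `Interrupts.Supervisor` and `UioInterrupt.loop/2` are assumed names, not an existing API:

```elixir
defmodule Interrupts do
  # Sketch of option 3: the interrupt code owns a DynamicSupervisor,
  # and each started reader links itself back to the requesting
  # process so it dies when the caller does. :temporary stops the
  # supervisor resurrecting readers whose caller has gone away.
  def request(dev, callback) do
    caller = self()

    DynamicSupervisor.start_child(Interrupts.Supervisor, %{
      id: dev,
      start:
        {Task, :start_link,
         [fn ->
            Process.link(caller)
            UioInterrupt.loop(dev, callback)
          end]},
      restart: :temporary
    })
  end
end
```

As the post says, this is the shape you'd want if the interrupt code shipped as its own application, but it is noticeably more machinery than options 1 and 2.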

You probably want option 3, but Task.Supervisor gives you a baked-in way to launch a child linked back to the caller process, no?


Interesting. I assume you’re referring to Task.async (Task — Elixir v1.12.3)? I hadn’t thought of using that for a long-running process and just not awaiting it.

Maybe a DynamicSupervisor fits the requirements?

No, I mean Task.Supervisor.start_child

Edit: Huh, it seems like it doesn’t automatically do a one-way link… TIL.


Task.Supervisor.start_child doesn’t link to the caller, no; only to the supervisor. However, in practice that usually doesn’t matter, since if you care about linking and the results you’d use Task.Supervisor.async/3 anyway, which will link the task to the caller.
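To illustrate the difference (the supervisor name and the functions being run are placeholders):

```elixir
# Supervised but NOT linked to the caller; keeps running if the
# caller dies.
{:ok, _pid} =
  Task.Supervisor.start_child(MyApp.TaskSup, fn -> read_loop() end)

# Supervised AND linked to the caller, but the returned Task struct
# is meant to be awaited, which doesn't fit a run-forever loop.
task = Task.Supervisor.async(MyApp.TaskSup, fn -> do_work() end)
result = Task.await(task)
```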


I can see how a one-way linkage in a supervised task would be useful… like in this case, but there are also lots of times where I have launched a task that, say, reads the DB, and in test there’s a race condition where the task can be detached from the DB checkout; it would be nice if you could get a supervised task that dies when its caller does without being a full-on async.