Question about the "Elixir way" and needing to guarantee work

What would be the most Elixir way to handle the need to ensure processing of something across processes?

A non-Elixir example: web request comes in -> create an object in Amazon SQS -> workers poll SQS -> send an ack when the item has been processed.

This guarantees that the item was processed off a queue and can survive an entire server failure. It is potentially slow, though, because of the number of hops it takes to get the work done. Is there a way to do this in Elixir that is just as resilient but also faster, since there are fewer hops?


Most Elixir or Erlang systems that wait for work and then try to distribute it evenly are built around some kind of dispatcher:

  • work is sent to the dispatcher from the outside world.
  • the dispatcher passes this work off to the first available worker it can find.

Time spent inside the dispatcher is thus extremely short, allowing for good parallelism.
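For concreteness, here is a minimal sketch of that dispatcher pattern using plain GenServers. The module names (`Dispatcher`, `Worker`) and message shapes are made up for illustration; they are not taken from any particular library:

```elixir
defmodule Worker do
  use GenServer

  def start_link(dispatcher), do: GenServer.start_link(__MODULE__, dispatcher)

  def init(dispatcher) do
    # Tell the dispatcher we are ready to receive work.
    send(dispatcher, {:ready, self()})
    {:ok, dispatcher}
  end

  def handle_info({:work, item}, dispatcher) do
    IO.inspect(item, label: "handled by #{inspect(self())}")
    # Report back so the dispatcher can hand us the next item.
    send(dispatcher, {:ready, self()})
    {:noreply, dispatcher}
  end
end

defmodule Dispatcher do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  def dispatch(item), do: GenServer.cast(__MODULE__, {:dispatch, item})

  def init(:ok), do: {:ok, %{idle: :queue.new(), backlog: :queue.new()}}

  # New work: hand it to an idle worker if there is one, otherwise queue it.
  def handle_cast({:dispatch, item}, state) do
    case :queue.out(state.idle) do
      {{:value, pid}, idle} ->
        send(pid, {:work, item})
        {:noreply, %{state | idle: idle}}

      {:empty, _} ->
        {:noreply, %{state | backlog: :queue.in(item, state.backlog)}}
    end
  end

  # A worker became free: give it backlog work if any, otherwise park it.
  def handle_info({:ready, pid}, state) do
    case :queue.out(state.backlog) do
      {{:value, item}, backlog} ->
        send(pid, {:work, item})
        {:noreply, %{state | backlog: backlog}}

      {:empty, _} ->
        {:noreply, %{state | idle: :queue.in(pid, state.idle)}}
    end
  end
end
```

The key design point is that the dispatcher itself does almost nothing per message (one queue operation and one `send`), so it does not become a bottleneck even with many workers.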

In the case of, e.g., the Cowboy web server, a new process is started for each web request that comes in.
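A tiny sketch of that "one process per request" idea, assuming `request` is whatever piece of work just arrived (not a real Cowboy request):

```elixir
defmodule PerRequest do
  def handle(request) do
    # Each request runs in its own lightweight process, so a crash
    # in one request does not take down the others.
    Task.start(fn ->
      IO.inspect(request, label: "handled in #{inspect(self())}")
    end)
  end
end
```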

In cases like, e.g., Ecto’s database connection layer, there is a fixed number of workers, since the database only allows a limited number of concurrent connections. One of these workers is picked and receives the work.
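Here is a hedged sketch of that fixed-size pool idea using the `:poolboy` library (which Ecto has historically used under the hood for connection pooling). The pool name, worker module, and sizes below are made-up example values:

```elixir
defmodule PoolWorker do
  use GenServer

  def start_link(args), do: GenServer.start_link(__MODULE__, args)
  def init(args), do: {:ok, args}

  def handle_call({:query, sql}, _from, state) do
    # A real worker would hold an open DB connection; here we just echo.
    {:reply, {:ok, sql}, state}
  end
end

defmodule PoolExample do
  @pool_name :example_pool

  def start do
    :poolboy.start_link(
      [name: {:local, @pool_name}, worker_module: PoolWorker, size: 5, max_overflow: 0],
      []
    )
  end

  def query(sql) do
    # Check a worker out of the pool, use it, then check it back in.
    :poolboy.transaction(@pool_name, fn pid ->
      GenServer.call(pid, {:query, sql})
    end)
  end
end
```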

I believe this is pretty much the ‘Elixir way’ of handling this.

By the way, the new GenStage and Flow modules make the creation of these kinds of systems a whole lot easier.
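For example, the same demand-driven hand-off can be expressed with a GenStage producer/consumer pair; this is a minimal sketch (the counter-producer is the classic example from the GenStage docs), where consumers ask the producer for work, so backpressure comes for free:

```elixir
defmodule Producer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, 0, name: __MODULE__)

  def init(counter), do: {:producer, counter}

  # Emit exactly as many events as the consumers are currently asking for.
  def handle_demand(demand, counter) do
    events = Enum.to_list(counter..(counter + demand - 1))
    {:noreply, events, counter + demand}
  end
end

defmodule Consumer do
  use GenStage

  def start_link(_), do: GenStage.start_link(__MODULE__, :ok)

  def init(:ok), do: {:consumer, :ok, subscribe_to: [{Producer, max_demand: 10}]}

  def handle_events(events, _from, state) do
    Enum.each(events, &IO.inspect(&1, label: "consumed"))
    {:noreply, [], state}
  end
end
```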
