OTP architecture advice

Hi there!
Which approach would you choose for the following:
The app receives event notifications from an external source. Whenever a notification is received, the app needs to collect data from an external API. After the data is received, the API call has to be repeated after 10 seconds, then again after 20 seconds, then again after 40 seconds, and so on. In other words, each notification should trigger N calls, with each call made after 10 x 2^n seconds.
Notifications occur randomly, at a random rate. If at some point an API call results in a stop command, no further calls should be made.

What would you use, and how would you structure the application? Supervised Tasks, GenStage, GenServer? Would you spawn a new process for each call, or would you track timings in a single process? Could you provide an architecture example?

Thanks!

As usual, I think it depends. If I’m understanding the problem correctly though, what I might start with is a DynamicSupervisor for the notification handler that spawns a new GenServer for each notification. Each GenServer could then make its API call and use Process.send_after to queue up the next call in x seconds. If it receives a stop, that one GenServer can terminate without affecting the others.
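
A minimal sketch of that layout (module names like Demo.WorkerSupervisor and Demo.NotificationWorker are placeholders, not anything from the original post):

defmodule Demo.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # One DynamicSupervisor; a worker is started under it per notification.
      {DynamicSupervisor, name: Demo.WorkerSupervisor, strategy: :one_for_one}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Demo.Supervisor)
  end
end

# Wherever the notification arrives (assuming Demo.NotificationWorker is a
# GenServer with a start_link/1 that takes the notification):
# {:ok, _pid} =
#   DynamicSupervisor.start_child(Demo.WorkerSupervisor, {Demo.NotificationWorker, notification})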

Here are some questions that probably require an answer before a choice can be made:

  • What should happen on app restart or a (partial) crash? Is it required to continue sending these requests in that case, or is it okay to drop them?

  • How expensive will these API calls be? Will there be other work that has to be computed locally before sending to the API? How much storage will be required to keep track of the notification data used to invoke the API? (What kind of data are we talking about?)

  • How tight is the time window? Is it a problem if at some point a call is a second late?

  • What should the app do if the API is not responding, or crashes when you call it?

  • How will the app receive these notifications? By an external party making a request, or does it have to fetch the data itself?

  • What kind of parties will create these notifications? Will there only be a couple hundred per minute at most, or possibly millions?

I’ve been thinking along the same lines, except spawning Tasks: each would be started with a delay argument, sleep, send a message through an Erlang port, await the reply using receive, and then spawn another Task with the next delay.
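
Roughly, that chain could look like this sketch (Demo.TaskSupervisor and Demo.API.read_sensor/1 are assumed names standing in for a Task.Supervisor in the tree and the port-backed API call):

defmodule Demo.TaskChain do
  # Sleep for `delay` ms, make the API call, and chain the next Task with
  # double the delay unless the API said to stop.
  def run(device_id, delay) do
    Process.sleep(delay)

    case Demo.API.read_sensor(device_id) do
      :stop ->
        :ok

      {:ok, _reading} ->
        Task.Supervisor.start_child(Demo.TaskSupervisor, __MODULE__, :run, [device_id, delay * 2])
    end
  end
end

# Kick off the first link in the chain when a notification arrives:
# Task.Supervisor.start_child(Demo.TaskSupervisor, Demo.TaskChain, :run, [device_id, 10_000])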

Maybe a simple GenServer can do this… it can send a message to itself with an increasing delay.

defmodule Demo.Worker do
  use GenServer
  alias __MODULE__

  # Base delay: 10 seconds.
  @period 10_000

  # `k` multiplies the base delay and doubles after every call.
  defstruct k: 1

  def start_link(), do: GenServer.start_link(__MODULE__, nil)
  def stop(pid), do: GenServer.cast(pid, :stop)

  @impl GenServer
  def init(_args) do
    # Make the first API call right away.
    send(self(), :tick)
    {:ok, %Worker{}}
  end

  @impl GenServer
  def handle_cast(:stop, state), do: {:stop, :normal, state}

  @impl GenServer
  def handle_info(:tick, %{k: k} = state) do
    IO.puts("call API... #{k}")

    # Schedule the next call in k * 10 s, then double k,
    # giving the 10 s, 20 s, 40 s, ... backoff from the question.
    Process.send_after(self(), :tick, k * @period)
    {:noreply, %{state | k: k * 2}}
  end
end
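
A quick way to try it in IEx:

{:ok, pid} = Demo.Worker.start_link()
# prints "call API..." now, then again after 10 s, 20 s, 40 s, ...
Demo.Worker.stop(pid)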

For simplicity it is OK to drop all work in progress and start with a blank state, awaiting incoming notifications.

Well, the data received from the API has to be processed, and the results are used to decide whether to send another message to the API or not; nothing too expensive, however.

Basically, a notification exposes a new device ID, and requests to the API are made using this ID to get a sensor reading (just an integer). The received value, together with previous readings, is then used to make a decision: send another request later with an increased delay, stop further requests and discard the device’s readings, or make a final request and then also discard the data.

Not a problem at all. Such delays won’t make much of a difference and aren’t something I would worry about.

Time out, then retry at the next interval. The API plays the leading role here.

It will use a Port to an external C lib, which can receive an additional payload with each request and send it back unchanged together with the result. I plan to use a GenServer as a boundary to this API: it would pass the caller’s PID along with the request and, once the response is received, forward it as a message to the original caller. Does that make sense?
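
For illustration, that boundary might be sketched like this (the module name Demo.PortBoundary, the port options, and the wire format are all assumptions, not the actual C lib’s protocol):

defmodule Demo.PortBoundary do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Fire-and-forget: the reply later arrives as a message to `from`.
  def request(device_id, from \\ self()),
    do: GenServer.cast(__MODULE__, {:request, device_id, from})

  @impl GenServer
  def init(opts) do
    # Assuming the C side is wrapped in an executable speaking {:packet, 2} frames.
    port = Port.open({:spawn_executable, opts[:path]}, [:binary, {:packet, 2}])
    {:ok, %{port: port}}
  end

  @impl GenServer
  def handle_cast({:request, device_id, from}, %{port: port} = state) do
    # The caller's pid travels as the payload that the C lib echoes back.
    Port.command(port, encode_request(device_id, from))
    {:noreply, state}
  end

  @impl GenServer
  def handle_info({port, {:data, data}}, %{port: port} = state) do
    {from, result} = decode_response(data)
    send(from, {:sensor_reading, result})
    {:noreply, state}
  end

  # The wire format depends entirely on the C lib; these are placeholders.
  defp encode_request(device_id, from), do: :erlang.term_to_binary({device_id, from})
  defp decode_response(data), do: :erlang.binary_to_term(data) # expected {from, result}
end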

Yup, not more than 100 per minute at peak and 2-5 on average; however, each ID received in a notification lives for up to 10 hours, which means roughly 3000-5000 devices are scheduled for API calls at any given time. Not that much.

Yup, thanks for the detailed example :+1:

Based on the answers you gave to those questions, I think I would build a bunch of GenServers that send themselves timed messages, similar to what @kokolegorille proposed. GenStage is less helpful here because you are not working with a sequence of data transformations (and you want to send multiple messages at timed intervals based on a single input).

I’d suggest starting one GenServer per incoming notification (or maybe one per sensor). This allows them to fail or succeed independently of one another.

I would make calls to the API through a circuit breaker to ensure that the API is not swamped when it is having problems.
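
One way to do that is with the :fuse library; a rough sketch (the fuse options and the Demo.API.read_sensor/1 call are assumptions used only to illustrate the idea):

defmodule Demo.GuardedAPI do
  @fuse :external_api

  def setup do
    # Blow the fuse after 5 failures within 10 s; reset it after 30 s.
    :fuse.install(@fuse, {{:standard, 5, 10_000}, {:reset, 30_000}})
  end

  def read_sensor(device_id) do
    case :fuse.ask(@fuse, :sync) do
      :ok ->
        case Demo.API.read_sensor(device_id) do
          {:error, _} = err ->
            # Register the failure; enough of these open the circuit.
            :fuse.melt(@fuse)
            err

          ok ->
            ok
        end

      :blown ->
        {:error, :circuit_open}
    end
  end
end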

I don’t think a more complicated architecture is necessary for this situation :slight_smile:.