How fast is `GenServer.call`?

I am using `GenServer.call` to send messages from one node (Node A) to another (Node B). The messages must be sent in order, sort of like this:

```elixir
Enum.each(payload, fn load ->
  GenServer.call({Server, :somewhere@remote}, load)
end)
```
However, I want these calls to be very fast. I could create a Task for each call and have Node A attach a timestamp (represented by ascending natural numbers) to each load, so that Node B can sort the potentially out-of-order messages, like this:

```elixir
Enum.map(payload, fn load ->
  Task.async(fn ->
    GenServer.call({Server, :somewhere@remote}, {timestamp, load})
  end)
end)
|> Enum.each(fn task -> Task.await(task) end)
```
Sorry for any formatting or syntax errors. This should start a process for each call to Node B, so the networking time is overlapped instead of stretched out. The problem with this approach is that Node A must now generate timestamps for messages when Node B is supposed to do so (the system uses a star topology with Node B at the center acting as a message broker), and I want to keep the design clean. Basically, is the synchronous `GenServer.call` fast enough that using Tasks for many calls is unnecessary? Also, unrelated to the main topic: is it okay for Node A to do the “timekeeping”? In this system Node A creates all new events through a REST API; every other node (Node B included) processes these events.
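(For what it's worth, the overlap-the-calls idea can be sketched with `Task.async_stream`, which runs the calls concurrently but yields results in input order, so no manual timestamps are needed for *result* ordering. This is a hypothetical local sketch, not the poster's code — `EchoServer` stands in for the remote `{Server, :somewhere@remote}`:)

```elixir
defmodule EchoServer do
  use GenServer
  def init(:ok), do: {:ok, nil}
  # Just echoes each load back, standing in for the remote broker.
  def handle_call(load, _from, state), do: {:reply, load, state}
end

{:ok, server} = GenServer.start_link(EchoServer, :ok)

payload = [1, 2, 3, 4, 5]

# The round trips overlap, but async_stream emits results in input
# order, so the caller sees them as if the calls were sequential.
results =
  payload
  |> Task.async_stream(fn load -> GenServer.call(server, load) end,
    max_concurrency: 5,
    ordered: true
  )
  |> Enum.map(fn {:ok, result} -> result end)

IO.inspect(results)
```

Note this only orders the *results* on the caller's side; the order in which the requests arrive at the server is still up to the scheduler and the network.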

  1. There are no guarantees on message arrival order over the network. This is totally outside the Elixir/Erlang/Scala/Go/C/Java/etc. domain. A flaky router in the network can delay some messages while another message that avoids this router arrives on time.

  2. The `GenServer.call` is only as fast as the matching `handle_call`. If the `handle_call` is

```elixir
def handle_call(_msg, _from, _state), do: very_slow_function()
```

it would be impossible for the `call` to be fast. All the `call` means is that the caller blocks until it receives a result from the GenServer process.

  3. Given you expect some ordering, there are 3 possible distinct orderings:
    a. Request ordering
    b. Processing ordering
    c. Result ordering

These are independent; it is possible, for example, to require processing ordering but not request or result ordering. Are you certain you require this ordering to begin with?

  4. In general Node A can do the “timekeeping” because it is the entry point into your system. It is possible that Node B is the actual entry point, but that is something you have to decide, partially by your answer to (3).
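(On point 2 — the call being only as fast as the `handle_call` — a common way to keep the server responsive is to defer slow work and answer later with `GenServer.reply/2`. This is a sketch under my own assumptions; `SlowWorker` and the message shapes are hypothetical:)

```elixir
defmodule SlowWorker do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok), do: {:ok, %{}}

  # Cheap requests are answered immediately.
  @impl true
  def handle_call({:fast, x}, _from, state) do
    {:reply, x * 2, state}
  end

  # Expensive requests return {:noreply, ...} so the server loop is free
  # again; a Task delivers the reply later via GenServer.reply/2.
  def handle_call({:slow, x}, from, state) do
    Task.start(fn ->
      Process.sleep(50)            # stand-in for very_slow_function()
      GenServer.reply(from, x * 2)
    end)

    {:noreply, state}
  end
end
```

The caller of `{:slow, x}` still blocks for the full duration, but other callers' `{:fast, _}` requests are served in the meantime instead of queueing behind the slow work.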

Messages between Node A process 1 and Node B process 2 arrive in order, but might be randomly interleaved with messages from any other process on Node A or on Node C (which I assume exists, since only 2 nodes don’t make a “star topology”).

But even if the order of messages between two processes is guaranteed, try as best as you can to remove your dependency on message ordering.

Often we want ordering where it’s not actually necessary. To be honest, the business people who required me to deliver messages in order didn’t even realize that I did not. All I do is hold back cancel messages until I’ve seen the message that gets canceled :wink:
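(That hold-back trick can be sketched as a small pure function — the module name and the `{:event, id, payload}` / `{:cancel, id}` message shapes here are my own illustrative assumptions, not the poster's code:)

```elixir
defmodule CancelBuffer do
  # State: ids of events already delivered, plus cancels held back
  # because their event has not been seen yet.
  def new, do: %{seen: MapSet.new(), held: %{}}

  # Returns {messages_to_deliver_now, new_state}.
  def handle(%{seen: seen, held: held} = state, {:event, id, _payload} = msg) do
    seen = MapSet.put(seen, id)

    case Map.pop(held, id) do
      # No cancel waiting: deliver the event alone.
      {nil, held} -> {[msg], %{state | seen: seen, held: held}}
      # A cancel was held back: deliver event, then its cancel.
      {cancel, held} -> {[msg, cancel], %{state | seen: seen, held: held}}
    end
  end

  def handle(%{seen: seen, held: held} = state, {:cancel, id} = msg) do
    if MapSet.member?(seen, id) do
      {[msg], state}
    else
      # Event not seen yet: hold the cancel back.
      {[], %{state | held: Map.put(held, id, msg)}}
    end
  end
end
```

Usage: if a `{:cancel, 1}` arrives before `{:event, 1, ...}`, `handle/2` returns `{[], state}` and only releases the cancel (after the event) once the event shows up — downstream consumers never observe a cancel for something they haven't seen.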


There are several factors which make a definitive answer impossible:

  1. How much processing is happening inside the GenServer? If it’s substantial, then all other optimizations are moot.
  2. Is the processing acting on GenServer state? (meaning it needs to be sequential)
  3. How are Node A & B connected?
  4. How large is the payload, etc.?

If you are worried about the context switches and overhead of many GenServer calls, you can send chunks, or even the whole payload, in a single call.
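(Chunking can be sketched like this — a hypothetical example that assumes the server's `handle_call` accepts a `{:batch, loads}` message and replies with one result per load; `Batcher` is not from the thread:)

```elixir
defmodule Batcher do
  # `server` may be a pid or a {Name, :node@host} tuple for a remote node.
  # Instead of one GenServer.call per load, pay the round-trip cost
  # once per chunk of `chunk_size` loads.
  def call_in_chunks(server, loads, chunk_size \\ 100) do
    loads
    |> Enum.chunk_every(chunk_size)
    |> Enum.flat_map(fn chunk ->
      # Assumes handle_call({:batch, chunk}, ...) replies with a list.
      GenServer.call(server, {:batch, chunk})
    end)
  end
end
```

With 10,000 loads and a chunk size of 100, this turns 10,000 network round trips into 100, at the cost of larger individual messages.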

Well, in general I understand this as “how much overhead does `GenServer.call` add?”, and I tend to say it usually adds less overhead than implementing something from scratch to push data from one node to the other.

Available bandwidth to send the data, CPU time to process the data, and sending the reply need to happen anyway, regardless of whether a GenServer or something else is used.
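(To put a rough number on the per-call overhead itself, one can time an empty local round trip — a sketch with a hypothetical `Echo` server; on a real cluster, network latency between the nodes will dominate this figure:)

```elixir
defmodule Echo do
  use GenServer
  def start_link, do: GenServer.start_link(__MODULE__, :ok)
  @impl true
  def init(:ok), do: {:ok, nil}
  # A handle_call that does no work, so only the call machinery is timed.
  @impl true
  def handle_call(:ping, _from, state), do: {:reply, :pong, state}
end

{:ok, pid} = Echo.start_link()
n = 10_000

{micros, :ok} =
  :timer.tc(fn ->
    for _ <- 1..n, do: GenServer.call(pid, :ping)
    :ok
  end)

IO.puts("#{micros / n} µs per local GenServer.call round trip")
```

Locally this is typically on the order of microseconds per call; a network hop between nodes adds orders of magnitude more, which is why batching matters far more than avoiding GenServer itself.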