Time Sensitive Functionality in Elixir

I need to understand more clearly how the runtime and scheduler work with regard to time-sensitive functionality.

  1. I would like to measure the response time of an HTTP request to an external server. I want this measurement to be the exact time it took for the external server to respond. I can record the time it takes for the function to finish, but if the CPU is busy, etc., this might not be the true response time. What is the best way to do this, and why?

  2. I want to run repeated cron-like tasks on a timer. How confident can I be that a task will run at the exact given time? If I overload it - say, by scheduling 10,000 tasks at the same time - what will happen? Will they be scheduled in order? And can I retrieve the actual execution time?

In general I am still trying to understand when asynchronous code should be a Process / Stream / GenServer / Task / Agent, and any guidance / resources here would be great!

This is a Phoenix project.

Thanks!

Looks like you don’t want to overload your server, then. You can limit concurrency with something like GenStage, a custom dynamic task supervisor, or Poolboy. If you do overload your server, you not only get wrong measurements, but your whole system will be, well, overloaded, slow, and less responsive.

Having said that, Elixir/Phoenix scales well when it comes to handling many concurrent I/O tasks like making HTTP requests. So it might simply not get overloaded. But if it does, limiting concurrency would be the first thing I’d look at.
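As one concrete option (besides GenStage or Poolboy), `Task.async_stream/3` can cap concurrency directly. A minimal sketch - `do_request/1` here is just a placeholder that simulates work, not a real HTTP client call; swap in your actual request:

```elixir
# Placeholder for the real HTTP call (e.g. via HTTPoison or Finch).
# It just sleeps a random few milliseconds and echoes the URL back.
do_request = fn url ->
  Process.sleep(Enum.random(1..10))
  {:ok, url}
end

urls = for n <- 1..100, do: "https://example.com/item/#{n}"

# At most 10 requests run concurrently; the rest wait their turn.
results =
  urls
  |> Task.async_stream(do_request, max_concurrency: 10, timeout: 30_000)
  |> Enum.to_list()

# Each element is {:ok, result_of_do_request}, in input order.
```

The nice property is backpressure: the stream never spawns more than `max_concurrency` tasks at once, so a burst of 10,000 URLs won’t flood the scheduler.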

https://www.youtube.com/watch?v=_Pwlvy3zz9M may be of use. It goes through some of the core characteristics of the BEAM runtime.

The short answer is: you can’t. True real-time programming is only (‘somewhat’) guaranteed when you are very close to the hardware and run compiled code directly on the metal.

As soon as you have an operating system or another type of scheduler (which the BEAM on Xen would also be), you are building a system that is able to respond to multiple, possibly conflicting requests and demands at the same time. For nearly all systems this is good enough, and when you have multiple users interacting with the same system, it is usually better than a ‘true real-time’ (but probably sequential and slow) one. Of course, this is not true for all systems.

What are your requirements?

This will likely be another unsatisfying answer for you since, as @Qqwy notes, the BEAM provides soft real-time guarantees with low latency using a preemptive scheduler.

It feels like your questions are a little orthogonal (to me) since:

  1. The response time from the remote service will likely vary materially due to network latency and remote server load, so the response time you want to measure accurately will be very volatile. Basing your process design decisions on an absolute performance measure therefore seems problematic and potentially misleading.

  2. I don’t understand your correlation of remote server response time with which process abstraction to use. The good thing is that GenServer, Task, Agent, and Stream are all just abstractions on a Process. Nothing more. So if you understand how a process works (message queues being serialised, how the receive loop works, and so on), then you understand the basis of all the abstractions. Then pick the abstraction that matches what you’re trying to achieve: an Agent to store state, GenStage if you have a data transformation pipeline, Task if you’re after an “ad hoc” asynchronous function, and so on.

If you’ve been patient enough to get this far then maybe a couple of other things will help clarify multi-process concurrency:

  1. A process is the unit of concurrency (as you know)

  2. Creating processes is very cheap (compared to most runtimes) in both memory and time

  3. Messages sent to a process are queued in a mailbox in memory. A process handles messages in order, one at a time. There is a mechanism called selective receive which allows receiving messages out of order based on pattern matching; it requires care not to mess up, since scanning a large mailbox for a matching message can get slow.

  4. Time-based scheduling of requests is typically done using Process.send_after/3, which delivers a message to a given process after an elapsed period of milliseconds. So at best the scheduling resolution is milliseconds. In practice you should not expect guaranteed receipt of the message in the process at exactly that time.

  5. Scheduling 10,000 processes to run at the “same time” requires creating 10,000 processes. That in and of itself is not an issue, and it will be “quick” to create them. But you don’t have much practical control over how and when those processes are scheduled within the BEAM. Then you have to decide whether you’re after higher throughput or better response time. Typically the BEAM is best when focused on throughput. In my limited experience with I/O-oriented apps like a web app, throughput peaks when the number of processes spun up is somewhere around 2 to 5 times the number of cores on your target machine. I have no doubt there are many more experienced people here who can be more definitive and scientific in their recommendations.
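Points 4 and 5 can be sketched together: a tiny repeating timer built on Process.send_after/3 that also records how late each tick actually fired, which answers the “can I retrieve the actual execution time?” part of question 2. This is a minimal illustration, not a production scheduler (for that, look at libraries like Quantum or Oban); the `Ticker` module name is made up for this example:

```elixir
defmodule Ticker do
  # Spawns a process that sends {:tick, drift_ms} to `parent` every
  # `interval_ms`, where drift_ms is how many ms late the tick fired.
  def start(interval_ms, parent) do
    spawn(fn -> loop(interval_ms, parent) end)
  end

  defp loop(interval_ms, parent) do
    # When we *expect* the tick, on the monotonic clock.
    scheduled_at = System.monotonic_time(:millisecond) + interval_ms
    Process.send_after(self(), :tick, interval_ms)

    receive do
      :tick ->
        fired_at = System.monotonic_time(:millisecond)
        # Drift is usually tiny on an idle node, but never guaranteed
        # to be zero - the BEAM offers soft, not hard, real time.
        send(parent, {:tick, fired_at - scheduled_at})
        loop(interval_ms, parent)
    end
  end
end
```

Usage would be something like `Ticker.start(1_000, self())` and then receiving `{:tick, drift}` messages. Logging the drift under load is a cheap way to see how far from “exact” your scheduling really is in practice.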

As long as they are not all running at once and only a few are active at a time, it’s generally fine. :slight_smile: