I have one DynamicSupervisor with 3 GenServer actors (call them workers), and in parallel a Load Balancer actor. The Load Balancer actor receives a stream of data and sends the data, in round-robin fashion, to the workers for processing.
Now comes the problem. Speculative execution (at least the version I'm trying to implement) means giving the same task to 2 different actors: one of them will finish faster, and the other should be stopped mid-process and told to move on to the next task.
Solution 1: kill it with Process.exit/2; the worker will be respawned by the supervisor and continue. But any further messages in its mailbox will be lost.
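A minimal sketch of why Solution 1 loses work (the worker here is just a sleeping Task, and the names are mine): the supervisor does restart the killed worker, but the restarted process has a fresh, empty mailbox, so anything that was queued behind the cancelled task is gone.

```elixir
{:ok, sup} = DynamicSupervisor.start_link(strategy: :one_for_one)

child = %{
  id: :worker,
  start: {Task, :start_link, [fn -> Process.sleep(:infinity) end]},
  restart: :permanent
}

{:ok, worker} = DynamicSupervisor.start_child(sup, child)

send(worker, {:task, 1})     # a queued task the worker never got to
Process.exit(worker, :kill)  # speculative cancellation
Process.sleep(50)            # give the supervisor time to restart the child

[{_, new_worker, _, _}] = DynamicSupervisor.which_children(sup)
true = new_worker != worker  # a fresh pid...
{:messages, []} = :erlang.process_info(new_worker, :messages)  # ...with an empty mailbox
```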
Solution 2: Split the task into several steps and log the progress to a shared resource (be it a database or an Agent). Before moving to the next step, check whether the other worker has finished, and stop if so. But this adds computation time, and not every task can be split into steps. Smells like overhead.
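A hedged sketch of the Solution 2 checkpoint idea, using an Agent as the shared resource. The task id, the `{:cancel, ...}`-style bookkeeping, and the step functions are my assumptions; the point is just that a worker polls the shared state between steps and aborts early.

```elixir
# Shared set of finished task ids.
{:ok, done} = Agent.start_link(fn -> MapSet.new() end)

finished? = fn id -> Agent.get(done, &MapSet.member?(&1, id)) end
finish!   = fn id -> Agent.update(done, &MapSet.put(&1, id)) end

# Run the steps of a task, checking the shared state before each one.
run_steps = fn id, steps ->
  Enum.reduce_while(steps, :ok, fn step, _acc ->
    if finished?.(id) do
      {:halt, :aborted}   # the other worker already finished this task
    else
      step.()
      {:cont, :ok}
    end
  end)
end

# Worker A completes task 1 and records it...
finish!.(:task_1)
# ...so Worker B aborts at its next checkpoint:
:aborted = run_steps.(:task_1, [fn -> :step1 end, fn -> :step2 end])
```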
Solution 3: Use a special function that is called once the Load Balancer receives the answer from one actor, so that the second actor stops executing the message (if it has started) or never executes it (if it is still in the queue).
Does anyone have any idea of such a special function for Solution 3? I've spent about 2 days reading the documentation but did not come close to finding one. Or maybe there is a way to create such a function that forces the GenServer to at least stop mid-process and consider the new high-priority message?
You cannot force another process from the outside to stop what it's doing and read a message. You can only send it a message, which will queue like any other and wait until the process checks its mailbox. So the best you can do is something like your option 2, where the worker regularly checks its mailbox to see if it's supposed to stop or drop work from its queue.
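One way to implement that polling is a zero-timeout selective receive between units of work: the worker peeks into its own mailbox for a cancellation message and carries on immediately if there is none. The `{:cancel, id}` message shape is my assumption, just to make the sketch concrete.

```elixir
defmodule Worker do
  # Returns true (and consumes the message) if a cancellation for this
  # task id is waiting in the mailbox; returns false immediately otherwise.
  def cancelled?(id) do
    receive do
      {:cancel, ^id} -> true
    after
      0 -> false
    end
  end
end

# The load balancer sends a cancel; the worker notices it at its next check:
send(self(), {:cancel, 42})
true  = Worker.cancelled?(42)
false = Worker.cancelled?(42)  # the message was consumed, nothing left
```

The `after 0` clause is what makes this a non-blocking check: the receive either matches a pending `{:cancel, id}` or falls through at once, so the worker only pays for the check between steps, not for waiting.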
That was my first question too.
It’s a learning task for a Real-Time Programming university course. We are simulating real-time processing of streams of data, and to achieve faster results we are supposed to implement speculative execution in this particular way: given multiple resources with performance variation and a limited time in which the result must be received, send the task to multiple actors, take the first result that arrives, and tell the other actors to drop the task.
Oh I see. The problem with your approach is that you need to guarantee that all those processes run on different schedulers, since each scheduler runs on a separate core. Otherwise there is absolutely no gain from this approach, as running them on the same core just executes them concurrently by switching contexts.
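As a quick sanity check (a sketch, not a guarantee of placement), you can at least compare the number of BEAM schedulers, normally one per core, against the number of speculative copies; with fewer schedulers than workers, the copies necessarily time-slice:

```elixir
workers = 2  # assumed number of speculative copies per task
schedulers = System.schedulers_online()
parallel_possible = schedulers >= workers
IO.puts("schedulers: #{schedulers}, true parallelism possible: #{parallel_possible}")
```

Note this only tells you parallelism is *possible*; the BEAM decides on which scheduler a process actually runs.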