Background task that repeats an action every minute and kills itself after N minutes

I want to create a bunch of background jobs dynamically. A job should run for at most N minutes, strictly, and then kill itself.
Once a minute it should check the state of a console application; if the state returns true, the job writes data into a database and kills itself right away, otherwise it keeps checking for at most N minutes.

I’m looking for pointers on how to achieve that. I want a simple solution.
Should I use Task? If so, how do I specify the N minutes over which it’ll be alive?
How can I make it poll the state of an application once a minute?

1 Like

All fairly trivial with a GenServer: you can have it send itself a message via a timeout (which only arrives if no other messages have arrived), or unconditionally after some number of minutes regardless of any other messages (send_after). :slight_smile:

You can do it with a Task by using send_after too, but a GenServer really sounds best here, as this does not sound like a one-off.

3 Likes

For some yes, for others no. :slight_smile: We need to be careful with sentences like that because they may discourage conversation. If something is called trivial and someone doesn’t understand it, they may think it is their fault after all.

In any case, you are right, so thanks for pointing in the right direction. @washeku please see the following example as a starting point: http://stackoverflow.com/questions/32085258/how-to-run-some-code-every-few-hours-in-phoenix-framework

if it is not clear, please ask! I am sure @OvermindDL1, myself and others will be happy to help!

7 Likes

How should I specify the N minutes over which it’ll be alive?
How can I make it poll the state of an application once a minute?
How do I make it terminate itself?

1 Like

If it’s a GenServer, I’d use send_after to send a kill message to self() after whatever N minutes, along with a handle_info clause that stops the process when it receives that message. :slight_smile:

In a GenServer I’d either set an internal timeout, which all GenServer callbacks can return (except terminate, for obvious reasons ^.^), and keep bouncing on it, or use send_after again to send a ‘time to poll’ message, where you poll and then call send_after again based on the response. :slight_smile:
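Putting those pieces together, here is a minimal sketch of the send_after approach. The option names, interval values, and check_state/0 are all placeholders for whatever actually polls the console application:

```elixir
defmodule Poller do
  use GenServer

  # opts is a keyword list, e.g. [poll_interval: 60_000, max_runtime: 600_000]
  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  def init(opts) do
    # hard deadline: stop after max_runtime no matter what
    Process.send_after(self(), :kill_self, opts[:max_runtime])
    # first poll after one interval
    Process.send_after(self(), :poll, opts[:poll_interval])
    {:ok, opts}
  end

  def handle_info(:poll, state) do
    if check_state() do
      # write to the database here, then stop normally
      {:stop, :normal, state}
    else
      # not done yet: schedule the next poll
      Process.send_after(self(), :poll, state[:poll_interval])
      {:noreply, state}
    end
  end

  def handle_info(:kill_self, state), do: {:stop, :normal, state}

  # placeholder for querying the console application's state
  defp check_state(), do: false
end
```

With short intervals you can watch it start, poll a few times, and then stop itself at the deadline.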

Based on what condition? In a GenServer you just return {:stop, :normal, state} to do a normal, non-error terminate. :slight_smile:

1 Like

how to do that?

1 Like

With send_after, something like:

def init(args) do
  # send_after wants milliseconds
  kill_self_time = args.death_minutes * 60 * 1000
  Process.send_after(self(), :kill_self, kill_self_time)
  state = encode_whatever_you_want_to_do_here(args)
  {:ok, state}
end

def handle_info(:kill_self, state) do
  cleanup()
  {:stop, :normal, state}
end

defp cleanup() do
  # If you need to do something else, like cancel some pending work
  # or un-register with another server
end
3 Likes

Note that the handle_info callback should return {:stop, :normal, state}, with the state as the third element.

2 Likes

Right, I was just going from memory, so that makes sense. ^.^

2 Likes

thanks, I’ll try that out.

1 Like

On a status returned by the console application I’ve mentioned.

1 Like

And how can I start a GenServer dynamically, triggered by, for example, a user of my web application clicking a button on a page? A single GenServer for a single user: 5 users, 5 GenServers, and so on.

Will MyGenServer.start_link start a new instance every time I call it?

1 Like

I’d put them in a simple one-for-one supervisor, at which point you just call a function on the supervisor to start one, and in that function you spawn the task using the same code as at the link above. :slight_smile:

Yes, but if you call start_link from a channel, the tasks will be killed if the channel dies, which might actually be what you want; if so, then don’t use the supervisor I mentioned above.

If you want to aggregate them, so only one is set up for a given task via some ID of your choosing, then a simple one-for-one supervisor is good, and many channels can share the same information (Phoenix.PubSub would be useful for sending the information back, for example). :slight_smile:
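In current Elixir, the usual way to get one dynamically started process per user is DynamicSupervisor. A sketch with invented module names, where each call to start_job spawns a fresh, supervised GenServer:

```elixir
defmodule UserJob do
  use GenServer

  # one of these per user; user_id becomes the initial state
  def start_link(user_id), do: GenServer.start_link(__MODULE__, user_id)
  def init(user_id), do: {:ok, user_id}
end

defmodule JobSupervisor do
  use DynamicSupervisor

  def start_link(_opts) do
    DynamicSupervisor.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  def init(:ok), do: DynamicSupervisor.init(strategy: :one_for_one)

  # called e.g. from a controller or channel when the user clicks the button;
  # every call starts a new UserJob child
  def start_job(user_id) do
    DynamicSupervisor.start_child(__MODULE__, {UserJob, user_id})
  end
end
```

So yes: calling the start function repeatedly gives you a new instance (a new pid) each time, and the supervisor keeps track of all of them.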

2 Likes

If you go with a GenServer and Process.send_after, it is actually important that the background job does not run in the GenServer’s own process.

Because if you do the actual work inside the GenServer, it won’t be free to receive the custom message you are sending with Process.send_after.

I think I’d go with a GenServer that acts as a “process killer”, and separate processes for each background job. In that case a newly created process would register itself to be killed by sending a message to the GenServer. The “process killer” would send itself a message after X seconds, reminding itself to kill the given process if it’s still running.
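One way to sketch that killer process. The module name and kill_after API are invented for illustration; a worker registers itself, and the killer reminds itself via Process.send_after to kill it later:

```elixir
defmodule ProcessKiller do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # register a process (the caller by default) to be killed after timeout_ms
  def kill_after(pid \\ self(), timeout_ms) do
    GenServer.cast(__MODULE__, {:kill_after, pid, timeout_ms})
  end

  def init(:ok), do: {:ok, %{}}

  def handle_cast({:kill_after, pid, timeout_ms}, state) do
    # remind ourselves to kill pid later
    Process.send_after(self(), {:kill, pid}, timeout_ms)
    {:noreply, state}
  end

  def handle_info({:kill, pid}, state) do
    # only kill it if it is still running
    if Process.alive?(pid), do: Process.exit(pid, :kill)
    {:noreply, state}
  end
end
```

Because the killer does no blocking work itself, its mailbox stays responsive no matter what the job processes are doing.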

2 Likes

+1

Two processes would be my choice here as well. One does the job, another watches over the job runner and terminates it if it doesn’t finish in the given timeframe. That should properly take care of the case where the job is blocking for a long time (possibly forever).

Arguably the simplest solution could be based on a task:

poll_loop = Task.async(fn ->
  # simulation of a poll loop
  :timer.sleep(:rand.uniform(:timer.seconds(2)))
end)

# waiting for at most 1 second for the loop to finish
case Task.yield(poll_loop, :timer.seconds(1)) do
  {:ok, _} -> IO.puts "loop finished."
  nil ->
    IO.puts "timeout: killing the loop runner."
    Task.shutdown(poll_loop, :brutal_kill)
end

Probably the simplest way to implement the poll loop would be through recursion:

def poll_loop() do
  # poll/0 stands for whatever checks the console application's state
  case poll() do
    :ok -> :ok
    :error ->
      :timer.sleep(:timer.seconds(1))
      poll_loop()
  end
end
3 Likes

I’ve done similar things in Erlang, and the same is completely possible in Elixir… so, as @hubertlepicki notes, you have your GenServer (or really any process; it doesn’t have to be a GenServer) and then you have a function there, like:

defmodule Mod do
  def timed_job(job_data) do
    receiver = self()
    pid = spawn_link(fn -> send(receiver, {:ok, job_data}) end)

    receive do
      {:ok, response} ->
        IO.puts("Got a response!")
        response
    after
      1_000 ->
        IO.puts("Killing due to timeout.. sigh")
        # unlink first, or the :killed exit signal propagates back to us
        Process.unlink(pid)
        Process.exit(pid, :kill)
        {:error, "Job timed out"}
    end
  end
end

Mod.timed_job(1)

This way you give your process a certain amount of time to do its work and return a message; otherwise it is killed (in the after clause) and an error is returned. You can see this in action by changing the anonymous function to sleep for longer than the after clause’s timeout value before sending the message.

For added fun, you can make this clusterable by adding a node argument to timed_job’s params and using Node.spawn_link instead of the vanilla spawn_link… then you can run your jobs on whatever node in your BEAM cluster you want.
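A sketch of that distributed variant, under the assumption the nodes are already connected (the module name is invented; passing node() runs the job locally, which is how the usage below works without a cluster):

```elixir
defmodule RemoteJob do
  # same idea as timed_job/1, but the caller picks the node the job runs on
  def timed_job(node, job_data, timeout \\ 1_000) do
    receiver = self()
    pid = Node.spawn_link(node, fn -> send(receiver, {:ok, job_data}) end)

    receive do
      {:ok, response} -> response
    after
      timeout ->
        # unlink before killing so the :killed signal doesn't take us down too
        Process.unlink(pid)
        Process.exit(pid, :kill)
        {:error, "Job timed out"}
    end
  end
end
```

For example, `RemoteJob.timed_job(node(), 42)` runs the job on the local node, while `RemoteJob.timed_job(:"worker@host", 42)` would run it on a connected remote node.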

I have used this in other projects to rather enjoyable effect :wink:

hth…

2 Likes

Ah… and I suppose one can imagine various ways to get fancy with this: caller-defined timeouts (so the caller, not the job itself, decides how long it has to run), or multiple return values accepted by receive, such as a :time_extension message which would let the receiver decide whether to give the job more time or kill it right there…

There is a small gotcha in my code above if it is run inside a process that receives other messages: any message shaped like {:ok, response} will trigger that receive… in which case you want something less generic sent from the worker function, such as {:background_job, job_id, result} or whatever. This prevents other messages in your inbox from being processed in place of the background job’s reply. Of course, if the Mod.timed_job code runs in its own dedicated process (and why not! :slight_smile:) then this is not an issue.

1 Like

If it is a blocking job then yes, definitely two processes; but if it is not blocking (and really it should not be, if made properly) then one is good.

Although having a dedicated killer process can be reusable elsewhere. ^.^

Remember to tag expected reply messages with a ref or similar. ^.^
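That ref trick looks like this (module and function names invented for illustration): make_ref/0 gives a unique value, so only the worker's own reply can match the receive clause, and unrelated {:ok, _} messages in the mailbox are left alone.

```elixir
defmodule TaggedJob do
  def run(job_data, timeout \\ 1_000) do
    receiver = self()
    # unique tag: no other message can accidentally match {^ref, _}
    ref = make_ref()
    spawn(fn -> send(receiver, {ref, job_data}) end)

    receive do
      {^ref, response} -> {:ok, response}
    after
      timeout -> {:error, :timeout}
    end
  end
end
```

This is the same pattern GenServer.call uses internally to match replies to requests.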

1 Like

Kudos. That’s one of the reasons why Elixir’s community is constantly growing.

1 Like

but if it is not blocking (really it should not be if made properly) then one is good

The reason for pushing lots of stuff to background jobs is exactly that the work is not asynchronous. Take sending e-mail over SMTP as a classic example, or processing images, etc. Some things can’t easily be written in an async manner, and - more importantly - you may not want to write them that way. Async code is - as a rule - more difficult to write and understand than sequential code. This is exactly why I’d choose Elixir over Node any time.

1 Like