I’m running batches of short-lived fire-and-forget tasks (though I sometimes receive feedback from them).
I can kill the tasks from within themselves… but I was hoping to use the supervisor to kill long-running
tasks for me.
I’ve tried variants of the commented options below, but have yet to find a way to get them to actually time out and die.
Am I missing some basic setting or concept here?
```elixir
def work(some_values) do
  # Supervisor name assumed; reconstructed from the original fragment.
  Task.Supervisor.async_stream_nolink(MyApp.TaskSupervisor, some_values,
    fn v -> do_something(v) end
    # timeout: 50,
    # shutdown: 50,
    # on_timeout: :kill_task
  )
  |> Stream.run()
end
```
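For what it’s worth, with a plain slow function those options do time tasks out and kill them; a minimal sketch, assuming a `Task.Supervisor` registered under the (made-up) name `MyApp.TaskSupervisor`:

```elixir
# Start a task supervisor (normally this lives in your supervision tree).
{:ok, _sup} = Task.Supervisor.start_link(name: MyApp.TaskSupervisor)

results =
  Task.Supervisor.async_stream_nolink(
    MyApp.TaskSupervisor,
    [10, 1_000],                # one fast value, one slow one (sleep times in ms)
    fn ms ->
      Process.sleep(ms)
      ms
    end,
    timeout: 100,
    on_timeout: :kill_task      # kill tasks that exceed the timeout
  )
  |> Enum.to_list()

# The fast task completes; the slow one is killed and reported as a timeout:
# results == [{:ok, 10}, {:exit, :timeout}]
```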
It turned out that an underlying library was trapping exits in spawned processes. So these processes did end, but they produced orphaned processes that made the timeout look non-functional.
What library was this, and what was the exit-trapping for? I’m curious.
So, it was an odd combo.
I’m working with :luerl – and in particular the :luerl_sandbox module which lets you limit the number of reductions used by a process running Lua code.
I had inadvertently been testing without passing in the reductions limit, which defaults to infinite reductions. But I suspect that in that mode it was trapping exits? (I’m not yet up on Erlang.)
When I switched to “normal” :luerl it seemed to work ok. Again, this is more theory than understanding…
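The orphan mechanism itself is easy to reproduce without luerl: a linked process that traps exits receives its parent’s `:killed` signal as an ordinary message instead of dying. A minimal sketch (nothing luerl-specific, just `Process.flag(:trap_exit, true)`):

```elixir
test_pid = self()

parent =
  spawn(fn ->
    # The "library-spawned" worker traps exits, so :killed arrives as a message.
    child =
      spawn_link(fn ->
        Process.flag(:trap_exit, true)

        receive do
          {:EXIT, _from, _reason} -> Process.sleep(:infinity)  # survives the kill
        end
      end)

    send(test_pid, {:child, child})
    Process.sleep(:infinity)
  end)

child =
  receive do
    {:child, pid} -> pid
  end

Process.sleep(50)            # give the child time to set trap_exit
Process.exit(parent, :kill)  # parent dies; the trapping child lives on, orphaned
Process.sleep(50)

IO.inspect({Process.alive?(parent), Process.alive?(child)})  # {false, true}
```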
luerl_sandbox sounds quite useful. Right now I run user Lua code in a new process, and if it does not complete within 5 seconds I kill it and tell the user their script took too long. I’d love a way to suspend it, serialize it out and back in, and resume it though!
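That run-then-kill pattern can be sketched with `Task.yield/2` plus `Task.shutdown/2` (the script function passed in is a stand-in for the real Lua runner):

```elixir
defmodule ScriptRunner do
  # Run `fun` in a task; if it doesn't finish within `timeout` ms, kill it.
  def run(fun, timeout \\ 5_000) do
    task = Task.async(fun)

    case Task.yield(task, timeout) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      nil -> {:error, :script_took_too_long}
    end
  end
end

ScriptRunner.run(fn -> 42 end)
# => {:ok, 42}

ScriptRunner.run(fn -> Process.sleep(200) end, 50)
# => {:error, :script_took_too_long}
```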
Ohhh that’s a really cool idea! I’ll bring that up on the Luerl Slack if you like!
Another feature I’d appreciate is having it return the reduction count used by a script – so it would be easy to see how heavily your script is taxing the CPU.
If you want, check out https://hex.pm/packages/sandbox. It’s my attempt to wrap :luerl_sandbox (still very much in progress though).