Scheduling Oban jobs and awaiting them from LiveView

I would like to schedule a few jobs from a LiveView and await their results. I am a Pro subscriber and can use the Relay plugin for that.

It’s not super straightforward, however, as I need to start an additional process that waits for the jobs to finish and then send()s the result back to the LiveView.

Surely there must be a ready-to-use, better way to do this? Starting Oban jobs from a LiveView and then receiving their results in handle_info seems like a common pattern? Or is it not, and I need to roll my own?

Phoenix.PubSub and handle_info is the way to go.

No need for extra processes, just broadcast to topics
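A minimal sketch of that approach, assuming a `MyApp.PubSub` server and a made-up worker; the topic is passed in via the job args so the worker knows where to broadcast:

```elixir
defmodule MyApp.ReportWorker do
  use Oban.Worker, queue: :default

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"topic" => topic} = args}) do
    result = do_work(args)

    # Broadcast to whichever topic the caller baked into the job args.
    Phoenix.PubSub.broadcast(MyApp.PubSub, topic, {:job_done, result})
    :ok
  end

  defp do_work(_args), do: :some_result
end
```

And on the LiveView side:

```elixir
def mount(_params, _session, socket) do
  topic = "jobs:#{socket.id}"
  if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, topic)
  {:ok, assign(socket, topic: topic)}
end

def handle_event("run", _params, socket) do
  %{some: "arg", topic: socket.assigns.topic}
  |> MyApp.ReportWorker.new()
  |> Oban.insert()

  {:noreply, socket}
end

def handle_info({:job_done, result}, socket) do
  {:noreply, assign(socket, result: result)}
end
```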

That’s not what I want for several reasons, including the error handling and custom plumbing that has to be done.

You don’t count wrangling processes as custom plumbing?

You have given no information indicating that having your LiveView subscribe to a topic you put in the job, which the job broadcasts to when done, is sub-optimal.

Your initial thoughts seem to be a partial reinventing of the existing PubSub wheel imo.

I would like to handle jobs that crash or get cancelled. Sending a message over PubSub from within the job does not guarantee that it will arrive, or even be sent, because the job may have crashed or been cancelled before it reaches the broadcast.

I did mention Relay (Oban.Pro.Relay — Oban Pro v1.5.2) because it is Oban’s own mechanism to start async jobs and listen for their result, whatever it may be. You get notified if the job executes properly, if it crashes, or if it gets snoozed or cancelled.

Relay’s await function is, however, blocking. I believe it calls receive and pattern matches on the PubSub message it wants to catch.
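For reference, the blocking shape looks roughly like this (the worker and the result handlers are made up, and the exact return tuples may differ between Pro versions):

```elixir
alias Oban.Pro.Relay

relay =
  %{id: 123}
  |> MyApp.SomeWorker.new()
  |> Relay.async()

# await/2 blocks the calling process until the job completes one way
# or another, or the timeout elapses.
case Relay.await(relay, :timer.seconds(30)) do
  {:ok, result} -> handle_success(result)
  {:error, error} -> handle_failure(error)
  {:cancel, reason} -> handle_cancelled(reason)
  {:snooze, snooze} -> handle_snoozed(snooze)
end
```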

Now, I cannot really listen to all messages like that from my LiveView’s handle_info, because I simply have too many such messages in the system (hundreds per second).

The solution I came up with is to start a process from the LiveView that blocks on Relay.await(), and when it receives the message, it sends it back to the parent LiveView.
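Roughly this shape (the worker name and timeout are placeholders):

```elixir
def handle_event("run", _params, socket) do
  lv = self()

  relay =
    %{id: socket.assigns.id}
    |> MyApp.SomeWorker.new()
    |> Oban.Pro.Relay.async()

  # This task blocks on await so the LiveView doesn't have to, then
  # forwards whatever result arrives (ok/error/cancel/snooze) to it.
  Task.start(fn ->
    send(lv, {:relay_result, Oban.Pro.Relay.await(relay, :timer.seconds(30))})
  end)

  {:noreply, socket}
end

def handle_info({:relay_result, result}, socket) do
  {:noreply, assign(socket, result: result)}
end
```

Newer LiveView versions also ship Phoenix.LiveView.start_async/3 with a handle_async/3 callback, which packages this same spawn-and-report-back pattern.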

This works, but I just had the feeling that I am re-inventing the wheel.

Why can’t you just broadcast on cancel and crash as well?

That’s right, it uses PubSub + receive to await a message about that job. You’ll receive a message even without the await portion, and you can receive it in the current process through handle_info if you like. In that case you’ll potentially receive messages from other relayed jobs, though, and you’ll need some ref or identifier to match up the result.

That’s a fine way to handle it. It prevents the LiveView from receiving a bunch of pubsub messages it doesn’t need to know about.

I am not sure why you would receive hundreds of messages if you are listening to a topic dedicated to that job only. Maybe I did not understand something.

Could you publish the pid of the job process over PubSub when the job starts, so your LiveView process can monitor it and be notified when the job crashes? And optionally also publish when the job is done, so you get the result directly instead of fetching the data when you get the DOWN message.
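Something like this, as a sketch (module names assumed; note that Oban’s executor may rescue job errors itself, so a failed job won’t necessarily exit with a non-normal reason):

```elixir
# In the worker: announce the executing pid on a topic from the args.
def perform(%Oban.Job{args: %{"topic" => topic} = args}) do
  Phoenix.PubSub.broadcast(MyApp.PubSub, topic, {:job_started, self()})
  do_work(args)
end

# In the LiveView: monitor that pid. Process.monitor/1 also works for
# pids on remote nodes, and any exit arrives as a :DOWN message.
def handle_info({:job_started, job_pid}, socket) do
  Process.monitor(job_pid)
  {:noreply, socket}
end

def handle_info({:DOWN, _ref, :process, _pid, reason}, socket) do
  # reason is :normal on a clean exit; anything else suggests a crash.
  {:noreply, assign(socket, job_status: {:down, reason})}
end
```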

Thanks! So I guess my approach is the correct one. I’ll make myself a small wrapper around it so I can re-use it across the project!

I have multiple objections to this approach, but my main one is that you are adding custom code to the job itself for the purpose of monitoring its progress/status. You need to wrap the thing in try/rescue or, like you said, monitor the PID (which can be on a remote node), rather than having a worker focus on its work alone.

The required functionality already exists on the Oban side and it doesn’t require me to modify any of my jobs in any way to handle it - the issue is about monitoring them in a nice way from LiveView itself.

I think @sorenone confirmed my approach is correct, so I’ll just carry on with custom LV integration I’ve done.

If you have some persistent state, I guess it could be nice to put the job ID in there and get Oban to give you a pid from a job ID, so you can monitor it from the LiveView.

With both of our suggested approaches, I guess it will not work if your LiveView restarts.

That’s why I suggested a topic and broadcast: you can restart a node in the middle of execution and it still works as intended, as long as the topic is in the URL or your session (or something else persistent to you and the current session).

I still absolutely do not agree that supplying broadcast-related metadata on a job is any sort of bloat or irrelevant work, but it’s not my code, so I am not going to die on a hill over it either :stuck_out_tongue:

My main reasoning is that you are already doing async work backed by an ACID store with first-class support for metadata (job args), so why not piggyback some metadata so your state is restart-proof and persistent?

With “state” here meaning “I am actively monitoring this thing.”
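As a sketch of what I mean, with the persistent identifier coming from the URL (an assumption): the topic is derived from that identifier rather than the LiveView’s pid, so a remounted LiveView resubscribes to the same stream of updates, and the same topic lives on in the job args across restarts.

```elixir
def mount(%{"order_id" => order_id}, _session, socket) do
  # Rebuild the topic from persistent data, not from this process,
  # so a fresh LiveView after a restart picks the same topic again.
  topic = "order_jobs:#{order_id}"
  if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, topic)

  {:ok, assign(socket, topic: topic)}
end
```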

You would still miss the state update if you are offline while it happens, but my suggestion can recover; the accepted solution cannot (unless I am not understanding some detail here; if so, I would very much love it if someone explained it to me, hehe).
