Unique job handling


I am using Oban [Pro] for all of my queues.

I have been hitting a wall on one of my queue functionalities. I have the following conditions:

  • Users update their data for a certain resource quite frequently. There are many cases where a user triggers an update action before the previous job completes. I have a unique clause on the keys user_id and resource_id.
  • Jobs matching that uniqueness clause need to execute one at a time.
  • If a job with the same user_id and resource_id is already running, I would like to queue the new job to run after the running job completes.
  • If this happens more than once and there’s already another job queued with the same keys, overwrite the one that’s queued to start next. Example:
    • Job 1 is executing
    • Job 2 with the same user_id and resource_id is queued to run next
    • We receive Job 3 with the same user_id/resource_id. I need Job 2 to be replaced with Job 3 (and Jobs 4, 5, 6, etc. the same way)
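For context, the worker behind this is shaped roughly like the following (module and variable names here are illustrative placeholders, not the real ones):

```elixir
defmodule MyApp.ResourceWorker do
  # Illustrative worker shape; module and arg names are placeholders.
  use Oban.Worker, queue: :test_queue

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => _user_id, "resource_id" => _resource_id}}) do
    # ... apply the user's latest update to the resource ...
    :ok
  end
end
```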

This is what I have (MyWorker stands in for the real worker module):

    args
    |> MyWorker.new(
      queue: :test_queue,
      unique: [period: :infinity, fields: [:args], keys: [:user_id, :resource_id]],
      replace: [:args]
    )
    |> Oban.insert()

However, with this setup, new jobs aren’t inserted when there is a unique conflict.


All you’re missing is setting the unique states. By default, uniqueness also considers executing and completed jobs, so the currently running job blocks the new insert. You only want to consider the :available and :scheduled states for uniqueness (and maybe :retryable).

    unique: [
      period: :infinity,
      fields: [:args],
      keys: [:user_id, :resource_id],
      states: [:available, :scheduled]
    ]
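Putting it together, the full insert call would look something like this (the worker module name is a placeholder; add :retryable to the states if a failed job awaiting retry should also be replaced rather than duplicated):

```elixir
%{user_id: user.id, resource_id: resource.id}
|> MyApp.ResourceWorker.new(
  queue: :test_queue,
  unique: [
    period: :infinity,
    fields: [:args],
    keys: [:user_id, :resource_id],
    states: [:available, :scheduled]
  ],
  replace: [:args]
)
|> Oban.insert()
```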

Ah I see, that did it, thank you!

Just to confirm, for the config, I have the following set-up:

    test_queue: [
      local_limit: 10,
      rate_limit: [
        allowed: 1,
        period: 30,
        partition: [fields: [:args], keys: [:user_id, :resource_id]]
      ]
    ]
Seems to be fine from initial testing, but this would also work globally across nodes, correct? That is, capping each user/resource partition to one job every 30 seconds.
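For completeness, here is the surrounding config as I understand it — a sketch assuming the Pro Smart engine, which is what makes the partitioned rate limit global across nodes (app and repo names are placeholders):

```elixir
# config/config.exs
config :my_app, Oban,
  engine: Oban.Pro.Engines.Smart,
  repo: MyApp.Repo,
  queues: [
    test_queue: [
      local_limit: 10,
      rate_limit: [
        allowed: 1,
        period: 30,
        partition: [fields: [:args], keys: [:user_id, :resource_id]]
      ]
    ]
  ]
```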


That’s right. You’ve got it!


Awesome, thank you for your help!