Oban version: 2.17.8
Oban Pro version: 1.14.0
I have the following script to enqueue a set of jobs:
entries
|> Enum.with_index()
|> Enum.map(fn {entry, seconds} ->
  %{
    "customer_po" => entry.customer_po,
    "po_id" => entry.id,
    "tp_id" => entry.tp_id
  }
  |> BFJob.new(schedule_in: seconds)
  |> Oban.insert()
  |> IO.inspect()
end)
The output of which looks like this:
{:ok,
%Oban.Job{
__meta__: #Ecto.Schema.Metadata<:loaded, "public", "oban_jobs">,
id: 181862420,
state: "scheduled",
queue: "default",
worker: "****",
args: %{"customer_po" => "8W7RQ1GV", "po_id" => 9375525, "tp_id" => 51},
meta: %{"uniq_key" => 28051777},
tags: [],
errors: [],
attempt: 0,
attempted_by: nil,
max_attempts: 12,
priority: 0,
attempted_at: nil,
cancelled_at: nil,
completed_at: nil,
discarded_at: nil,
inserted_at: ~U[2024-05-02 13:32:15.236658Z],
scheduled_at: ~U[2024-05-02 13:32:15.235557Z],
conf: nil,
conflict?: false,
replace: nil,
unique: nil,
unsaved_error: nil
}}
....
followed by the rest of the entries, with their scheduled_at values spaced one second apart.
But when I look at the logs, only a few of the jobs at the top are executed; the others never run.
I tried with insert_all as well (roughly the sketch below), but the effect is the same. The number of jobs that actually run is inconsistent between runs, even when the total number of scheduled jobs is the same: sometimes only 3 of them run, other times 17.
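The insert_all attempt looked roughly like this (same args map as above; building the changesets first and inserting them all in one call):

entries
|> Enum.with_index()
|> Enum.map(fn {entry, seconds} ->
  # build the changeset only, no insert yet
  BFJob.new(
    %{
      "customer_po" => entry.customer_po,
      "po_id" => entry.id,
      "tp_id" => entry.tp_id
    },
    schedule_in: seconds
  )
end)
|> Oban.insert_all()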
I'm wondering if I'm doing something wrong or if this is somehow expected behavior. It feels like Oban discards the jobs whose scheduled_at time has already passed.
Also, if I do not use the schedule_in option, then all jobs eventually run.
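That variant simply drops the option when building each job, roughly:

%{
  "customer_po" => entry.customer_po,
  "po_id" => entry.id,
  "tp_id" => entry.tp_id
}
|> BFJob.new()
|> Oban.insert()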