Need help with oban_peers - why is it deleting and re-inserting the node?

I’ve enabled Oban log config and I’m seeing something I don’t understand.

When I run my project (locally), I see logs along these lines

[20:15:49.203] [][info] QUERY OK source="oban_peers" db=1.4ms
DELETE FROM "public"."oban_peers" AS o0 WHERE (o0."name" = $1) AND (o0."expires_at" < $2) ["Oban", ~U[2024-01-16 16:45:49.201291Z]]
[20:15:49.204] [][info] QUERY OK db=0.5ms
INSERT INTO "public"."oban_peers" AS o0 ("expires_at","name","node","started_at") VALUES ($1,$2,$3,$4) ON CONFLICT ("name") DO UPDATE SET "expires_at" = $5 [~U[2024-01-16 16:46:19.203105Z], "Oban", “something@somewhere", ~U[2024-01-16 16:45:49.203105Z], ~U[2024-01-16 16:46:19.203105Z]]
…
[20:15:50.357] [][info] QUERY OK source="oban_producers" db=1.0ms
UPDATE "public"."oban_producers" AS o0 SET "updated_at" = $1 WHERE (o0."uuid" = $2) [~U[2024-01-16 16:45:50.355843Z], <<38, 205, 144, 244, 57, 14, 67, 115, 131, 245, 169, 53, 186, 26, 162, 39>>]
[20:15:50.358] [][info] QUERY OK source="oban_producers" db=0.2ms
DELETE FROM "public"."oban_producers" AS o0 WHERE (o0."uuid" != $1) AND ((o0."name" = $2) AND (o0."queue" = $3)) AND (o0."updated_at" <= $4) [<<38, 205, 144, 244, 57, 14, 67, 115, 131, 245, 169, 53, 186, 26, 162, 39>>, "Oban", “queue_name", ~U[2024-01-16 16:44:50.355843Z]]

These happen every couple of seconds. To be exact:

  • The oban_producers queries run every 3 seconds exactly.
  • The oban_peers queries run less often, but still a couple of times every minute.

As I understand it, oban_peers contains the nodes running Oban. I'm only running one node, so why is it deleting and re-inserting that node?

I also don't see the oban_producers queries actually doing much.

Why are these happening? Is this something Oban does internally (updating stats, pruning, something else?), or could it be caused by my configuration?

Here's the Oban config I'm using (log: :info is added temporarily for debugging).

Also, as you can see, I'm using Oban Pro, if that matters.

config :myapp, Oban,
  log: :info,
  engine: Oban.Pro.Queue.SmartEngine,
  repo: Repo,
  plugins: [
    Oban.Pro.Plugins.DynamicLifeline,
    Oban.Pro.Plugins.DynamicPruner,
    {Oban.Pro.Plugins.DynamicCron,
     crontab: [
       {"* * * * *", Worker1},
       {"0 * * * *", Worker2},
       {"5 0 * * *", Worker3}
     ]}
  ],
  queues: [
    default: 100,
    # Other queues only have numbers, no further options
  ]

The oban_peers table is used for centralized leadership. Oban has no way of telling how many nodes are in your cluster, or whether the previous leader is still alive. The existing leader periodically (~15s) updates its row in the database to keep it fresh and retain leadership. See the Oban.Peers.Postgres docs for more.
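
If you're curious which node currently holds leadership, you can ask from iex. A quick sketch, using the default instance name; double-check the function against the docs for the Oban version you're on:

# Returns true on the node that currently owns the fresh row in oban_peers
Oban.Peer.leader?(Oban)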

In a similar vein, each queue's producer updates its row in the table to indicate that it's still alive. At the same time, it deletes any outdated producers that haven't been touched for a while. The producer record is essential to safely rescuing jobs with the DynamicLifeline, so the records must stay up to date.
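
If you want to watch those heartbeats yourself, a schemaless query over the columns visible in your logs will show them. Just a sketch for inspection, assuming your Repo from the config above:

import Ecto.Query

# One row per running queue; updated_at is the heartbeat the producer refreshes
Repo.all(
  from p in "oban_producers",
    select: %{name: p.name, queue: p.queue, updated_at: p.updated_at}
)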

These are all internal queries that are essential to how Oban, and Pro, operate. It’s possible to disable some of those queries with alternative configuration, but there’s nothing wrong with how it’s running currently :slightly_smiling_face:
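
For example, if your nodes are connected via distributed Erlang you could move peering off of Postgres, or disable leadership entirely on a single-node deployment. A rough sketch; the option names are per recent Oban versions, so check them against what you're running, and keep in mind that leader-only plugins won't run without a leader:

config :myapp, Oban,
  # Switch leadership to :global-based peering instead of the oban_peers table,
  # or set `peer: false` to disable leadership entirely
  peer: Oban.Peers.Global,
  engine: Oban.Pro.Queue.SmartEngine,
  repo: Repo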


I see now how it works, and that it's normal. We'll tune things if we need to. Thanks for the pointers.

Are the oban_producers queries run by each node, or just by the leader?

We've been seeing a spike in these queries recently. There are other things at play too, but we're trying to understand how much of the spike is “normal” and how much can be reduced.

Those are run by every queue on every node. If you're running more queues, then you'll see more of those queries.

Note that the recently released Pro v1.3 drastically reduces the number of queries that touch the oban_producers table.
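
And if you need to trim the volume before you can upgrade, one lever (assuming some of your nodes don't need to process jobs at all) is to run no queues on those nodes, since each running queue keeps its own producer row fresh:

# Runtime config for nodes that shouldn't process jobs, e.g. web-only nodes
config :myapp, Oban,
  queues: []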


We're already pretty far behind on versions: Oban is 2.12.1 and Pro is 0.11.1. We're planning to upgrade too.

Thanks for the help.