Oban: having many dynamic queues okay or a bad idea?

Working on it now that some major :ox::razor: is finished. Of course, there’ve been many changes behind the scenes to improve queue option validation and serialization, but the groundwork is there.

As soon as something is ready for testing, I’ll announce it here :+1:


Perfect, thank you so much!

Partitioned rate-limiting landed in Pro Smart Engine — Oban v2.9.0. Thanks for all the support! :yellow_heart:


Thank you so much! That looks great!

I found an interesting use case and am wondering if it's possible to support it with Oban.

Telegram messenger has the following rate limits:

  • 30 messages per second for different users
  • 1 message per second for one particular user

queues: [
  messages: [
    local_limit: 30,
    rate_limit: [allowed: 1, period: 1, partition: [fields: [:args], keys: [:user_id]]]
  ]
]

As far as I understand, the queue definition above won't guarantee that I stay within Telegram's overall rate limit (30 RPS) if I schedule 60 jobs with unique user_ids at once.

Thanks!


You’re correct: that queue definition won’t guarantee a multi-level rate limit. It would only satisfy the per-user limit. You could approximate the 30 messages per second limit by either:

  • Reducing your concurrency (lower the local_limit so that it’s less likely to work through 30 jobs in a second). Snooze for a second in the event that a request is rate limited by Telegram.
  • Using non-partitioned rate limiting (allowed: 30, period: 1) and sharding users between a handful of pre-defined queues. As long as users always end up in the same shard then they’ll never exceed the 1-per-second limit.

Neither option satisfies both constraints perfectly, but depending on your usage and priorities, it’ll get you close.
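For illustration, the two workarounds above could be sketched like this. The module names, the `MyApp.Telegram.send_message/2` client, and the shard count are assumptions for the example, not part of any real app:

```elixir
# Workaround 1: keep concurrency low and snooze when Telegram rate-limits us.
defmodule MyApp.MessageWorker do
  use Oban.Worker, queue: :messages

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => user_id, "text" => text}}) do
    case MyApp.Telegram.send_message(user_id, text) do
      {:ok, _response} -> :ok
      # Telegram returned 429; retry this job in a second.
      {:error, :rate_limited} -> {:snooze, 1}
    end
  end
end

# Workaround 2: shard users across a handful of pre-defined queues
# (each configured with a non-partitioned rate limit, as described above).
# The same user always hashes to the same queue, so their jobs stay ordered
# within one shard.
defmodule MyApp.MessageRouter do
  @shards 4

  def enqueue(user_id, text) do
    queue = :"messages_#{:erlang.phash2(user_id, @shards)}"

    %{user_id: user_id, text: text}
    |> MyApp.MessageWorker.new(queue: queue)
    |> Oban.insert()
  end
end
```

`:erlang.phash2/2` is deterministic, so a given `user_id` always routes to the same shard queue across nodes and restarts.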

Got it! Thank you very much!

Hi,

I would like to ask roughly the same question as the OP, since I know Oban has changed in the meantime. I have multiple tenants that can each be scaled to a dynamic number of workers. I also have a requirement that for a given tenant and thing-ID (I cannot disclose the “thing”, but it doesn’t matter, I guess), only one job can be executing at a time. Basically, the same logical worker would have to handle all jobs dispatched for that thing-ID.

I was wondering if I could create a queue with a single worker for each logical worker of the tenant. If Tenant Inc. (with id 123) has two logical workers, then I would create two queues named “123-0” and “123-1”. With the current state of polling and PG notifications in Oban, would that be too much overhead, and would it be too heavy on the database for 10, 100, or 1000 queues? I am not too worried about performance unless it is an order of magnitude slower.

Thank you.

You can definitely shard queues that way if you only have dozens of them, but it’s not ideal for hundreds or thousands of queues.

The next meaningful Pro release will have global partitioning built-in, which will give you the exact behavior you’re looking for. In the meantime, with the partitioned rate-limiter, you can fake it if you’re not concerned about processing speed.
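To sketch what that "faking it" might look like (the queue name and args keys here are assumptions, and this assumes jobs carry tenant_id and thing_id in their args): an allowed: 1 partitioned rate limit lets at most one job per {tenant_id, thing_id} pair start per period. Note this throttles job starts rather than truly serializing execution, so a job running longer than the period could still overlap with the next one.

```elixir
# config/config.exs — requires the Pro Smart Engine
config :my_app, Oban,
  engine: Oban.Pro.Queue.SmartEngine,
  queues: [
    things: [
      local_limit: 20,
      # At most one job per {tenant_id, thing_id} partition starts per period.
      rate_limit: [
        allowed: 1,
        period: 5,
        partition: [fields: [:args], keys: [:tenant_id, :thing_id]]
      ]
    ]
  ]
```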


That is good to hear, thank you.

Quoting myself

I also have a requirement that for a given tenant and thing-ID (I cannot disclose the “thing” but it does not matter I guess), only one job can be executing at a time.

I am trying hard to get rid of that requirement anyway, because it is the one constraint that made every solution impractical (Oban, RabbitMQ, or even a simple GenStage setup).