Ensure fairness in shared Oban queue

Hey there,

We have a setup where multiple customers are sharing a single queue to execute webhook jobs.
This leads to situations where one customer can clog up the queue when their webhooks are failing.

Is there a way to ensure “fairness” to resources in such a case?
We tried global limits keyed on a unique job arg per customer, but the problem is that when resources are underutilized a customer is still capped at their configured limit.
Would separate queues per customer per workload be the way here?
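
For context, the per-customer global limit we tried looks roughly like this (a sketch assuming Oban Pro's Smart engine with a partitioned `global_limit`; `MyApp` and the `customer_id` args key are our own names):

```elixir
# config/config.exs — cap each customer's concurrent webhook jobs.
# The partition keys off the `customer_id` field in each job's args,
# so any single customer runs at most 5 jobs across all nodes.
config :my_app, Oban,
  engine: Oban.Pro.Engines.Smart,
  queues: [
    webhooks: [
      local_limit: 20,
      global_limit: [allowed: 5, partition: [fields: [:args], keys: [:customer_id]]]
    ]
  ]
```

This does contain a noisy customer, but it also caps every customer at 5 even when the queue is otherwise idle.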

Thank you

There’s no built-in way to ensure “fairness” while also allowing bursting when more capacity is available. That, along with local partitioning, is something we’ll be exploring for Pro v1.6, but that’s a long way off.

Using partitioned queues for various workloads is a potential middle ground. Or, if there are observable events you can trigger on, you could change the limits at runtime (and use DynamicQueues to persist those changes between restarts).
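
Changing limits at runtime could look something like this sketch (assumes `MyApp.Oban` is your Oban instance name and `:webhooks` is the shared queue; `Oban.scale_queue` is the core API, and running the DynamicQueues plugin should persist the change across restarts):

```elixir
# When an observable event fires (e.g. a customer's failure rate spikes),
# shrink the shared queue's concurrency at runtime:
Oban.scale_queue(MyApp.Oban, queue: :webhooks, limit: 5)

# Later, once things recover, burst back up to full capacity:
Oban.scale_queue(MyApp.Oban, queue: :webhooks, limit: 50)
```

With DynamicQueues in your plugin list, e.g. `{Oban.Pro.Plugins.DynamicQueues, queues: [webhooks: [limit: 50]]}`, the scaled limit survives a restart instead of reverting to the compile-time config.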