A new article showcasing some of Oban's and Oban Pro's distinct agentic workflow abilities.
Nice post Parker! Is the theme you used for the code snippets publicly available? I really like it!
Also, loving the new https://getoban.pro site/design!!
The markdown engine we're using is mdex, which is Rust-backed, and it uses autumnus for syntax highlighting. It has an extensive set of built-in themes, and we're using the nord theme because it matches the site design.
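For anyone curious, the setup might look roughly like this. Treat it as a sketch: the exact option names (`syntax_highlight`, `formatter`, the variable names) are assumptions that depend on your MDEx version, so check the docs for the release you're on.

```elixir
# Hypothetical sketch: render Markdown to HTML with MDEx, asking its
# Autumnus-backed highlighter to use the built-in "nord" theme.
# Option names may differ across MDEx versions -- verify against the docs.
markdown = """
Here is some Elixir:

    defmodule Demo do
      def hello, do: :world
    end
"""

html =
  MDEx.to_html!(markdown,
    syntax_highlight: [formatter: {:html_inline, theme: "nord"}]
  )
```

The inline formatter embeds theme colors directly in the generated HTML, which keeps the highlighted snippets self-contained with no extra stylesheet to ship.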
Thank you! @sorenone and I had a blast building it together
Thanks, @AstonJ ! We really wanted to evoke a sense of happiness when you saw the redesign.
It does that really well
I also like how it reflects both of your personalities too (kinda bubbly/full of life - as per one of the video interviews I saw of you both a while back)
If the site is built in Phoenix/Elixir, how you built/approached it could make a good blog post too…
Thanks Parker - that’s a nice theme!
Slightly off topic, but what kind of scale has everyone achieved with Oban? We wanted to give it a try but our requirements are a bit … extreme .. and we couldn’t build a Postgres anywhere near big enough for it.
Rolled our own on top of mongo for the moment.
A bit off topic, but happy to share based on the information we’ve received. There are Oban Pro customers that report running 100 million+ jobs a day for a single app. We’ve pushed over 1 billion in a day on good hardware with some custom pruning.
What do you consider extreme?
Around 1.5 million a second? Although we are looking for durable execution at a slightly finer level than jobs, kinda like temporal, but that wouldn't scale anywhere near that either.
As a reference, my Rust crypto mini-career some years ago involved a single k8s pod (2 vCPU & 4GB RAM) ingesting ~150K events / sec from Kafka and putting them in InfluxDB. We used every trick in the book short of going back to C (which would net us no more than 1% extra speed anyway) and couldn't do more than ~170K events / sec, at which point the code became ugly and difficult to maintain.
We had to have ~50 such nodes.
Granted that was back in 2020; nowadays you can likely achieve 5x that on a single affordable beefy-ish node, but 1.5M jobs / sec seems like something you need to specifically engineer for.
I was wondering how long it would take you two to pitch ObanPro for agentic workflows – apparently not long at all. Congratulations!
It’s a pretty good marketing pitch, I liked it.
I’d argue that we did that back in April, with the original cascading workflows article
Thanks, that’s appreciated! It’s part of our broader “Oban for AI” angle.
We are pretty much there with a large (sharded) mongo cluster handling history events. The actual job processing happens on ~500 external JVMs; this is just scheduling/retry/durability really, which isn't all that heavy, just extremely concurrent. I swear the BEAM feels like a superpower sometimes, although it is exactly the right fit for this component, which helps. :}
Using the BEAM is a superpower already.