Hi!
I was wondering if there was a way to hook into the oban flows. The purpose is to create a plugin like thing that can handle data for oban jobs separate from the oban jobs table.
Thanks,
Eli
Can you give some more details about your use case? There may be a way to accomplish it now between `c:Workflow.new/2`, `c:Oban.Pro.Worker.after_process/3`, and the upcoming `c:Oban.Pro.Worker.before_process/1`.
I want to build a library that extends the Oban API to handle large amounts of data. Currently, putting everything into `args` makes inserting slow when the args are large, because of the unique index. Ideally I'd get it to a point where I can write something like

```elixir
Job.new(%{data: large_payload})
|> Oban.insert()
```

where `data` is handled separately, whether that's S3, another table, or what have you. In the worker's process callback we'd get the same data back:

```elixir
def process(%{data: large_payload}), do: # whatever
```

The post-run hook is so cleanup can be done.
Between `new/2` and `after_process/3` there are enough hook points to accomplish what you're after:
```elixir
defmodule MyApp.Worker do
  use Oban.Pro.Worker

  args_schema do
    field :data_uuid, :uuid
  end

  @impl Oban.Worker
  def new(%{data: data}, opts) do
    uuid = Oban.Pro.UUIDv7.generate()

    MyApp.upload_object(uuid, data)

    super(%{data_uuid: uuid}, opts)
  end

  @impl Oban.Pro.Worker
  def after_process(:complete, %{args: %{data_uuid: uuid}}, _result) do
    MyApp.delete_object(uuid)

    :ok
  end

  def after_process(_state, _job, _result), do: :ok

  @impl Oban.Pro.Worker
  def process(%{args: %{data_uuid: uuid}}) do
    {:ok, data} = MyApp.get_object(uuid)

    # ...
  end
end
```
It’s not fully automatic, but it should do the job.
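The worker above calls `MyApp.upload_object/2`, `MyApp.get_object/1`, and `MyApp.delete_object/1` without showing them. A minimal sketch of that storage layer, assuming an in-memory `Agent` stands in for S3 or a side table (the `MyApp.ObjectStore` module and its function names are hypothetical, not part of Oban):

```elixir
defmodule MyApp.ObjectStore do
  @moduledoc """
  In-memory stand-in for external payload storage (S3, a side table, etc.).
  Keyed by the UUID stored in the job's args.
  """

  use Agent

  # Start the store with an empty map as state.
  def start_link(_opts \\ []) do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # Store a payload under the given uuid.
  def upload_object(uuid, payload) do
    Agent.update(__MODULE__, &Map.put(&1, uuid, payload))
  end

  # Fetch the payload back, returning {:ok, payload} or {:error, :not_found}.
  def get_object(uuid) do
    case Agent.get(__MODULE__, &Map.fetch(&1, uuid)) do
      {:ok, payload} -> {:ok, payload}
      :error -> {:error, :not_found}
    end
  end

  # Remove the payload once the job completes.
  def delete_object(uuid) do
    Agent.update(__MODULE__, &Map.delete(&1, uuid))
  end
end
```

Swapping the Agent calls for `ExAws.S3` requests or an Ecto schema keeps the worker code unchanged, since it only depends on these three functions.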