How to approach OBAN Pro testing workflow with 3 workers?

Good day,

I have a specific workflow with, for example, workers A, B, and C. When worker A executes successfully, it records results which then have to be used by workers B and C.
Now I want to unit test workers B and C. How would you advise going about that? When I run perform_job in my unit test, where will it get those recorded results, since an Oban unit test shouldn't be hitting the DB? How do I make it pass?

If a job is relying on the output of another, then an integration test may be appropriate. You can test all of the jobs together with run_workflow/2:

assert %{completed: 3} =
         Workflow.new()
         |> Workflow.add(:a, MyFlow.new(%{}))
         |> Workflow.add(:b, MyFlow.new(%{}), deps: [:a])
         |> Workflow.add(:c, MyFlow.new(%{}), deps: [:b])
         |> run_workflow()

If the logic in those jobs is complex or the test requires too much setup, you can inject data through the args. The trick is using args_schema with the :term type, so you can stub any value:

defmodule MyApp.MyFlow do
  use Oban.Pro.Workers.Workflow

  args_schema do
    field :id, :id
    field :result, :term
  end

  @impl true
  def process(job) do
    {:ok, result} = fetch_lazy(job)
    ...
  end

  # No stubbed result in the args: fetch the recorded value from the
  # first upstream dependency job.
  defp fetch_lazy(%{args: %{result: nil}} = job) do
    job
    |> Workflow.all_jobs(only_deps: true)
    |> List.first()
    |> fetch_recorded()
  end

  # A result was injected through the args (e.g. stubbed in a test).
  defp fetch_lazy(%{args: %{result: result}}), do: {:ok, result}
end

Then use perform_job/3 as you normally would in your test:

assert {:ok, _} = perform_job(MyFlow, %{id: 123, result: {:some, :thing}})

I don’t quite understand what you mean by saying “Oban Unit test shouldn’t be hitting the DB” - what’s wrong with that?

Anyway, in Surfer, we ended up doing the following:

  1. We insert a completed dependency job (A) into the database, with all the meaningful meta fields pre-filled (e.g. workflow_id, name, recorded & return) where the return is manually set to Oban.Pro.Utils.encode64(whatever).
  2. We insert the tested job (B) with the same meta.workflow_id and meta.deps = [job_a_name].
  3. We run drain_jobs. I guess perform_job would also do the trick here. Job B will then have access to the recorded results from job A when it executes.
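Concretely, the three steps above could be sketched like this. The worker modules, queue name, and return value are placeholders, and the meta keys simply follow the ones listed above, so treat this as a rough sketch rather than exact Oban Pro internals:

```elixir
# Step 1: insert dependency job A as already completed, with the meta
# fields pre-filled the way Oban Pro would have stored them, including
# the base64-encoded recorded return.
workflow_id = Ecto.UUID.generate()

Oban.insert!(
  MyWorkerA.new(%{},
    state: "completed",
    meta: %{
      "workflow_id" => workflow_id,
      "name" => "job_a",
      "recorded" => true,
      "return" => Oban.Pro.Utils.encode64(:baz)
    }
  )
)

# Step 2: insert the job under test with the same workflow_id,
# pointing at job A through its deps.
Oban.insert!(
  MyWorkerB.new(%{foo: :bar},
    meta: %{"workflow_id" => workflow_id, "deps" => ["job_a"]}
  )
)

# Step 3: drain the queue so job B executes and can fetch A's
# recorded result from the sandboxed database.
Oban.drain_queue(queue: :default)
```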

After wrapping it in some helpers, you can end up with something like this:

%{foo: :bar}
|> MyWorkerB.new()
|> run_with_deps(
  job_a: %{return: :baz}
)


Hi, you said you don't understand what is meant by "Oban unit test shouldn't be hitting the DB". I saw it on the hex docs. See attached screenshot.

The discrepancy seems to be around the term “unit”. Pure tests that don’t touch the db are a fine ideal, but the sandbox makes db interaction so quick and isolated that it’s not always worth it.

I’d argue that a workflow test, especially one that relies on fetching results of another job, should be tested with the database involved.


Thanks for confirming it, I always use manual testing mode and almost never the inline one.
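For reference, the two modes are chosen in the test config; the app and repo names here are placeholders:

```elixir
# config/test.exs
import Config

# Oban's two testing modes:
#   :manual - jobs are inserted into the database but not executed
#             until drained, so you can assert on and drain them,
#   :inline - jobs execute immediately in the calling process.
config :my_app, Oban,
  repo: MyApp.Repo,
  testing: :manual
```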

Hi @sorentwo, thank you for the response. In what situation would you use (or recommend) the Oban unit test helpers?

I always thought of it this way: these unit-test Oban helpers just call the worker's process/perform directly, without inserting the job into the database to execute it, as Oban would "normally" do.

But regardless of this behaviour, if the worker code (the implementation in process/perform) itself has to reach the database - and it does if it tries to read the results of a job it depends on - that's something different and 100% legitimate.
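To illustrate the distinction (module and repo names are borrowed from the examples above, so treat them as assumptions):

```elixir
# Oban.Testing's perform_job/3 builds and validates a job struct, then
# invokes the worker callback directly - no row is inserted into
# oban_jobs for the job under test.
use Oban.Testing, repo: MyApp.Repo

# Any Repo calls made inside process/1 itself (such as reading a
# dependency's recorded result) still run against the sandboxed test
# database, which is the legitimate part described above.
assert {:ok, _} = perform_job(MyFlow, %{id: 123, result: {:some, :thing}})
```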


In this situation I would integration test the workflow (as described above).