Oban Pro DynamicCron configured at runtime doesn't seem to work

I moved my Oban config from config.exs to runtime.exs because I need to load a different configuration depending on which release I’m generating.

After that move, my cron jobs stopped running entirely.

Here is my config:

defmodule Core.Config.Oban do
  @moduledoc false

  alias Core.Workers.{ProcessPropertyViewCountWorker, ExpireShortUrl}

  import Config

  def load!(:marketplace) do
    plugins =
      default_plugins() ++
        [
          {Oban.Pro.Plugins.DynamicCron,
           crontab: [
             {"@hourly", ProcessPropertyViewCountWorker, args: %{interval: "1h"}},
             {"@daily", ExpireShortUrl}
           ]}
        ]

    config :core, Oban,
      plugins: plugins,
      queues: [
        sms: 5,
        email: 10,
        process_images: 3,
        expire_short_url: 2,
        remove_images: 1,
        property_view: 1,
        property_over_under_paid: 10,
        hubspot_appointment: 1
      ]
  end

  def load!(:pacman) do
    config :core, Oban,
      plugins: default_plugins(),
      queues: [
        process_data: 1,
        process_raw_properties: 10,
        process_raw_records: 10
      ]
  end

  # Default to marketplace when running on local
  def load!(nil), do: load!(:marketplace)

  defp default_plugins do
    [
      {Oban.Pro.Plugins.DynamicLifeline, timeout: :infinity},
      {Oban.Pro.Plugins.DynamicPruner,
       state_overrides: [
         completed: {:max_len, 10_000},
         cancelled: {:max_age, :infinity},
         discarded: {:max_age, :infinity}
       ],
       timeout: :timer.hours(10)}
    ]
  end
end

As you can see, when the release is :marketplace it adds DynamicCron to the config with a couple of crontab entries.
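
For completeness, this is roughly how the module is called from runtime.exs. The RELEASE_NAME handling below is paraphrased for illustration, not the exact code:

import Config

# Paraphrased runtime.exs: derive the release atom from the RELEASE_NAME
# environment variable that mix release sets, then load the matching config.
release =
  case System.get_env("RELEASE_NAME") do
    "marketplace" -> :marketplace
    "pacman" -> :pacman
    _ -> nil
  end

Core.Config.Oban.load!(release)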

Here is what DynamicCron.all/0 returns:

iex(marketplace@node1.backend.core)1> Oban.Pro.Plugins.DynamicCron.all
[
  %Oban.Pro.Cron{
    __meta__: #Ecto.Schema.Metadata<:loaded, "public", "oban_crons">,
    name: "Core.Workers.ProcessPropertyViewCountWorker",
    expression: "@hourly",
    worker: "Core.Workers.ProcessPropertyViewCountWorker",
    opts: %{"args" => %{"interval" => "1h"}},
    paused: false,
    insertions: [~U[2024-05-06 03:00:00.133548Z],
     ~U[2024-05-06 02:00:00.807445Z], ~U[2024-05-06 01:00:00.546187Z],
     ~U[2024-05-06 00:00:00.245858Z], ~U[2024-05-05 23:00:00.958641Z],
     ~U[2024-05-05 22:00:00.703080Z], ~U[2024-05-05 21:00:00.419005Z],
     ~U[2024-05-05 20:00:00.159163Z], ~U[2024-05-05 19:00:00.832437Z],
     ~U[2024-05-05 18:00:00.552334Z], ~U[2024-05-05 17:00:00.315184Z],
     ~U[2024-05-05 16:00:00.985450Z], ~U[2024-05-05 15:00:00.732213Z],
     ~U[2024-05-05 14:00:00.480575Z], ~U[2024-05-05 13:00:00.163874Z],
     ~U[2024-05-05 12:00:00.836540Z], ~U[2024-05-05 11:00:00.503022Z],
     ~U[2024-05-05 10:00:00.225494Z], ~U[2024-05-05 09:00:00.893348Z],
     ~U[2024-05-05 08:00:00.651463Z], ~U[2024-05-05 07:00:00.365975Z],
     ~U[2024-05-05 06:00:00.067545Z], ~U[2024-05-05 05:00:00.756578Z],
     ~U[2024-05-05 04:00:00.464613Z], ~U[2024-05-05 03:00:00.178809Z],
     ~U[2024-05-05 02:00:00.890213Z], ~U[2024-05-05 01:00:00.612366Z],
     ~U[2024-05-05 00:00:00.293341Z], ~U[2024-05-04 23:00:00.991272Z],
     ~U[2024-05-04 22:00:00.735534Z], ~U[2024-05-04 21:00:00.439186Z],
     ~U[2024-05-04 20:00:00.191324Z], ~U[2024-05-04 19:00:00.885261Z],
     ~U[2024-05-04 18:00:00.646027Z], ~U[2024-05-04 17:00:00.385363Z],
     ~U[2024-05-04 16:00:00.109405Z], ~U[2024-05-04 15:00:00.761237Z],
     ~U[2024-05-04 14:00:00.482335Z], ~U[2024-05-04 13:00:00.066512Z],
     ~U[2024-05-04 12:00:00.754370Z], ~U[2024-05-04 11:00:00.378746Z],
     ~U[2024-05-04 10:00:00.103423Z], ...],
    lock_version: 2,
    parsed: nil,
    inserted_at: ~U[2024-04-16 00:34:00.051817Z],
    updated_at: ~U[2024-04-16 00:34:00.051817Z]
  },
  %Oban.Pro.Cron{
    __meta__: #Ecto.Schema.Metadata<:loaded, "public", "oban_crons">,
    name: "Core.Workers.ExpireShortUrl",
    expression: "@daily",
    worker: "Core.Workers.ExpireShortUrl",
    opts: %{},
    paused: false,
    insertions: [~U[2024-05-06 00:00:00.245858Z],
     ~U[2024-05-05 00:00:00.293341Z], ~U[2024-05-04 00:00:00.095649Z],
     ~U[2024-05-03 00:00:00.529412Z], ~U[2024-05-02 00:00:00.702775Z],
     ~U[2024-05-01 00:00:00.933030Z], ~U[2024-04-30 00:00:00.478345Z],
     ~U[2024-04-29 00:00:00.512367Z], ~U[2024-04-28 00:00:00.877888Z],
     ~U[2024-04-27 00:00:00.490409Z], ~U[2024-04-26 00:00:00.811880Z],
     ~U[2024-04-25 00:00:00.476959Z], ~U[2024-04-24 00:00:00.775340Z],
     ~U[2024-04-23 00:00:00.176549Z], ~U[2024-04-22 00:00:00.993786Z],
     ~U[2024-04-21 00:00:00.715989Z], ~U[2024-04-20 00:00:00.894875Z],
     ~U[2024-04-19 00:00:00.785755Z], ~U[2024-04-18 00:00:00.085533Z],
     ~U[2024-04-17 00:00:00.544110Z]],
    lock_version: 2,
    parsed: nil,
    inserted_at: ~U[2024-04-16 00:34:00.053784Z],
    updated_at: ~U[2024-04-16 00:34:00.053784Z]
  }
]

What am I doing wrong here?

OK, I found the issue: I had another node picking up all the jobs and running them there. Removing that node fixed it.
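
For anyone else debugging this: you can check from iex whether a given node currently holds Oban leadership (and is therefore the one running the plugins). Oban.Peer.leader?/1 ships with Oban itself, though the exact arity may differ between versions:

# Run on each connected node that shares the same Oban database;
# only the current leader, i.e. the node running the plugins, returns true.
Oban.Peer.leader?(Oban)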

This is a common problem in hybrid setups where some nodes run jobs and others don’t. It’s common enough that there’s a section about it in the troubleshooting guide.

In short, there are two options:

  1. Run plugins on all nodes, even on the ones that aren’t executing jobs. Most plugins only run on a single leader node, and they add minimal load.
  2. Disable plugins on nodes that shouldn’t run them with plugins: false, or make sure such a node can’t become the leader with peer: false (see the sketch below).
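
A rough sketch of both options for a node that shouldn’t insert or execute cron jobs. The :core app name comes from the config above; the plugin list is illustrative, repo and other base options are omitted, and you’d pick one option rather than both:

# Option 1: run the plugins everywhere and rely on leadership; only the
# leader actually inserts cron jobs, so the extra load is minimal.
config :core, Oban,
  plugins: [Oban.Pro.Plugins.DynamicLifeline, Oban.Pro.Plugins.DynamicCron],
  queues: false

# Option 2: keep this node out of plugin work and leadership entirely.
config :core, Oban,
  plugins: false,
  peer: false,
  queues: false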