Simple Job Queue - limiting my app to executing a single job at a time

Lately I’ve had trouble with a seemingly simple task: limiting my app to executing a single job at a time. A few days ago I managed to implement this using Oban, and everything went smoothly on my local machine. However, when I deployed to Heroku I did not expect the application to behave so strangely because of the three 2X dynos I currently use. Since that implementation failed once deployed, I need to replace it with something that does exactly that and nothing else. I will really miss all the features Oban has, but I just need to execute a single job at once; even with multiple requests, those will have to be enqueued until the running job is done before a new one can proceed. Do you guys know of something that can help with my current situation? BTW, I’ll have to stick with Heroku for now. I wish I could deploy on Gigalixir and keep the Oban implementation, and I’m sure it would work on a regular instance, but for now I don’t have other deployment options.

I am not an expert with Oban, but what about setting the maximum concurrency of your queue to 1 as explained here?
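
For reference, that per-queue limit is just an integer in the Oban config. A minimal sketch, assuming the app is called :my_app and uses a single :default queue:

# config/config.exs
config :my_app, Oban,
  repo: MyApp.Repo,
  # at most one job from the :default queue runs at a time (per node)
  queues: [default: 1]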

Sorry, I edited this, I realized that concurrency limits are per node. My bad :slight_smile:

2 Likes

I think you should be able to nominate one of the dynos to run the Oban worker that consumes from the queue, while the other dynos don’t run the worker process.

3 Likes

Is that even possible? :astonished:

Create a new Heroku app:

  1. Set an ENV, e.g. OBAN_WORKER => true.
  2. Change your code to only launch the Oban worker if said env variable is true.
  3. Copy the other ENVs, share the DB on Heroku, etc., and deploy your code to the new Heroku app.
  4. Make sure you only scale to ONE instance…

Yes and no. You do have the $DYNO env variable (Dynos and the Dyno Manager | Heroku Dev Center), but multiple instances can briefly share a name like web.1 during a deploy or reboot…
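
For illustration, reading that variable is just an env lookup (though, as noted above, dyno names are not a reliable way to pick a single global worker):

# Heroku sets DYNO to values like "web.1" or "worker.1"; the same name can
# briefly exist twice during a deploy or restart, so don't treat it as a lock.
System.get_env("DYNO")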

I thought Heroku had built in support for independently scaled worker processes without needing a whole new app? https://devcenter.heroku.com/articles/background-jobs-queueing#process-model

1 Like

yeah @mbuhot - just found that one - never used it…

@tovarchristian21 try adding a worker: entry to your Procfile, e.g.
web: MIX_ENV=prod mix phx.server
worker: MIX_ENV=prod OBAN_WORKER=true mix phx.server

change the code so the Oban worker only starts when OBAN_WORKER == true

and scale with heroku ps:scale web=3 worker=1

The documentation for “Unique Jobs” seems promising. Maybe enforcing uniqueness for your job only in the :executing state would do what you’re looking for?

+1 to what @sorentwo said; I read the documentation sideways.

With the uniqueness setting you can definitely guarantee that only one job executes globally. Limit it to state: [:executing] and fields: [:worker] with a short period, like 60 seconds.
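
As a sketch, those unique options sit on the worker definition; the module name and queue here are made up:

defmodule MyApp.SingletonWorker do
  # Uniqueness as suggested above: at most one job for this worker module may
  # be in the :executing state within a 60 second window.
  use Oban.Worker,
    queue: :default,
    unique: [period: 60, states: [:executing], fields: [:worker]]

  @impl Oban.Worker
  def perform(%Oban.Job{args: _args}) do
    # the actual long-running work goes here
    :ok
  end
end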

I take that back. It would only ensure that you couldn’t insert new jobs while others were executing. As I mentioned in the other thread, if you want a truly global queue you need to control how many nodes start the queue, like @outlog suggested.

Can you please go a bit deeper into the answer? Apparently the Heroku documentation only covers a few languages: https://devcenter.heroku.com/articles/background-jobs-queueing. I understand now that there are also workers defined exclusively for long-running processes or expensive tasks (like the ones I need); however, how can I specify that the worker is the one that accepts jobs from the queue? That part is unclear to me… or is it an Oban configuration that I’ll have to do?

Using the example that @outlog posted, you’ll end up with two dyno types: web and worker. They will run the exact same code, but the goal is to have only the worker run your jobs.

This is a variant on splitting-queues where you either run a queue or you don’t. Here’s how you’d define it in your application.ex:

defmodule MyApp.Application do
  @moduledoc false

  use Application

  alias MyApp.Repo
  alias MyAppWeb.Endpoint

  def start(_type, _args) do
    children = [
      Repo,
      Endpoint,
      {Oban, oban_config()}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

  # On nodes that aren't the dedicated worker, strip the cron schedule and the
  # queues so they can still insert jobs but never execute them.
  defp oban_config do
    opts = Application.get_env(:my_app, Oban)

    if worker_enabled?() do
      opts
    else
      opts
      |> Keyword.put(:crontab, false)
      |> Keyword.put(:queues, false)
    end
  end

  # OBAN_WORKER is only set to "true" on the worker dyno (see the Procfile above)
  defp worker_enabled? do
    System.get_env("OBAN_WORKER") == "true"
  end
end

5 Likes

That was exactly what I needed, thank you very much. The key on my end was making sure :queues is set to false when worker_enabled?/0 returns false. Aside from that, I never realized each node ran the same supervision code; it was completely transparent to me. Thanks to everyone who helped me out! :fist_right:t2::fist_left:t2:

2 Likes

Thanks, that is really useful and clean.

What will Oban do if it is started without queues and without the crontab? Could we just not start it at all?

That’s perfectly fine. Queues and plugins aren’t required for Oban to start. Separately, crontab was deprecated many versions ago and simply translates into the Cron plugin.
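
For anyone following along, that translation looks roughly like this (a sketch; the schedule and worker module are made up):

# Old style (deprecated): crontab as a top-level Oban option
config :my_app, Oban,
  crontab: [{"0 * * * *", MyApp.SingletonWorker}]

# Current style: the same schedule carried by the Cron plugin
config :my_app, Oban,
  plugins: [{Oban.Plugins.Cron, crontab: [{"0 * * * *", MyApp.SingletonWorker}]}]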

Yes but I mean why would you want to start it then?

There are a few possible reasons. Here are a few off the top of my head:

  1. Maybe you want to run the web dashboard without executing jobs.
  2. Maybe you want to dynamically start queues.
  3. Maybe you want to insert jobs without running any queues (the second and third cases are sketched below).
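
A rough sketch of those two cases, reusing the hypothetical MyApp.SingletonWorker from the earlier sketch (the queue name and limit are also assumptions):

# 2. Start a queue at runtime on a node that booted with queues: false
Oban.start_queue(queue: :default, limit: 1)

# 3. Insert a job without running any queues locally; a node that does run
#    the :default queue will pick it up
%{id: 123}
|> MyApp.SingletonWorker.new()
|> Oban.insert()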

Oh right yes. I was assuming “workers” were just queue runners but that makes sense.

Slightly off topic, but I am interested in inserting jobs from other programs, which I believe is supported. If I read the code correctly, using Oban.insert requires Oban to be started, since that is what provides telemetry and support for custom “engines”. Inserting from another app is roughly equivalent to using Repo.insert, correct?

Using Oban.insert does require a running Oban instance. You can insert jobs freely with Repo.insert, but you won’t get telemetry, unique support, dynamic repos, or automatic prefixing.
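
For the Repo.insert path specifically, a sketch of what that looks like from another app (module names are assumptions; Oban.Job.new/2 just builds an Ecto changeset):

# Builds a plain %Ecto.Changeset{} for the oban_jobs table; inserting it this
# way skips telemetry, uniqueness checks, dynamic repos, and prefixing.
%{id: 123}
|> Oban.Job.new(worker: "MyApp.SingletonWorker", queue: "default")
|> MyApp.Repo.insert()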

1 Like