Multiple jobs accessing the database

We have three Docker instances of our application and one MySQL database. We also have background jobs running in each instance which read from the database.
Having three instances means three background jobs reading from the database, and if one job picks a row from the database, we want the other jobs to skip it.

We tried to mitigate this issue with an optimistic lock (using lock_version in Ecto). We have a job_status column which is set once the job begins; if there is a lock_version mismatch, the update fails, indicating that another job has already started on the given row.

    try do
      row
      |> Ecto.Changeset.change(%{status: "job_started"})
      |> Ecto.Changeset.optimistic_lock(:lock_version)
      |> Repo.update!()
      |> start_job()
      |> update_status(%{status: "job_finished"})
    rescue
      Ecto.StaleEntryError ->
        # Job was started in another process.
        :ignore
    end

Is there a better way of handling this?

MySQL: locking reads

Given the above information, within a short transaction I'd fetch a single job through a SELECT with a pessimistic lock (i.e. SELECT ... FOR UPDATE) and update its status.
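A minimal sketch of that approach in Ecto, assuming a Job schema with a status field and MySQL 8.0+ (which additionally supports SKIP LOCKED, so concurrent readers skip rows another transaction has already claimed instead of blocking on them):

    import Ecto.Query

    def claim_next_job do
      Repo.transaction(fn ->
        job =
          Job
          |> where([j], j.status == "pending")
          |> limit(1)
          |> lock("FOR UPDATE SKIP LOCKED")
          |> Repo.one()

        case job do
          # No unclaimed work left.
          nil ->
            nil

          # The row lock is held until the transaction commits, so only
          # this process can flip the status.
          job ->
            job
            |> Ecto.Changeset.change(%{status: "job_started"})
            |> Repo.update!()
        end
      end)
    end

Keep the transaction short (claim the row, update the status, commit) and run the actual job outside it, so the row lock isn't held for the job's full duration.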

You wouldn't be having this problem if you could re-architect the background processing, e.g. one process going through the table and assigning work to available workers, getting rid of the concurrency issue altogether.
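A minimal sketch of that single-dispatcher idea, assuming hypothetical fetch_pending_jobs/0 and run_job/1 helpers and a Task.Supervisor named JobTaskSupervisor. If only this one process reads pending rows (running on a single instance, or registered globally across the cluster), no two workers can pick up the same row:

    defmodule JobDispatcher do
      use GenServer

      @poll_interval :timer.seconds(5)

      def start_link(opts) do
        GenServer.start_link(__MODULE__, opts, name: __MODULE__)
      end

      @impl true
      def init(_opts) do
        schedule_poll()
        {:ok, %{}}
      end

      @impl true
      def handle_info(:poll, state) do
        # Single reader: hand each pending row to a supervised task,
        # so concurrent claims can never happen.
        for job <- fetch_pending_jobs() do
          Task.Supervisor.start_child(JobTaskSupervisor, fn -> run_job(job) end)
        end

        schedule_poll()
        {:noreply, state}
      end

      defp schedule_poll do
        Process.send_after(self(), :poll, @poll_interval)
      end
    end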
