Oban: Jason.Encoder not implemented for %Bamboo.Email

I’d like to queue the delivery of an email via Bamboo, but this code results in a Protocol.UndefinedError because Oban doesn’t know how to encode the %Bamboo.Email{} struct that contains the email body etc. Am I doing this correctly? If I am, I don’t understand why Oban needs to encode the data it’s about to queue. Wouldn’t every struct be unique, and shouldn’t there be a way to handle this in the general case?

caller code:

defmodule Elijah.Emails do
  import Bamboo.Email
  use Bamboo.Phoenix, view: ElijahWeb.EmailView

  alias Elijah.Accounts.Jobs.WelcomeEmail

  @sender_no_reply {"Elijah", "no-reply@elijah.app"}
  @supported_locales ~w(en)

  def welcome_email(%{user: user, url: url}) do
    base_email()
    |> subject("Elijah - Welcome!")
    |> to(user.email)
    |> assign(:user, user)
    |> assign(:url, url)
    |> render_i18n(:welcome_email)
    |> premail()
    |> WelcomeEmail.new()
    |> Oban.insert()
  end

...

Oban job:

defmodule Elijah.Accounts.Jobs.WelcomeEmail do
  use Oban.Worker, queue: :mailer, max_attempts: 4

  @impl Oban.Worker
  def perform(email, _job) do
    # IO.inspect(email)
    email
    |> Elijah.Mailer.deliver_now()
    :ok
  end
end

stack:

        (jason 1.2.1) lib/jason.ex:199: Jason.encode_to_iodata!/2
        (postgrex 0.15.5) lib/postgrex/type_module.ex:897: Postgrex.DefaultTypes.encode_params/3
        (postgrex 0.15.5) lib/postgrex/query.ex:75: DBConnection.Query.Postgrex.Query.encode/3
        (db_connection 2.2.2) lib/db_connection.ex:1148: DBConnection.encode/5
        (db_connection 2.2.2) lib/db_connection.ex:1246: DBConnection.run_prepare_execute/5
        (db_connection 2.2.2) lib/db_connection.ex:539: DBConnection.parsed_prepare_execute/5
        (db_connection 2.2.2) lib/db_connection.ex:532: DBConnection.prepare_execute/4
        (postgrex 0.15.5) lib/postgrex.ex:214: Postgrex.query/4
        (ecto_sql 3.4.5) lib/ecto/adapters/sql.ex:630: Ecto.Adapters.SQL.struct/10
        (ecto 3.4.6) lib/ecto/repo/schema.ex:661: Ecto.Repo.Schema.apply/4
        (ecto 3.4.6) lib/ecto/repo/schema.ex:263: anonymous fn/15 in Ecto.Repo.Schema.do_insert/4
        (ecto_sql 3.4.5) lib/ecto/adapters/sql.ex:875: anonymous fn/3 in Ecto.Adapters.SQL.checkout_or_transaction/4
        (db_connection 2.2.2) lib/db_connection.ex:1427: DBConnection.run_transaction/4
        (oban 2.1.0) lib/oban/query.ex:47: Oban.Query.fetch_or_insert_job/2
        (elijah 0.1.0) lib/elijah/accounts/user_notifier.ex:17: Elijah.Accounts.UserNotifier.deliver_confirmation_instructions/2
        (elijah 0.1.0) lib/elijah_web/controllers/user_registration_controller.ex:17: ElijahWeb.UserRegistrationController.create/2
        (elijah 0.1.0) lib/elijah_web/controllers/user_registration_controller.ex:1: ElijahWeb.Us (truncated)

thanks for any help,
Michael

Note that

  • Oban stores the job args in the DB as JSON, so that the worker can retrieve them later
  • Jason refuses to convert a struct to JSON.

So you just need to convert the struct to a map. Note that if your struct comes from an Ecto schema, it also carries internal state and metadata for Ecto…
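
For illustration (the field names and the MyWorker module are my assumptions, not from the post), turning a struct into plain, JSON-friendly args before enqueueing might look like this:

# MyWorker is a hypothetical Oban worker.
args =
  user
  |> Map.from_struct()        # drops __struct__ (Ecto's __meta__ is still present)
  |> Map.take([:id, :email])  # whitelist only the plain values the worker needs

args
|> MyWorker.new()
|> Oban.insert()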

There are two approaches:

  • pass minimal information to the background job and do all of the work in the background job
  • pre-process as much as possible, and then pass only what the background job needs to the queue

In most cases I prefer the former, for the following reasons:

  • all of the work lives in one function
  • your Oban worker stays very simple: it just converts the job args back into values (with error handling, e.g. the user was removed in the meantime) and calls that function
  • it’s often easier to scale workers than the caller side (e.g. the API side)

Here is a pattern I’m using:

defmodule MyEmails do
  def welcome_email(%User{} = user, url) do
    # the actual work: build and deliver the email
  end
end

defmodule MyEmailJob do
  use Oban.Worker, queue: :email

  def build(%User{id: user_id}, url), do: new(%{user_id: user_id, url: url})

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => user_id, "url" => url}}) do
    # better to handle the case where the user is already gone :)
    user = MyApp.find_user!(user_id)
    MyEmails.welcome_email(user, url)
  end
end

MyEmailJob.build(user, url) |> Oban.insert()

You can see:

  • the context module does not need to know anything about Oban
  • the Oban worker module is responsible for transforming args between the Elixir app and Oban

Whether to pass only a reference or the full args really depends. For example, if you need the exact information as it was when the job was enqueued, then you should pass that information as job args instead of fetching it later in the worker.
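
As a hypothetical example of the latter (InvoiceJob and the field names are mine, not from the thread): snapshot the value you care about into the args at enqueue time, rather than re-reading it in the worker where it may have changed:

# InvoiceJob is a hypothetical Oban worker. Charge whatever the cart
# totalled when the job was enqueued, not whatever it totals later.
InvoiceJob.new(%{"user_id" => user.id, "amount_cents" => cart.total_cents})
|> Oban.insert()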


Yeah, thanks for the detailed explanation. I get it now: Oban should be treated like a “pass-through”. I wasn’t quite understanding its design/usage, but your reply helped a lot. The email worked.

For others with similar problems, here is my code.

  def deliver_confirmation_instructions(user, url) do
    WelcomeEmail.build(user.email, url) |> Oban.insert()
    {:ok, %{to: user.email, url: url}}
  end

defmodule Elijah.Accounts.Jobs.WelcomeEmail do
  use Oban.Worker, queue: :mailers, max_attempts: 4

  alias Elijah.Emails

  def build(email, url), do: new(%{email: email, url: url})

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"email" => email, "url" => url}}) do
    Emails.welcome_email(%{email: email, url: url})
  end
end

defmodule Elijah.Emails do
  import Bamboo.Email
  use Bamboo.Phoenix, view: ElijahWeb.EmailView

  @sender_no_reply {"Elijah", "no-reply@elijah.app"}
  @supported_locales ~w(en)

  def welcome_email(%{email: email, url: url}) do
    base_email()
    |> subject("Elijah - Welcome!")
    |> to(email)
    |> assign(:email, email)
    |> assign(:url, url)
    |> render_i18n(:welcome_email)
    |> premail()
    |> Elijah.Mailer.deliver_now()
    :ok
  end

...

Storing Elixir terms has its own pros and cons. The illusion of “transparently passing an Elixir term” comes with downsides:

  • it’s hard to use the job queue from a non-Elixir app
  • it’s hard to implement uniqueness based on job args (with plain map args, Oban’s built-in uniqueness works out of the box; see the sketch below)
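
A minimal sketch of that last point, assuming plain map args (the worker name and the uniqueness period are my own choices, not from the thread):

# Hypothetical worker: with plain JSON args, Oban's built-in uniqueness
# can key on them, so identical jobs enqueued within the period are deduplicated.
defmodule MyApp.ReminderWorker do
  use Oban.Worker, queue: :default, unique: [period: 60]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => user_id}}) do
    IO.puts("reminding user #{user_id}")
    :ok
  end
end

# Inserting the same args twice within 60 seconds results in a single job.
MyApp.ReminderWorker.new(%{user_id: 1}) |> Oban.insert()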

For your information:

  • ecto_job can store Erlang/Elixir terms in the args with some configuration (it stores them as bytea rather than jsonb)
  • you may try Poison, which supports struct <=> JSON conversion, but I’m not sure it would work well with Oban
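
Purely as an aside on the original error (nothing in this thread relies on it): Jason can encode structs you own if you derive the protocol, though that doesn’t help with %Bamboo.Email{}, which is a third-party struct:

defmodule MyApp.WelcomeParams do
  # Hypothetical struct; deriving Jason.Encoder lets Jason encode it to JSON.
  @derive {Jason.Encoder, only: [:email, :url]}
  defstruct [:email, :url]
end

Jason.encode!(%MyApp.WelcomeParams{email: "a@b.c", url: "https://example.com"})
# => a JSON object with "email" and "url" keys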

FWIW, you can encode arbitrary Erlang terms into JSON-safe args by doing:

# Serialize the struct to the external term format, then turn it into a
# list of byte integers, which Jason can encode as a JSON array of numbers.
property = my_property_struct
bin_property = :erlang.term_to_binary(property) |> :erlang.binary_to_list()

PropertyJob.new(%{
  "property_id" => property.id,
  "property_address" => property.address,
  "bin_property" => bin_property
})
|> Oban.insert()

When decoding, just do:

# Rebuild the binary from the list of byte integers and deserialize the term.
# (Only do this with args your own app enqueued; binary_to_term on untrusted input is unsafe.)
property = :erlang.list_to_binary(bin_property) |> :erlang.binary_to_term()
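
For context, here is a sketch of how that decode might sit inside the worker (the perform body is my assumption; the poster’s actual worker isn’t shown):

defmodule PropertyJob do
  use Oban.Worker, queue: :default

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"property_id" => id, "bin_property" => bin_property}}) do
    # Illustrative body: the JSON args come back as a list of byte integers,
    # so rebuild the binary and deserialize the original struct.
    property = bin_property |> :erlang.list_to_binary() |> :erlang.binary_to_term()
    IO.inspect({id, property})
    :ok
  end
end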

I have a Property struct with some nested structures containing tuples. It is, I think, cleaner to do this than to write a custom JSON encoder/decoder.

Note that I’m pulling property.id and property.address out into the Oban args so I can make the job unique on the property ID and inspect the address if something goes wrong.

I could store what I have of the property in the DB first and reference it by ID, but I wasn’t able to do that because of some other requirements. With this approach I can just pass it around in Oban and store it once I’m ready.

Not sure where this breaks down but it’s working for me for now.
