When do you need run_batch with Dataloader?


I’ve been using Absinthe and Dataloader for a few months now, but I’m still struggling to get my head around when to reach for `run_batch` and what the limitations of things like `Dataloader.load` are. I would massively appreciate it if someone could walk through when `run_batch` is the right tool.

One example I think may need `run_batch` is querying an entity while filtering by its status, where statuses are stored in a separate append-only DB table. To do this I need to join with the status table on entity ID, sort the statuses by date, and then match the most recent status against the supplied filter (e.g. I need outdated entities). Does this seem like a good candidate for `run_batch`? The way I do it now is the following filter query, which feels very wrong and leans heavily on subqueries:

        # Pick the most recent status row per tablet (DISTINCT ON + ORDER BY)
        |> join(:inner, [t], s in assoc(t, :statuses))
        |> distinct([t, s], s.tablet_id)
        |> select([t, s], %{id: t.id, status: s.status})
        |> order_by([t, s], desc: s.inserted_at)
        |> subquery()
        # Keep only rows whose latest status matches the filter
        |> where([d], d.status == ^Atom.to_string(status))
        # Join back to recover the full Tablet structs
        |> join(:left, [d, t], t in Tablet, on: t.id == d.id)
        |> select([d, t], t)
        |> subquery()
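For what it’s worth, here is roughly how I imagine a `run_batch` version would look, based on the `post_count` example in the Dataloader.Ecto docs. All the module and column names here (`Scribe.Repo`, `Status`, `:latest_status`) are made up for the sketch, and the `distinct`/`order_by` trick assumes Postgres `DISTINCT ON`:

```elixir
# Hypothetical sketch: fetch each tablet's latest status in one batched query.
# Names (Scribe.Repo, Status, :latest_status) are mine, not from a real app.
import Ecto.Query

def data do
  Dataloader.Ecto.new(Scribe.Repo, query: &query/2, run_batch: &run_batch/5)
end

def run_batch(Tablet, _query, :latest_status, tablets, repo_opts) do
  tablet_ids = Enum.map(tablets, & &1.id)

  # One query: DISTINCT ON (tablet_id) with ORDER BY inserted_at DESC keeps
  # only the most recent status row per tablet in the batch.
  latest =
    from(s in Status,
      where: s.tablet_id in ^tablet_ids,
      distinct: s.tablet_id,
      order_by: [desc: s.inserted_at]
    )
    |> Scribe.Repo.all(repo_opts)
    |> Map.new(&{&1.tablet_id, &1})

  # run_batch must return results in the same order as its inputs,
  # each wrapped in a list.
  Enum.map(tablets, fn %{id: id} -> [Map.get(latest, id)] end)
end

# Fall back to the default behaviour for every other batch.
def run_batch(queryable, query, col, inputs, repo_opts) do
  Dataloader.Ecto.run_batch(queryable, query, col, inputs, repo_opts)
end
```

Is that the intended shape, or am I misusing `run_batch` here?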

Bonus question:
Dataloader seems to be geared towards loading a batch of associated entities: if a workspace has pages, Dataloader makes it easy to load all of those associated pages. But what if I just want to load one associated page, e.g. the most recent? What pattern do people use to achieve this?
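The closest I’ve come is a custom resolver that loads the ordered list and then takes the head, which still fetches every page per workspace. A sketch of what I mean (`Scribe.Pages` is my source name; `:latest_page` is a field I invented for the example):

```elixir
# Sketch: load the ordered :many batch, then take the first element.
# This still pulls every page for the workspace out of the DB.
import Absinthe.Resolution.Helpers, only: [on_load: 2]

field :latest_page, :page do
  resolve(fn workspace, _args, %{context: %{loader: loader}} ->
    batch_key = {:many, Page, %{order: :desc}}

    loader
    |> Dataloader.load(Scribe.Pages, batch_key, workspace_id: workspace.id)
    |> on_load(fn loader ->
      pages = Dataloader.get(loader, Scribe.Pages, batch_key, workspace_id: workspace.id)
      {:ok, List.first(pages)}
    end)
  end)
end
```

Is there a better pattern than loading the whole list and discarding most of it?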

Thanks in advance for the help!

I think another part that I don’t understand fully is the ability to modify the queryable in the context module. Let’s say I have:

  def query(Page, args), do: page_query(args)
  def query(queryable, _args), do: queryable

  defp page_query(args) do
    Enum.reduce(args, Page, fn
      {:order, order}, query ->
        order_by(query, {^order, :inserted_at})

      _, query ->
        query
    end)
  end

And my schema is:

object :tablet do
  field :pages, list_of(:page), resolve: dataloader(Scribe.Pages)
end

When I get a list of workspaces and then the pages in those workspaces, will the ordering be done globally, i.e. across all the pages in all the workspaces? Or will it be done within the context of each workspace, i.e. ordering the pages in workspace A, then in workspace B, etc.?

I know that in the case of ordering it doesn’t matter much whether it’s global or local, but say I wanted to limit to the most recent page in each workspace. If I put `limit(1)` on the query, it would return one page across all the workspaces, not one per workspace, wouldn’t it? How would I limit on a workspace-by-workspace basis?
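The only approach I can think of is a window function inside the `query/2` callback, something like the sketch below. This assumes Postgres and invents a `:latest_per_workspace` argument; I have no idea if this is the idiomatic answer:

```elixir
# Sketch: rank pages within each workspace by recency, keep the top n.
# The :latest_per_workspace arg is hypothetical.
import Ecto.Query

defp page_query(%{latest_per_workspace: n}) do
  ranked =
    from p in Page,
      select: %{id: p.id, rn: over(row_number(), :w)},
      windows: [w: [partition_by: p.workspace_id, order_by: [desc: p.inserted_at]]]

  # Join back so Dataloader still gets full Page structs.
  from p in Page,
    join: r in subquery(ranked),
    on: r.id == p.id,
    where: r.rn <= ^n
end

defp page_query(_args), do: Page
```

Is that what people actually do, or is this where `run_batch` comes in?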

@benwilson512 Would love your advice. I’m struggling to really understand when to use `Absinthe.Resolution.Helpers` vs a custom resolver using `Dataloader.get` and `on_load` vs a custom `run_batch`.