Batch loading a field in absinthe with dataloader

I have an object in my Absinthe graphql schema that looks like this:

object :match do
  field(:id, non_null(:id))
  field(:opponent, non_null(:string))

  @desc "The number of votes that have been cast so far."
  field(:vote_count, non_null(:integer), resolve: &MatchResolver.get_vote_count/2)

  # etc...
end

I’m using a resolver for vote_count that performs an Ecto query using the parent match. However, this runs into the N+1 query problem when a list of matches is queried. It currently looks like this:

def get_vote_count(_root, %{source: %Match{} = match}) do
  count = match |> Ecto.assoc(:votes) |> Repo.aggregate(:count, :id)

  {:ok, count}
end

I’m already using dataloader to batch-load child entities, but I can’t seem to get a custom run_batch function to work with the Absinthe.Resolution.Helpers.dataloader helper provided by Absinthe.

What’s the recommended approach for implementing custom batch queries using dataloader/ecto? Can someone give an example, including the schema definition part?

This GitHub issue has an example of using a custom batch function to perform an aggregation.

The key is the %{batch: _, item: _} map passed to the dataloader helper, which I haven’t found documented anywhere except that issue :man_shrugging:

def run_batch(_, query, :post_count, users, repo_opts) do
  user_ids =, & &
  default_count = 0

  result =
    |> where([p], p.user_id in ^user_ids)
    |> group_by([p], p.user_id)
    |> select([p], {p.user_id, count(})
    |> Repo.all(repo_opts)

  for %{id: id} <- users do
    [Map.get(result, id, default_count)]

# Fall back to the original run_batch for every other field
def run_batch(queryable, query, col, inputs, repo_opts) do
  Dataloader.Ecto.run_batch(Repo, queryable, query, col, inputs, repo_opts)

Called from the GraphQL schema like:

field :post_count, non_null(:integer),
  resolve:
    dataloader(Posts, fn user, _args, _ ->
      %{batch: {{:one, Post}, %{}}, item: [post_count: user]}
    end)
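
For the custom clauses to be picked up at all, the `run_batch/5` function also has to be registered when the `Posts` dataloader source is created. A minimal sketch of that wiring in the Absinthe schema module, assuming the `run_batch/5` clauses above live in a hypothetical `MyApp.Posts` module and the repo is `MyApp.Repo`:

```elixir
# In the Absinthe schema module. `Posts` is the source name used in the
# resolver above; `MyApp.Posts` and `MyApp.Repo` are placeholder names.
def context(ctx) do
  loader =
    |> Dataloader.add_source(
      Posts,
        run_batch: &MyApp.Posts.run_batch/5
      )

  Map.put(ctx, :loader, loader)
end

# Required so Absinthe runs the Dataloader middleware.
def plugins do
  [Absinthe.Middleware.Dataloader] ++ Absinthe.Plugin.defaults()
end
```

Without the `:run_batch` option, `Dataloader.Ecto` uses its default batching and the `:post_count` clause is never called.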