Batch loading resources in Absinthe using Dataloader for a non-Ecto struct

In my Phoenix/Absinthe app I have an object in my graphql schema like this

object :vote_result do
  field(:id, non_null(:id))
  field(:score, non_null(:integer))
  field(:top_upvoted_user, non_null(:user))
end

I have a query whose purpose is to return a list of vote_results. vote_result is not part of my database schema; it is produced by a non-trivial Ecto query.


field :vote_results, type: non_null(list_of(:vote_result)) do
  resolve(&get_all/3)
end


def get_all(_, _, _) do
  results = # non-trivial ecto query
  {:ok, results}
end

The results from my Ecto query look like this:

    [
      %{id: 1, score: 5, user_id: 14},
      %{id: 2, score: 8, user_id: 34}
    ]

I.e. they are simply the raw result of the query; they don’t correspond to any part of my schema.

I’d like to resolve a full user for the vote_result. The naive way would be to just add a resolution function to my :vote_result object:

object :vote_result do
  field(:top_upvoted_user, non_null(:user),
    resolve: fn _, %{source: result} -> {:ok, Repo.get(User, result.user_id)} end
  )
end

However, this results in the N+1 query problem: a separate SQL query is performed to resolve each user in the list.

I’m already using dataloader to batch-resolve nested Ecto relations, but I’m not sure how to use it to resolve objects from a non-Ecto source (i.e. just a map containing the ID of an Ecto resource).

Can anyone steer me in the right direction? I’ve tried something like the following to no avail:

defmodule MyApp.DataLoader do
  alias MyApp.User
  import Ecto.Query

  def data(), do:, run_batch: &run_batch/5)

  def run_batch(_, _query, :top_upvoted_user, vote_results, repo_opts) do
    user_ids =, & &1.user_id)

    users =
      from(u in User, where: in ^user_ids)
      |> MyApp.Repo.all(repo_opts)
      |> Map.new(&{, &1})

    # run_batch/5 must return one (list) result per input, in input order
    Enum.map(user_ids, fn id -> [Map.get(users, id)] end)
  end

  # Fallback to the default run_batch
  def run_batch(queryable, query, col, inputs, repo_opts) do
    Dataloader.Ecto.run_batch(MyApp.Repo, queryable, query, col, inputs, repo_opts)
  end
end

I get the following error:

Request: POST /graphql
** (exit) an exception was raised:
** (Dataloader.GetError) The given atom - :top_upvoted_user - is not a module.

This can happen if you intend to pass an Ecto struct in your call to
dataloader/4 but pass something other than a struct.


I seem to have managed to solve this by using the Dataloader.KV source.

First, I defined a dataloader function to batch load my users:

defmodule MyApp.VoteResultDataLoader do
  def load({:top_upvoted_user, _args}, vote_results) do
    # get_user_list/1 (not shown) returns a map of user_id => %MyApp.User{}
    users =
      |>, & &1.user_id)
      |> get_user_list()

    Map.new(vote_results, fn vr -> {vr, Map.get(users, vr.user_id)} end)
  end
end

One thing that wasn’t obvious or well documented is that load/2 must return a map whose keys are the original parent entities passed to the function, e.g.

  %{
    %{id: 1, score: 5, user_id: 14} => %MyApp.User{id: 14, ...},
    %{id: 2, score: 8, user_id: 34} => %MyApp.User{id: 34, ...}
  }

Once I have my dataloader function I can add it as a source to my schema’s dataloader:
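Here is a sketch of what that wiring might look like, assuming the schema module is named MyApp.Schema (the source name :vote_results is just whatever atom you later pass to dataloader/1):

```elixir
# In the Absinthe schema module (name assumed), register the KV source
# alongside any existing sources. takes a 2-arity
# load function like the one defined above.
def context(ctx) do
  loader =
    |> Dataloader.add_source(
      :vote_results,
    )

  Map.put(ctx, :loader, loader)
end

def plugins do
  [Absinthe.Plugin] ++ Absinthe.Plugin.defaults()
end
```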


Finally, in my object resolution I just use the dataloader/1 function:

object :vote_result do
  field(:top_upvoted_user, non_null(:user), resolve: dataloader(:vote_results))
end

I’m not sure if I’ve arranged everything as well as it could be; I’m never quite sure where this dataloader logic should live or which part of my app it belongs to. But that will depend on one’s own application.


Hi! Late to the party, but I think there is a way to make this work by using the undocumented %{batch: batch, item: item} dataloader interface.

object :vote_result do
  field(:top_upvoted_user, non_null(:user),
    resolve: dataloader(User, fn %{user_id: user_id}, _, _ ->
      # This is an undocumented dataloader/2 interface
      # see
      %{batch: {{:one, User}, %{}}, item: user_id}
    end)
  )
end

In this case, as long as a User source has been added to the loader, dataloader will use it.
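For completeness, a sketch of what registering that User source might look like in the schema context (module names assumed):

```elixir
# Register a Dataloader.Ecto source keyed by the User module (names assumed),
# so that dataloader(User, ...) above can find it.
def context(ctx) do
  loader =
    |> Dataloader.add_source(User,

  Map.put(ctx, :loader, loader)
end
```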