Paginating associations with Absinthe Relay connections

Most articles on Absinthe suggest using dataloader when dealing with associations.
The reason is that it batches successive association lookups and resolves them all within a single query.
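
For reference, here is a minimal sketch of that wiring; the MyAppWeb.Schema, MyApp.Repo and Chess
names are just placeholders for the example, not taken from any particular app.

defmodule MyAppWeb.Schema do
  use Absinthe.Schema
  import Absinthe.Resolution.Helpers, only: [dataloader: 1]

  # Register a Dataloader.Ecto source so association fields can be batched.
  def context(ctx) do
    loader =
      Dataloader.new()
      |> Dataloader.add_source(Chess, Dataloader.Ecto.new(MyApp.Repo))

    Map.put(ctx, :loader, loader)
  end

  def plugins do
    [Absinthe.Middleware.Dataloader] ++ Absinthe.Plugin.defaults()
  end

  object :game do
    field :id, :id
  end

  object :player do
    field :id, :id
    # All :games lookups in one document are collected and resolved with a
    # single WHERE player_id IN (...) query instead of one query per player.
    field :games, list_of(:game), resolve: dataloader(Chess)
  end

  query do
    field :players, list_of(:player), resolve: fn _, _ -> {:ok, []} end
  end
end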

This does indeed work fine in most cases, but certainly not when doing pagination.
That is because one would ideally want to limit and offset per association, which in turn
means applying pagination on a per-join basis, and that is not really possible with regular joins.
A feasible solution could be lateral joins with paginated subqueries, but for obvious
performance reasons, and because they are not fully supported by Ecto and its adapters,
I guess we can agree not to go that way and to look for a different approach.
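
Just to illustrate what I mean, per-association pagination with a lateral join would look roughly
like this in raw SQL (table and column names are made up for the example):

# Hypothetical tables: players(id, ...) and games(id, player_id, inserted_at, ...).
sql = """
SELECT g.*
FROM players p
JOIN LATERAL (
  SELECT *
  FROM games
  WHERE games.player_id = p.id
  ORDER BY games.inserted_at
  LIMIT $1 OFFSET $2
) AS g ON TRUE
WHERE p.id = ANY($3)
"""

# One round trip fetches a page of games for every player in the batch, with the
# limit/offset applied independently per player by the lateral subquery.
Ecto.Adapters.SQL.query!(MyApp.Repo, sql, [5, 0, [1, 2, 3]])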

Therefore, I am wondering, especially of those doing GraphQL in production: how do you
approach this issue and paginate your associations exactly? Much appreciated! :slight_smile:

2 Likes

Here is one association I am using in a GraphQL schema…

  node object :player do
    field :internal_id, :integer, do: resolve &resolve_internal_id/2
    field :last_name, :string
    field :first_name, :string

    connection field :games, node_type: :game do
      arg :order, type: :sort_order, default_value: :asc
      arg :filter, :game_filter
      resolve &ChessResolver.list_player_games/3
    end

    # Timestamps
    field :inserted_at, :naive_datetime
    field :updated_at, :naive_datetime
  end

and the corresponding resolver.

alias Absinthe.Relay.Connection
...
  def list_player_games(_, args, %{source: player}) do
    Chess.list_player_games_query(player, args)
    |> Connection.from_query(&Chess.process_repo/1, args)
  end
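
Connection.from_query/4 only needs a plain Ecto query and derives the LIMIT/OFFSET from the
first/last/after/before arguments itself, so something along these lines is enough on the context
side (a simplified sketch, not my exact code; the filter and order handling is elided):

import Ecto.Query

def list_player_games_query(player, _args) do
  # Game stands in for the Ecto schema behind the :game node type.
  from g in Game,
    where: g.player_id == ^player.id,
    order_by: [asc: g.inserted_at]
end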

I can query the games association with the usual Relay arguments (first, last, etc.).

I thought pagination was one of the main reasons for using Relay :slight_smile:

That does indeed work, but say you were doing a second connection within the games connection.
If you were to do all connections this way, you’d end up querying the database multiple times, once
per successive connection, and running into the so-called N+1 problem. That’s why dataloader comes
in handy: it batches the associations and turns them into a single query automagically. :slight_smile:
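
To make it concrete: a nested document like this one (the top-level players connection here is
hypothetical) runs one games query per player node when every connection goes through
Connection.from_query/4.

query = """
{
  players(first: 10) {
    edges {
      node {
        lastName
        games(first: 5) {
          edges { node { internalId } }
        }
      }
    }
  }
}
"""

# 10 player nodes mean 10 separate games queries on top of the players query:
# the N+1 problem described above. MyAppWeb.Schema is a placeholder module.
Absinthe.run(query, MyAppWeb.Schema)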

That’s why I was wondering if there really is a solution besides just doing separate queries…
Failing that, I’d also be looking for a generic resolver that would work for all associations, so that
I wouldn’t really have to code each of the resolvers manually. :thinking:
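
Something along the lines of this sketch, maybe; MyApp.Repo is a placeholder, and note that this
still issues one query per parent, so it does not solve the N+1 part.

defmodule MyAppWeb.Resolvers.Association do
  alias Absinthe.Relay.Connection

  # Returns a 3-arity resolver that paginates any Ecto association of the
  # parent struct using the Relay connection arguments.
  def connection(assoc_field) do
    fn parent, args, _resolution ->
      parent
      |> Ecto.assoc(assoc_field)
      |> Connection.from_query(&MyApp.Repo.all/1, args)
    end
  end
end

# Usage:
#   connection field :games, node_type: :game do
#     resolve MyAppWeb.Resolvers.Association.connection(:games)
#   end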

1 Like

Most of the new code uses dataloader. It was not useful in the simple case I had… but I would use it now (that code is quite old)

A workaround that is working for me is to use dataloader for all queries, but when the GraphQL request uses pagination arguments on a specific resource, to add the parent_id to the dataloader args so that dataloader knows not to batch those SQL queries together. E.g. something like this:

import Absinthe.Resolution.Helpers, only: [dataloader: 2]

object :user do
  field :id, non_null(:id)
  field :name, non_null(:string)
  field :items, list_of(non_null(:item)) do
    arg :page, :integer
    arg :per_page, :integer
    arg :sort_field, :item_sort_field
    arg :sort_order, :item_order
    arg :filter, :item_filter
    resolve dataloader(Items, fn %{id: id}, args, _ ->
      # When pagination args are present, mix the parent's id into the
      # dataloader args: each parent then gets its own batch key, so its
      # items can be limited/offset independently of other parents.
      {:items, case args do
        %{page: page, per_page: per_page} when is_integer(page) and is_integer(per_page) ->
          Map.put(args, :user_id, id)
        _ -> args
      end}
    end)
  end
end

This solution still results in the N+1 problem for the resource that is being paginated, but you can still leverage batching for the other resources in the query that are not paginated.
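
For completeness, here is a sketch of the Dataloader.Ecto source side that consumes those args
(module names are placeholders): the extra :user_id makes every parent a distinct batch key, and
query/2 can then apply the limit/offset per batch.

defmodule MyApp.Items do
  import Ecto.Query

  def data do
    Dataloader.Ecto.new(MyApp.Repo, query: &query/2)
  end

  # Pagination args present: this batch belongs to a single parent (because
  # :user_id was mixed into the args in the resolver above), so a plain
  # limit/offset is safe here.
  def query(queryable, %{page: page, per_page: per_page})
      when is_integer(page) and is_integer(per_page) do
    queryable
    |> limit(^per_page)
    |> offset(^((page - 1) * per_page))
  end

  # No pagination requested: return the queryable unchanged so batching applies.
  def query(queryable, _args), do: queryable
end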

1 Like