Calling a function from my Absinthe GraphQL nested schema

My schema is pretty simple, just trying to learn Absinthe a little better.

I have a Listing and a User that created that listing. I want to know how many listings total that user has sold.

  object :listing do
    field(:id, :id)
    field(:title, :string)
    field(:status, :string)
    field(:user, :user, resolve: dataloader(MyApp.DataloaderRepo))
  end

  object :user do
    field(:id, :id)
    field(:name, :string)
    # field :total_sold, :integer 
  end

In my context module I have a helper function that gives me the count of sold listings for a given user.

def total_sold(user_id) do
  query =
    from u in User,
      join: listing in assoc(u, :listings),
      where: listing.user_id == ^user_id,
      where: listing.status == :sold,
      select: count(listing.id)

  Repo.one!(query)
end
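For context, wiring this helper straight into the field resolver does work, but it fires one query per user in the result set, which is exactly the N+1 I want to avoid (the module name MyApp.Accounts below is just a placeholder for wherever the helper lives):

```elixir
# Hypothetical naive wiring: Absinthe calls this resolver once per user,
# so a page of 50 listings triggers up to 50 count queries.
field :total_sold, :integer do
  resolve(fn user, _args, _resolution ->
    {:ok, MyApp.Accounts.total_sold(user.id)}
  end)
end
```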

How can I leverage this function to prevent N+1 queries in Absinthe if my graphql query is like:

{
  listings {
    id
    title
    status
    user {
      id
      name
      totalSold
    }
  }
}

I think Absinthe’s batch middleware could get you what you want: see Absinthe.Middleware.Batch (absinthe v1.6.3 docs).

I’ve only used it for direct children of the list type (a field on your listings object, for example), but I would think it would also work when nested a little deeper.


Thank you, that helped me get it done. I’ll post in detail how I solved this for others.

First here is what my Absinthe object ended up looking like:

object :user do
  field(:id, :id)
  field(:name, :string)

  field :total_sold, :integer do
    resolve(fn user, _, _ ->
      batch({MyApp.Accounts, :total_sold}, user.id, fn batch_results ->
        {:ok, Map.get(batch_results, user.id)}
      end)
    end)
  end
end

Notice how I call my MyApp.Accounts.total_sold function by using {MyApp.Accounts, :total_sold} notation.

The second param of batch is the value that gets collected across all pending resolutions and sent to my total_sold function. If I had used user.name, my function would have received something like ["Foo", "Bar", "Baz"]. I needed the ids, so I used user.id.
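Concretely (using the example ids from later in this post), with two users on the page Absinthe aggregates all the user.id values and makes a single call. If I remember the middleware right, the extra first argument defaults to [] when you use the two-element {module, fun} form of the batch key:

```elixir
# Conceptual sketch: Absinthe.Middleware.Batch collects the second argument
# of every batch/3 call sharing the same batch key, then invokes the
# function once with all of them.
MyApp.Accounts.total_sold([], [
  "43c8421e-69e8-4769-aa46-15229e97dc99",
  "651da703-bdb6-4625-9fdb-ffc4b2a701ab"
])
```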

Finally batch_results has the return of MyApp.Accounts.total_sold. Let’s look at that.


I don’t quite like this because it makes my context aware that it’s being called from a GraphQL schema, but I can live with it for now. Open to suggestions.

The user_ids argument holds all the user ids that were collected. Then I can group and count the sold listings.

# The first argument is the extra batch param from the batch key; unused here.
def total_sold(_, user_ids) do
  query =
    from u in User,
      join: listing in assoc(u, :listings),
      where: listing.status == :sold,
      where: u.id in ^user_ids,
      group_by: u.id,
      select: %{user_id: u.id, sold_listings: count(listing.id)}

  listings = Repo.all(query)

  Map.new(listings, fn listing -> {listing.user_id, listing.sold_listings} end)
end

The important bit is the final Map.new.

I am ultimately returning something like:

%{
  "43c8421e-69e8-4769-aa46-15229e97dc99" => 1,
  "651da703-bdb6-4625-9fdb-ffc4b2a701ab" => 1
}

Where user.id is the map key and the total sold count is the value.
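As for the coupling I mentioned above, one option (just a sketch; the module and function names are made up) is to keep a plain ids-in, counts-out function in the context and put the batch-shaped two-argument function in the schema layer:

```elixir
# Context stays GraphQL-agnostic: plain ids in, a map of counts out.
defmodule MyApp.Accounts do
  import Ecto.Query
  alias MyApp.{Repo, User}

  def sold_counts_by_user_ids(user_ids) do
    from(u in User,
      join: listing in assoc(u, :listings),
      where: listing.status == :sold,
      where: u.id in ^user_ids,
      group_by: u.id,
      select: {u.id, count(listing.id)}
    )
    |> Repo.all()
    |> Map.new()
  end
end

# Thin adapter in the web layer with the two-argument shape that
# Absinthe.Middleware.Batch expects; the batch key in the schema would
# become {MyAppWeb.Schema.BatchHelpers, :total_sold}.
defmodule MyAppWeb.Schema.BatchHelpers do
  def total_sold(_batch_param, user_ids) do
    MyApp.Accounts.sold_counts_by_user_ids(user_ids)
  end
end
```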

So let’s go back up a level to my object definition.

object :user do
  field(:id, :id)
  field(:name, :string)

  field :total_sold, :integer do
    resolve(fn user, _, _ ->
      batch({MyApp.Accounts, :total_sold}, user.id, fn batch_results ->
        {:ok, Map.get(batch_results, user.id)}
      end)
    end)
  end
end

Now that Map.get makes sense: it’s fetching the value stored under the user.id key, which is… the total sold count! Boom, we’re in business!
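One tweak I might add (my own note, not something batch requires): users with zero sold listings never show up in batch_results, so Map.get returns nil for them. Passing a default of 0 keeps the field numeric:

```elixir
field :total_sold, :integer do
  resolve(fn user, _, _ ->
    batch({MyApp.Accounts, :total_sold}, user.id, fn batch_results ->
      # Users absent from the map sold nothing; default to 0 instead of nil.
      {:ok, Map.get(batch_results, user.id, 0)}
    end)
  end)
end
```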

Hope this helps and I know it will help me 2 years from now when I totally forget this haha


I’ve been experimenting with ways to add limits to subfields while keeping it to one DB call. Right now I’m using union_all but it feels like I’m over-engineering this.

query {
  boxes(limit: 2) {
    chocolates(limit: 2) {
      happiness
    }
  }
}

Building the query like this:

    Enum.map(box_ids, &chocolates_query(&1, limit))
    |> Enum.map(&subquery/1)
    |> Enum.reduce(&Ecto.Query.union_all(&1, ^&2))
    |> Repo.all()

Leads to a SQL query like this:

SELECT id, happiness, box_id FROM 
  (SELECT id, happiness, box_id FROM "chocolates" WHERE (box_id = $1) ORDER BY RANDOM() LIMIT $2) 
  UNION ALL 
  (SELECT id, happiness, box_id FROM (SELECT id, happiness, box_id FROM "chocolates" WHERE (box_id = $3) 
ORDER BY RANDOM() LIMIT $4) s0)

It works, and it’s fast (as long as your outer limit doesn’t get too high), but I’m probably missing something about the toolset that would limit subfields for me.
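One alternative for per-group limits (an untested sketch; assumes PostgreSQL and Ecto ≥ 3.5, which added parent_as/1 for correlated subqueries) is a lateral join, which keeps it to one query without building a union per box:

```elixir
def batch_fetch_chocolates(limit, box_ids) do
  # Correlated subquery: for each box row in the outer query, pick up to
  # `limit` random chocolates belonging to that box.
  per_box =
    from c in Chocolate,
      where: c.box_id == parent_as(:box).id,
      order_by: fragment("RANDOM()"),
      limit: ^limit

  from(b in Box,
    as: :box,
    where: b.id in ^box_ids,
    inner_lateral_join: c in subquery(per_box),
    on: true,
    select: %{id: c.id, happiness: c.happiness, box_id: c.box_id}
  )
  |> Repo.all()
  |> Enum.group_by(& &1.box_id)
end
```

SQL-wise this should plan as a single `JOIN LATERAL (SELECT ... ORDER BY RANDOM() LIMIT n)`, so the per-box limit lives in the database instead of in a hand-built union.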

Full schema below:

defmodule CandyFactory.Schema do
  use Absinthe.Schema
  alias CandyFactory.{Repo, Box, Chocolate}
  import Ecto.Query

  @default_limit 100

  object :chocolate do
    field :happiness, :string
  end

  object :box do
    field :color, :string

    field :chocolates, list_of(:chocolate) do
      arg :limit, :integer

      resolve fn box, args, _resolution ->
        limit = Map.get(args, :limit, @default_limit)

        batch({__MODULE__, :batch_fetch_chocolates, limit}, box.id, fn batch_results ->
          {:ok, Map.get(batch_results, box.id)}
        end)

      end
    end
  end

  query do
    field :box, :box do
      resolve fn _args, _resolution ->
        {:ok, fetch_boxes(1) |> hd()}
      end
    end

    field :boxes, list_of(:box) do
      arg :limit, :integer

      resolve fn args, _resolution ->
        limit = Map.get(args, :limit, @default_limit)
        {:ok, fetch_boxes(limit)}
      end
    end
  end

  defp fetch_boxes(limit) do
    from(b in Box, order_by: fragment("RANDOM()"), limit: ^limit)
    |> Repo.all()
  end

  def batch_fetch_chocolates(limit, box_ids) do
    Enum.map(box_ids, &chocolates_query(&1, limit))
    |> Enum.map(&subquery/1)
    |> Enum.reduce(&Ecto.Query.union_all(&1, ^&2))
    |> Repo.all()
    |> Enum.reduce(%{}, fn item, acc ->
      list = Map.get(acc, item.box_id, [])
      Map.put(acc, item.box_id, [item|list])
    end)
  end

  defp fetch_chocolates(box_id, limit) do
    chocolates_query(box_id, limit)
    |> Repo.all()
  end

  defp chocolates_query(box_id, limit) do
    from(
      c in Chocolate,
      where: c.box_id == ^box_id,
      order_by: fragment("RANDOM()"),
      limit: ^limit
    )
  end
end
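As an aside, the Enum.reduce in batch_fetch_chocolates that builds the box_id => chocolates map does the same grouping as a single Enum.group_by (list order within each group differs, which doesn’t matter here since the rows are randomly ordered anyway):

```elixir
# Standalone illustration with made-up rows shaped like the query result.
rows = [
  %{id: 1, happiness: "high", box_id: 10},
  %{id: 2, happiness: "low", box_id: 10},
  %{id: 3, happiness: "mid", box_id: 20}
]

Enum.group_by(rows, & &1.box_id)
# => %{10 => [%{id: 1, ...}, %{id: 2, ...}], 20 => [%{id: 3, ...}]}
```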