[Absinthe] GraphQL Performance Issues

Hey everyone! New to the forum, not new to Elixir. :slight_smile:

In any case, recently had a discussion with @benwilson512 on overhead during GQL requests, and then saw a related question in the Absinthe channel on Slack, so decided to open it up here for input/guidance.

Problem

We’ve been tracing our API’s performance in Datadog and have seen some oddities that we can’t quite diagnose. Pictures are worth 1000 words, so here are some flame graphs for one request that demonstrates the issue:

The overall flame graph

1st Flame: Token Auth (in our Context)

2nd Flame: GQL Resolution

What’s happening in the gaps after auth/context and after the actual resolution?
This is just one request, but many of our requests have this shape, where the majority of the request time, at least 50ms or so, is spent on something we can't identify. In some cases we have 100ms or so unaccounted for, usually on the tail end of the request, post-resolution.


Environment

  • App is released via mix release to AWS (EC2 machine) and deployed behind an ELB (https).
  • Elixir 1.9.1, Erlang 22, Absinthe v1.5.0, Absinthe Plug v1.5.0-rc2, Absinthe Phoenix 1.5.0-rc.0
  • The app sits behind a Phoenix Endpoint and Router, but is an API-only server.

Is anyone else seeing things like this? Happy to answer any other questions to help figure this out!

5 Likes

One key thing to check is what your scalars are doing. Scalars are serialized in the result phase, which happens entirely after the resolution phase. In the past, poorly optimized serialization functions have led to lengthy post-resolution times.
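
For example, a custom scalar's serialize function runs once per value during the result phase, so anything expensive there multiplies across the whole response. A minimal sketch of where that cost lives (module and type names here are just illustrative, not from your schema):

```elixir
defmodule MyAppWeb.Schema.Scalars do
  use Absinthe.Schema.Notation

  scalar :datetime_iso8601, description: "ISO 8601 timestamp" do
    # Called once per value in the result phase, i.e. after all resolvers
    # have finished. Keep this cheap.
    serialize fn %DateTime{} = dt -> DateTime.to_iso8601(dt) end

    parse fn
      %Absinthe.Blueprint.Input.String{value: value} ->
        case DateTime.from_iso8601(value) do
          {:ok, dt, _offset} -> {:ok, dt}
          _ -> :error
        end

      _ ->
        :error
    end
  end
end
```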

To dig in further, some additional metrics would be handy. Can you provide an example query document? How large is the result JSON in bytes? How large is the schema (approximated by MyAppWeb.Schema.__absinthe_types__ |> map_size)?
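
If it helps, something like this should get rough numbers for both questions (assuming Jason as your JSON encoder; `document` stands in for your query string and the options are whatever you normally pass):

```elixir
# Approximate schema size: number of types defined.
MyAppWeb.Schema.__absinthe_types__() |> map_size()

# Result size in bytes: run the document and measure the encoded JSON.
{:ok, result} = Absinthe.run(document, MyAppWeb.Schema, variables: %{})
result |> Jason.encode!() |> byte_size()
```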

3 Likes

Thanks for responding, @benwilson512!

Here’s a gist for a sample GQL doc that would generate the flame graphs in my original post. I’ve taken some fields out for IP purposes, but that’s generally the shape. You can infer the type of most fields; there are aggregate types where it makes sense (user, for example). Also of note, we’re not currently using any custom serialization.

The result of __absinthe_types__ |> map_size is 34, which doesn’t seem to be that large to me, but perhaps that’s incorrect?

Update
So I’ve shaved some time off because I wasn’t .uniq-ing IDs in batch loaders (a sketch of that fix follows the list below), but there are still two sizable time gaps for which we’re unable to account:

  1. after context is added, before resolution
  2. after the resolver is run.
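
For anyone following along, the batch-loader change was roughly this (module and function names changed; this follows the standard Absinthe batch helper pattern, with the :user type assumed to be defined elsewhere):

```elixir
defmodule MyAppWeb.Schema.MessageTypes do
  use Absinthe.Schema.Notation
  import Absinthe.Resolution.Helpers, only: [batch: 3]

  object :message do
    field :author, :user do
      resolve fn message, _args, _resolution ->
        batch({__MODULE__, :users_by_id}, message.author_id, fn batch_results ->
          {:ok, Map.get(batch_results, message.author_id)}
        end)
      end
    end
  end

  # The batch function receives the accumulated author IDs; de-duplicating
  # them before hitting the database was the fix that shaved time off.
  # MyApp.Accounts.get_users/1 is a hypothetical context function.
  def users_by_id(_opts, author_ids) do
    author_ids
    |> Enum.uniq()
    |> MyApp.Accounts.get_users()
    |> Map.new(fn user -> {user.id, user} end)
  end
end
```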

Hi @derekbrown, I still need some sense of the size of the returned document. That will help diagnose time spent after the resolution phase constructing the actual JSON result.

While you pull that together, I recommend using Telemetry to provide a few more data points. Specifically, the [:operation, :start] event would be very helpful: although it’s likely that Absinthe starts work immediately after the context is set, in theory there could be other plugs, or issues within Absinthe.Plug, introducing delay. See https://hexdocs.pm/absinthe/telemetry.html#content for details.
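
A rough sketch of wiring those events into a tracer (event names per the Absinthe telemetry docs; the handler below just inspects, so swap in your Datadog/Spandex span calls):

```elixir
defmodule MyAppWeb.AbsintheTelemetry do
  # Call MyAppWeb.AbsintheTelemetry.attach() during application start.
  def attach do
    :telemetry.attach_many(
      "absinthe-tracing",
      [
        [:absinthe, :execute, :operation, :start],
        [:absinthe, :execute, :operation, :stop],
        [:absinthe, :resolve, :field, :start],
        [:absinthe, :resolve, :field, :stop]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(event, measurements, _metadata, _config) do
    # Replace with a span/annotation in your tracer of choice.
    IO.inspect({event, measurements}, label: "absinthe telemetry")
  end
end
```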

The document is relatively small, so processing on the document should happen pretty quickly. I’m going to take that document and create a project with a similar schema to see if we can reproduce the observed performance issues.

3 Likes

Sorry about that! Missed the question in your first reply, @benwilson512. The size of the returned document varies depending on the query variables, obviously, but the return document for the graphs above was 155 KB. FWIW, we see the same “lag” regardless of query size.

We have Telemetry events already set up; I can get that to you ASAP. Not a bad idea to add those events to the tracer as well. If I’m able to do that easily, I’ll update this thread with new graphs that show the start & stop events on the operation.

As a side note, it will be super interesting to see this data alongside a PersistentTerm schema once we’re able to release that. :wink:
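
For context, the opt-in we’re planning to test looks roughly like this (assuming the experimental Absinthe.Schema.PersistentTerm provider shipped with Absinthe 1.5; field shown is just a placeholder):

```elixir
defmodule MyAppWeb.Schema do
  use Absinthe.Schema

  # Store the compiled schema in :persistent_term instead of the default backend.
  @schema_provider Absinthe.Schema.PersistentTerm

  query do
    field :health, :string do
      resolve fn _args, _resolution -> {:ok, "ok"} end
    end
  end
end
```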

1 Like

Update: I’ve been able to add Absinthe Telemetry events to the tracer, per the reply above from @benwilson512. Below are some observations and pretty pictures. I’ve tried to keep the same query here from the original post, for some apples-to-apples comparison. Hope this all helps.

  • You’ll notice there’s a significant reduction in overall request time. The query document remains the same, along with the response size, more or less (if anything, the response is likely bigger). Part of this is attributable to some logic/DB work we’ve been doing underneath the query, and some of it is directly attributable to the .uniq issue I mentioned earlier. Hope that helps anyone else looking to improve performance.

Overall shape of the query. The first flame of spans is the context loading, followed by the operation.start event, followed by the first resolution.start event. We’ll zoom in with subsequent screenshots, but there’s a significant gap between those two events, and another after the operation.stop event.


Context to Operation Start. There’s a minuscule gap (≈300µs) between context being added and the operation starting, but not significant enough for us to blink.


Operation Start. The span on the right of the screenshot is the first resolution.start event, which is followed by the fetch_messages query in the overall screenshot above. Would we expect a delay of ≈6ms between operation start and resolution start?


Operation Stop. There’s a ≈10ms gap after operation.stop that’s unaccounted for. I’ll dig into what could be happening there (perhaps other plugs?). I don’t assume there’s any cleanup or anything Absinthe would be doing after the operation, is there?

Edits

  1. Maybe a clue, or maybe a rabbit trail: it seems the delay before context depends on whether the request is a query or a mutation. The gap from trace start to context being added is much smaller for mutations.
  2. The above was a rabbit trail. There doesn’t seem to be anything deterministic causing the delay before the context gets set.

3 Likes