Resolution Error Handler for Absinthe

Greetings everyone!


My team is new to Elixir and is using it with Absinthe to build a backend-for-frontend (BFF) service in which all data is resolved by calling backend services; that is, there is no local database or Ecto involved.

We had the following design goal from the outset of the project: if the resolution of a query or field fails or crashes (e.g. a timeout or circuit-breaker interruption), the resolution of the other fields should not be affected.

In other words, when field or query resolution crashes:

  1. A well-structured GraphQL error message is served to the user informing them why the field didn’t resolve.
  2. A node resolution failure should be contained to that node; the rest of the graph should continue to process and be served to the client.

We’ve implemented a first attempt at meeting these design goals using a custom resolve macro and special middleware that traps errors in the resolver and sets errors on the field if the resolver crashes.
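For context, a minimal sketch of what such an error-trapping middleware might look like (module and message names here are hypothetical, not our actual implementation):

```elixir
defmodule MyApp.Middleware.TrapErrors do
  # Sketch: wraps an underlying resolver function and converts crashes
  # into Absinthe-style {:error, message} results, so one field's crash
  # doesn't take down the whole request.
  @behaviour Absinthe.Middleware

  @impl true
  def call(resolution, resolver_fun) do
    result =
      try do
        resolver_fun.(resolution.source, resolution.arguments, resolution)
      rescue
        exception ->
          {:error, "Field failed to resolve: " <> Exception.message(exception)}
      catch
        :exit, _reason ->
          {:error, "Field resolution exited (timeout or circuit interruption)"}
      end

    Absinthe.Resolution.put_result(resolution, result)
  end
end
```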

Nevertheless, we’ve recently introduced the Dataloader plugin using the KV source, and we’ve noticed that crashes in the plugin’s before_resolution hook crash the entire request. We’d like to use Dataloader more in the project to help improve our API call efficiency, but we’re not sure how to handle this situation in the most maintainable way.


  1. If a dataloader batch fails to resolve (and the crash is handled through an egregious hack), Dataloader continues to retry the pending batch, which seems to extend the resolution reduction because the pending_batches? state isn’t reset and it keeps attempting to resolve the failing batch. Is it possible to run the batch, handle errors, and reset the batch state in a middleware or plugin?
  2. It appears this will need to be handled within a custom Dataloader source, since the pending_batches state needs to be reset or updated if a batch cannot be resolved. Is that the best approach?
  3. Is the original design goal a fool’s errand? :slight_smile:

Any input much appreciated. Thank you.


Hey @gfmurphy!

This is a great set of questions :slight_smile: I think the stated design goal is excellent. It’s also entirely possible that there is a bug in Dataloader’s pending_batches? tracking. However, there’s a fundamental challenge worth addressing here. Consider the following GraphQL query:

  posts {
    flags { value }
    author { name }
  }

Errors in a resolver

Absinthe takes a “data driven” approach to errors, which is to say that if you want to tell Absinthe about an error on the flags field (maybe the current user isn’t authorized to view the flags on a post), you return {:error, message} instead of raising. You already know this, but it’s worth emphasizing that Absinthe does not wrap anything in a try. If you’re running code that may raise, but you still want Absinthe to resolve the field, you’ll need to catch the exception yourself and turn it into an {:error, message}.
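Concretely, that pattern might look like this in a resolver (BackendClient and its function are hypothetical stand-ins for whatever backend call may raise):

```elixir
# Hypothetical resolver: the backend call may raise, so we rescue and
# return a data-driven error instead of letting the exception propagate
# and kill the whole request.
resolve fn post, _args, _resolution ->
  try do
    {:ok, BackendClient.fetch_flags!(post.id)}
  rescue
    exception ->
      {:error, "Could not load flags: " <> Exception.message(exception)}
  end
end
```

With this in place, a crash in the flags lookup surfaces as a per-field error in the response while the sibling fields still resolve.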

Errors in a batch

Now, as you observed, this gets tricky with any kind of batch loading. If we assume for a second that the flags field on a post is just a JSONB column or something, it’ll get handled one at a time for each post.

If we’re trying to batch load the author though, we’re giving up on the idea of isolated execution. When before_resolution runs a batch, if the SQL is bad or something and there’s an exception, that’s going to ruin the whole batch. Ideally it would just ruin the author field, not the entire request or the other batches.

If you’re using the KV source, you have full control over the batch function that runs. You can wrap whatever calls you’re making in a try, and then return, say, an empty map if an error happens. For the Ecto source this isn’t yet possible, although I’m in the middle of some refactoring that should make it doable.
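A sketch of that approach, assuming a hypothetical BackendClient and a :backend source name (only the Dataloader.KV/Dataloader calls are real API):

```elixir
# Dataloader.KV source whose batch function traps exceptions itself.
# On failure we return an empty map, so every key in the failed batch
# simply loads as nil instead of crashing the request.
source =
  Dataloader.KV.new(fn _batch_key, ids ->
    try do
      ids
      |> BackendClient.fetch_authors!()
      |> Map.new(fn author -> {author.id, author} end)
    rescue
      _exception -> %{}
    end
  end)

loader = Dataloader.new() |> Dataloader.add_source(:backend, source)
```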

This is a bit abstract, so please feel free to follow up.


As I’ve been playing with this, I think the right way forward is for Dataloader sources to log errors that happen in batch functions instead of raising. A failed batch will just return an empty result map, and any subsequent lookups will simply return nil. This should mean you don’t need a try inside the batch functions themselves.
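Under that behavior, a resolver could translate a nil lookup into a per-field error. A sketch using Absinthe’s dataloader helpers (the :backend source and :author batch key are placeholders):

```elixir
# If a batch fails and its lookups come back nil, surface that as a
# field-level error rather than a crash.
import Absinthe.Resolution.Helpers, only: [on_load: 2]

resolve fn post, _args, %{context: %{loader: loader}} ->
  loader
  |> Dataloader.load(:backend, :author, post.author_id)
  |> on_load(fn loader ->
    case Dataloader.get(loader, :backend, :author, post.author_id) do
      nil -> {:error, "Author is temporarily unavailable"}
      author -> {:ok, author}
    end
  end)
end
```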

I appreciate the quick response @benwilson512! The recent change on master looks good to me. Thanks!
