How to handle Absinthe object data that's spread over multiple Ecto schemas


What’s the recommended best practice for dealing with the situation where the GraphQL object you want to return requires data that is spread over multiple DB tables and therefore multiple Ecto schemas?

One option is just to get all the data I need from an Ecto query, build a custom map, and return that. However, I’ve found that returning structs from some context functions and maps from others makes downstream logic a little more cumbersome. For example, I have the following Absinthe object:

  object :block do
    field(:public_id, non_null(:string), name: "id")
    field(:content, non_null(:json))

    field :author, :user do
      resolve(&get_user/3)
    end
  end
The get_user resolver then pattern matches on the parent fields. If the parent has a user_id that matches the current user in the resolution context, I return that user directly; otherwise I fetch the user from the database:

  def get_user(%_{user_id: id}, _args, %{context: %{current_user: user}}) do
    if user.id == id do
      {:ok, user}
    else
      {:ok, Accounts.get_user(id)}
    end
  end

This pattern matches on a struct of any kind, but if I start returning maps then I need to account for that too by adding another function clause.

The other approach I was considering was to add an embedded schema that I can use instead of a plain map. I don’t have any particular need for casting or validating this data, though, so it kind of feels like I’m using a struct for no real reason.
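For reference, a minimal sketch of what I mean by the embedded-schema approach (module and field names are hypothetical):

```elixir
defmodule MyApp.BlockView do
  # Hypothetical module: embedded_schema gives a real struct
  # (%MyApp.BlockView{}) without a backing table, so downstream
  # code that pattern matches on structs keeps working.
  use Ecto.Schema

  @primary_key false
  embedded_schema do
    field :public_id, :string
    field :owner, :map
    field :revision, :integer
  end
end
```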

How has everyone else dealt with this?

If you just make this change:

-  def get_user(%_{user_id: id}, _args, %{context: %{current_user: user}}) do
+  def get_user(%{user_id: id}, _args, %{context: %{current_user: user}}) do

Then your code will work just fine with both structs and arbitrary maps.
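The reason this works: structs are just maps with a `__struct__` key underneath, so a plain map pattern matches both shapes. A quick illustration (with a throwaway struct):

```elixir
defmodule Demo do
  defstruct [:user_id]
end

# A plain map pattern matches a struct...
%{user_id: a} = %Demo{user_id: 1}

# ...and an arbitrary map equally well.
%{user_id: b} = %{user_id: 2}
```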

As for your original question, I see no way out but to use SQL JOINs (Ecto assocs). An embedded schema would work as well, but it really depends on whether you want to run queries against those models or not. If you do, they shouldn’t be embedded (most of the time).
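A sketch of the assoc-based approach, assuming hypothetical Block/User schemas where Block has real associations to its author and revision:

```elixir
import Ecto.Query

# Hypothetical: Block belongs_to :author and has_one :revision.
# With real associations you can load everything in one query and
# return a single %Block{} struct, instead of stitching a map
# together from separate structs by hand.
def get_block(public_id) do
  from(b in Block,
    where: b.public_id == ^public_id,
    preload: [:author, :revision]
  )
  |> Repo.one()
end
```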

Thanks, for some reason I thought that wouldn’t match. A bit silly of me.

Hmm, I don’t think joins solve the problem by themselves. I can get all the data I need using joins (which I’m currently doing), and that data comes back as individual schema structs, but the final result I want to return to the user is a combination of those structs. So I can build a map like this and return it:

# Let's say my query returned three structs, %A{} = a, %B{} = b, and %C{} = c
object_to_return = %{
  public_id: a.public_id,
  owner: b.owner,
  revision: c.revision
}

{:ok, object_to_return}

This doesn’t feel very nice, as I may need the same object returned from multiple resolvers. I guess I was hoping there was a nice pattern for taking data spread across multiple tables and mapping it to an Absinthe object.
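One way to at least keep that projection in a single place is a small constructor function that every resolver calls (a sketch; the helper name and the public_id field are my own):

```elixir
# Hypothetical helper: centralizes the structs-to-map projection so
# every resolver returns the same shape without repeating the literal.
defp to_block(%A{} = a, %B{} = b, %C{} = c) do
  %{
    public_id: a.public_id,
    owner: b.owner,
    revision: c.revision
  }
end
```

Each resolver then ends with `{:ok, to_block(a, b, c)}` instead of an inline map.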

Also, I have another question! For child fields whose resolver needs to fetch more data depending on the parent, how do you usually deal with this? I want to avoid having a ton of very similar resolvers that differ only in what they expect from the parent field. For example:

object :example do
    field :owner, :user do
      resolve(fn example, _, %{context: %{current_user: _user}} ->
        {:ok, Accounts.get_user(example.user_id)}
      end)
    end
  end

It feels like I will end up with a ton of resolvers for fetching a user that differ only in what the parent returns. And the parent resolver then has to be sure to return the user_id (in this case), not because that is part of its return object but because the child resolver needs it. It just feels like this is slowly getting more and more difficult to handle and I don’t know how to keep it under control.
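One way to collapse those near-duplicate resolvers is a single function with multiple clauses, each matching a different parent shape, and a fallback on the context. A sketch, assuming the parents carry a user_id or (hypothetically) an owner_id:

```elixir
# One resolver, several heads: each clause handles a different parent
# shape, so every :user field can point at the same function.
def get_user(%{user_id: id}, _args, _resolution),
  do: {:ok, Accounts.get_user(id)}

# owner_id is a hypothetical alternative parent field.
def get_user(%{owner_id: id}, _args, _resolution),
  do: {:ok, Accounts.get_user(id)}

# Fall back to the current user when the parent carries no user reference.
def get_user(_parent, _args, %{context: %{current_user: user}}),
  do: {:ok, user}
```

Each field then just declares `resolve(&get_user/3)` and the clause heads absorb the per-parent differences.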