I’ve been slowly getting up to speed on GraphQL using Absinthe, and I wondered if anyone has done work on a dynamic query that controls which columns are selected based on which fields a GraphQL query asks for. I.e., instead of doing a SELECT *, do a SELECT a, b, c depending on the exact fields that were requested in the GraphQL query. It took me a very long time to develop an Ecto query that could dynamically handle filter parameters, so I am hoping someone has a nice pattern they could share to do the same type of thing for the select part of the query. Or is thinking about that over-optimizing GraphQL?
I believe you’re looking for Absinthe.Resolution.project/1
source: https://github.com/absinthe-graphql/absinthe/issues/287
Wow, yes, that’s it. That doesn’t appear to have been mentioned in the Absinthe book. Do you happen to have an example of the output of project() being used by an Ecto query?
It depends a bit on your implementation but there is a small example mentioned in Absinthe.Resolution.project/2
I think this should be the first step: https://gist.github.com/hl/f19ae0bdbc981a24bac3d7fe170134c7
Example output:
[
  %{
    "edges" => [
      %{"node" => [%{"shop" => [%{"domains" => ["host", "status"]}]}]}
    ]
  }
]
The next step would be to filter out the keys that you actually need and use them in your Ecto query.
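To make that step concrete, here is a small sketch (not production code) of walking the nested map/list structure shown above, collecting the leaf field names under a given path, and feeding them to Ecto’s select. The module name, the `projected` variable, and the `Domain` schema are all made up for illustration:

```elixir
defmodule ProjectionHelper do
  # Walk the projected tree and return the leaf field names found
  # under the given path of keys.
  def fields_for(tree, []), do: tree

  def fields_for(tree, [key | rest]) when is_list(tree) do
    Enum.flat_map(tree, fn
      %{^key => children} -> fields_for(children, rest)
      _ -> []
    end)
  end
end

# Usage, assuming `projected` holds the structure from the example output:
#
#   fields = ProjectionHelper.fields_for(projected, ["edges", "node", "shop", "domains"])
#   #=> ["host", "status"]
#
#   columns = Enum.map(fields, &String.to_existing_atom/1)
#   query = from(d in Domain, select: ^columns)
```

Ecto accepts a pinned list of field atoms in `select:`, which gives you exactly the `SELECT a, b, c` behavior from the original question. Using `String.to_existing_atom/1` (rather than `String.to_atom/1`) avoids creating atoms from arbitrary user input.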
Just remember that your Absinthe schema doesn’t necessarily have the same fields as your Ecto schema. You’ll need to build something that handles the translation for you.
Example:
I have a Product which contains a currency and a price field. In my Absinthe schema I have a field display_price that creates a string based on the currency and price fields. In this case you need to translate display_price into currency and price when selecting the fields in your Ecto schema.
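One way to handle that translation is a plain map from Absinthe field names to the Ecto columns they depend on. This is just a sketch; the module, mapping, and field names are hypothetical:

```elixir
defmodule FieldTranslation do
  # Hypothetical mapping from Absinthe schema fields to the Ecto columns
  # they require. Virtual fields like display_price expand to several columns.
  @field_to_columns %{
    "displayPrice" => [:currency, :price],
    "currency" => [:currency],
    "price" => [:price]
  }

  # Expand the requested GraphQL field names into a deduplicated
  # list of Ecto columns to select.
  def columns_for(requested_fields) do
    requested_fields
    |> Enum.flat_map(&Map.get(@field_to_columns, &1, []))
    |> Enum.uniq()
  end
end

# FieldTranslation.columns_for(["displayPrice"]) expands to [:currency, :price],
# which can then go into `select: ^columns` in the Ecto query.
```

Unknown fields fall through to an empty list, so a client asking for a field you haven’t mapped simply contributes no columns rather than crashing the query.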
Can you explain why it took a long time?
Very likely: yes.
When working with database-backed applications, it’s important to keep in mind that databases are awesome and can do a lot of hard work for you. For example: they optimize queries behind the scenes and utilize advanced caching.
The easiest way to sabotage the database’s ability to cache things is to explode the number of different queries it receives. If each query you send to the database is perfectly hand-crafted to what the user wants to see, and there is big variance in the queries users can issue, the database will not be able to cache as much.
This is why sometimes a very effective optimization is to actually load more data from your database, and then filter it out within Elixir code. That’s precisely what will happen automatically for you if you don’t change the default behavior of Absinthe here: all the fields from the database rows will be loaded, the database will be able to cache this effectively, but you will transfer over the wire only the subset of data the user requested.
My rule of thumb is to always load all the columns from the database, unless some of these columns hold large amounts of data/text/blobs, and only think about optimizing if it becomes an issue.