Preloading, some of the time, all of the time, none of the time?

I have a conflict.

It seems the scaffolds imply that your repo interactions should live in a context, and that your controllers operate via those contexts.

This makes sense, except that most of the time, the controller knows more about what it wants from the repo.

  • Do I want just the User record?
  • Or maybe I want the User and all their posts.
  • Or just the first 10 posts and the comment count on each of those posts.
  • Maybe I just want the ID and email?

I could just keep adding to my get_user!(id) function:

def get_user!(id) do
  Repo.one!(from(u in User, where: u.id == ^id,
    preload: [:roles, posts: :comments, credentials: :key]))
end

but that starts to look like a real bad idea, really quickly.

Do I start writing get_user_with_posts and get_user_with_posts_comments? Or do I pass an options list to get_user and have it dispatch down to those?
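For illustration, the options-list route might look something like this (a minimal sketch; the :preload option name and the Repo.preload-after-fetch approach are assumptions, not the only way to do it):

# Hypothetical sketch: one get_user!/2 that takes an optional :preload list
def get_user!(id, opts \\ []) do
  User
  |> Repo.get!(id)
  |> Repo.preload(Keyword.get(opts, :preload, []))
end

# get_user!(id, preload: [posts: :comments])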

Really it feels like I should just be writing Ecto queries right in the controller, getting what I want for each action, but that feels like I'm misbehaving.

Perhaps the cleanest way would be writing a lot of my own chainable "query builders",

# e.g., with the bodies sketched in:
def query_for_user(id), do: from(u in User, where: u.id == ^id)
def query_user_posts(query), do: from(q in query, preload: :posts)
def query_user_roles(query), do: from(q in query, preload: :roles)
def query_run(query), do: Repo.one(query)

query_for_user(1) |> query_user_posts() |> query_user_roles() |> query_run()

but that almost feels like I'm just writing a bad abstraction around Ecto.Query, for maybe zero real gain.

I know the answer is most often "dO whaT fiTs youR proJect", which isn't a bad answer, but I guess I'm just hoping for some more idiomatic guidance from better programmers.

3 Likes

Why do you think those are the same request, and moreover that they should all be handled by querying for a user?

Do I want just the User record?

Accounts.get_user

Or maybe I want the User and all their posts.

Accounts.get_user + Blog.fetch_posts_by_author

Or just the first 10 posts and the count on those posts comments.

Accounts.get_user + Blog.fetch_latest_posts_by_author_with_comments

Maybe I just want the ID and email?

Accounts.get_user and don't care about the few additional bits and boops you fetch from the db.
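(A sketch of what one of those could contain; the author_id foreign key is an assumption:)

import Ecto.Query

# Hypothetical body for Blog.fetch_posts_by_author/1
def fetch_posts_by_author(%User{id: author_id}) do
  Repo.all(from p in Post, where: p.author_id == ^author_id)
end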

1 Like

I guess it's because I've been wired to think in relations/associations (cough, objects…) that my instinct is to want to "drill down" through a root node.

But yes, just seeing it written like that makes a lot of sense. I have some data and I can get the related data, but it doesn't have to be through that data, which also makes testing less of a setup/teardown nightmare.

I guess if I were to do something like:

<%# in a template %>

<%= for user <- users do %>
  <%= user.name %>
  <%= for post <- user.posts do %>
    <%= post.title %>
  <% end %>
<% end %>

I could instead do

<%# users_with_posts: a list of {user, posts} tuples %>
<%= for {user, posts} <- users_with_posts do %>
  <%= user.name %>
  <%= for post <- posts do %>
    <%= post.title %>
  <% end %>
<% end %>

I know it's most often case by case, but is it common to work that way in Phoenix? Get my user, then get the posts, and any notifications, and pass those to the view, instead of getting the user and relying on its structure?
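A hypothetical controller action in that style (function names assumed) could read:

# Fetch each piece separately and hand them to the template side by side
def show(conn, %{"id" => id}) do
  user = Accounts.get_user!(id)
  posts = Blog.fetch_posts_by_author(user)
  render(conn, "show.html", user: user, posts: posts)
end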

I think I have to rethink some of my schemas… :slight_smile:

Edit: I guess join+preload isn't possible in that style though. Do you just eat the cost until it's an issue?
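(For reference, the single-query join+preload being traded away looks roughly like this:)

# One round trip: join the association and preload from the join
from u in User,
  where: u.id == ^id,
  join: p in assoc(u, :posts),
  preload: [posts: p]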

Whenever I have problems like that I like to look into OSS. That's how hexpm does it, and I follow that pattern.

Edit: I realize that it doesn't answer your question fully, but it's worth checking how others deal with it.

It's likely going to be a mixture. See e.g. this hexpm code. It fetches data from various places, but within related functionality still uses preloads.

You are not writing an abstraction around Ecto.Query. You are making your own pipeline through which to enrich your query before it hits the DB. Pretty normal stuff, and a good practice too.

Have a granular API. Be explicit. No magic, no implicit behaviour; resist those, always. If you need to add three separate preloads and a sorting clause, pipe them all in the right endpoint code, one by one. A little more writing? Meh. At least everyone who checks the code (including your future self) will grasp it at first glance. Our time and attention are precious and scarce; their focus and efficiency must be prioritized.

Write code that's immediately recognizable by a human.

Looking at your post, you are on the right path. Here's how I'd slightly rewrite your code:

Users.get_query(id)
|> Users.preload_posts()
|> Users.preload_roles()
|> Repo.all()

I don't think you need to wrap Repo.one or Repo.all in your own functions, so your query_run() might be superfluous.
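Filled in, those helpers could be as thin as (a sketch; the bodies are assumptions):

import Ecto.Query

def get_query(id), do: from(u in User, where: u.id == ^id)
def preload_posts(query), do: from(u in query, preload: :posts)
def preload_roles(query), do: from(u in query, preload: :roles)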

1 Like

I use GraphQL and let the controller (LiveView) decide all of that, without needing anything in the service/context layer beyond batching support. :sunglasses:

1 Like

Agreed. GraphQL isn't trivial to make cache-friendly, but it absolutely excels at workflows like these, where your queries can vary wildly and the frontend wants more control over them.

I try to be explicit using this pattern…

# Context
def list_posts(queries \\ & &1) do
  Post
  |> queries.()
  |> Repo.all()
end

def filter_posts_by_author(query, author)...
def order_posts_by_published_at(query)...
def preload_post_collaborators(query)...

# Controller
posts = Blogging.list_posts(fn query ->
  query
  |> Blogging.filter_posts_by_author(user)
  |> Blogging.order_posts_by_published_at()
  |> Blogging.preload_post_collaborators()
end)

I'll typically end up with list_*, get, and get!, and other query functions that all take that optional argument. By default, the query will just be unscoped. The filter, order, and preload functions follow a consistent naming convention.
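A get variant under the same convention might look like (a hypothetical sketch):

# get/2 taking the same optional queries function
def get_post(id, queries \\ & &1) do
  Post
  |> queries.()
  |> Repo.get(id)
end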

4 Likes

While I like this pattern because it's quite "ergonomic" and gets the job done, doesn't it leak the fact that you're retrieving data from the db/working with a query?

Like, if you need to fetch some of the data before loading associations (lateral preloads with limits, i.e. the top 2 comments per blog post, or computing virtual fields), you have to first use that pattern and then pipe outside of the lambda:

posts = Blogging.list_posts(fn query ->
  query
  |> Blogging.filter_posts_by_author(user)
  |> Blogging.order_posts_by_published_at()
  |> Blogging.preload_post_collaborators()
end)
|> Blogging.preload_top_comments(limit: 2)
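(For reference, the lateral top-N query itself, following the pattern from the Ecto docs, might look roughly like this; the inserted_at ordering column is an assumption:)

import Ecto.Query

# "Top 2 comments per post" via a lateral join
top_comments = from c in Comment,
  where: parent_as(:posts).id == c.post_id,
  order_by: [desc: c.inserted_at],
  limit: 2

from p in Post, as: :posts,
  inner_lateral_join: c in subquery(top_comments), on: true,
  select: {p, c}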

Not a big deal, but I'm curious.

Yes it does, and that's okay.

There are ways to abstract it further, and I've gone down that rabbit hole several times. My conclusion is that most of the time it's not worth it.

3 Likes

I concluded pretty much the same. I've seen codebases like the one from changelog.org use the Repo directly in the controllers. So while it's often recommended to "hide" the repo and db stuff in the contexts, it seems in many cases convenience wins over that rule. I more or less wanted to see if that's generally a concern.

Realistically, you can hide the DB aspect if you really want to, namely by abstracting away any Repo function calls. Using all the Ecto.Changeset machinery by itself doesn't bind you to any DB at all. People use changesets in database-less applications all the time, with great success, myself included.

So there are still two layers at play, and they are best captured by the two separate libraries: ecto and ecto_sql. As long as you are using stuff from ecto, you can still make heavy use of its conveniences without being bound to a DB.
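For example, validating input with a changeset and no Repo at all (a minimal sketch using only ecto; the schema is made up):

defmodule Signup do
  use Ecto.Schema
  import Ecto.Changeset

  embedded_schema do
    field :email, :string
  end

  # Pure data validation; nothing here touches a database
  def changeset(attrs) do
    %__MODULE__{}
    |> cast(attrs, [:email])
    |> validate_required([:email])
    |> validate_format(:email, ~r/@/)
  end
end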

2 Likes

This is certainly no silver bullet, but I don't think this pattern necessarily lends itself to leaking knowledge of the query/db. It is just that some of the function names (preload_*) and the argument name (query) I have used do.

You're probably more so getting at the concept of the controller needing to split that into separate operations, though. I guess I'm just not bothered at all by piping to Blogging.preload_top_comments. If I were somehow thrown into this codebase and could only see the public interface for Blogging, all I'd know or care about is that list_posts takes an anonymous function with a pipe of other functions, and that preload_top_comments takes a list of Post structs.

That being said, if some super-specialized operation/query/multi is better served by a well-named dedicated function, I'll do that rather than trying to jam it into this pattern.

I've also at various times tried to avoid leaking any reference to the database (token_operator, for example), but the additional layer of abstraction can make it more difficult to reason about, refactor, and find unused functions.

1 Like

What I did was add a helper function in my context, something like this:

def preload_with(%User{} = user, keys) do
  Repo.preload(user, keys)
end

Whenever I need to preload, I call this function:

user_with_posts = 
  user_id
  |> Accounts.get_user()
  |> Accounts.preload_with([:posts])

I am still at an early stage of my project, where the behaviours are not clearly defined yet.

1 Like

I wrote a blog post about another option yesterday:

https://kobrakai.de/kolumne/data-fetching-using-livecomponents/

3 Likes

Something I do in certain cases (which I haven't seen so far in the comments…) is to pass in a preload option that takes a simple list of atoms representing the preloads I want. The getter function passes the base query and the options along to a function that then dispatches on the preloads requested:

defp preloads(query, opts), do: preloads(query, opts, Keyword.get(opts, :preload, []))

defp preloads(query, _opts, []), do: query

defp preloads(query, opts, [:boosts | rest]) do
  preloads(from(q in query, preload: :boosts), opts, rest)
end

defp preloads(query, opts, [_ | rest]) do
  preloads(query, opts, rest)
end

You get the idea :slight_smile: I also pass in all the options, as you notice, as sometimes other options influence how a preload gets implemented.
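A call-site then reads something like (the getter name is hypothetical):

# The caller opts in to exactly the preloads it needs
user = Accounts.get_user!(user_id, preload: [:boosts])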

The benefits of this approach, I have found, are several:

  • Lean: I only write the common getter functions, and don't have to worry about every specialization, as those are covered by the preload option
  • Stable: Adding more preloads doesn't require changing the API at all
  • DRY: I can do "fancy" things in those preloads, such as enforcing relevant filtering or sorting, doing sub-field preloading, etc., preventing (often fragile) duplicate code at the call-sites.
  • User-pay: If a caller doesn't request any preloads… it pays no performance penalty.
  • Explicit: It is clear at every callsite what data is being requested

I don't do this for every schema, obviously; just the ones that have larger sets of assocs and/or get used a lot in the code base. It is a useful pattern I have noticed I use regularly, however.

1 Like

It's a nice idea to have a separate component for different data-loading "contexts", but I would be hesitant to add a virtual field to a schema just for something like job_count.

Me too. But it's what I've done in the past, and I wanted to show a range of options even if not all are equally viable for the specific example. Take e.g. a computed value for the virtual field for the schema data itself, and it might suddenly feel much more doable.
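For instance, such a virtual field could be filled at query time (a sketch; the Company/jobs schema and field names are assumptions):

import Ecto.Query

# Assumes `field :job_count, :integer, virtual: true` on the schema
from c in Company,
  left_join: j in assoc(c, :jobs),
  group_by: c.id,
  select: %{c | job_count: count(j.id)}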