How to emulate upcoming `load_async` in Rails 7?

I was reading this blog post and found this part really interesting:

Rails 7 introduces a method load_async to schedule the query to be performed asynchronously from a thread pool. If the result is accessed before a background thread had the opportunity to perform the query, it will be performed in the foreground.

The implementation seems to schedule the operation on a thread pool so it executes in the background without blocking the process. This seems really close to one of the BEAM's strengths.

The example in the Rails PR would be equivalent to something like:

def index(conn, _params) do
  # Ideally the posts and categories queries would run in parallel,
  # as they do with Rails' load_async
  posts = Repo.all(posts_query)
  categories = Repo.all(categories_query)

  render(conn, posts: posts, categories: categories)
end

My first thought would be to wrap the Repo calls in Task.async and Task.await. But Task.await is blocking, so to get the categories we would first wait for the posts, even though they may be used in a different order in the template.
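That first thought could be sketched like this (the fetch_posts and fetch_categories functions are hypothetical stand-ins for the Repo.all calls, so the snippet is runnable on its own):

```elixir
# Hypothetical stand-ins for Repo.all(posts_query) / Repo.all(categories_query)
fetch_posts = fn ->
  Process.sleep(50)
  [:post1, :post2]
end

fetch_categories = fn ->
  Process.sleep(50)
  [:cat1]
end

# Both tasks start running as soon as Task.async is called
posts_task = Task.async(fetch_posts)
categories_task = Task.async(fetch_categories)

# Task.await blocks the caller until each result arrives,
# in the order we await them
posts = Task.await(posts_task)
categories = Task.await(categories_task)
```

Note that both queries are already executing concurrently once Task.async returns; the awaits only fix the order in which the caller receives the results.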

So, my question is: does anyone have an idea about how to emulate this into a Phoenix application? Could this be an interesting functionality in Ecto? Does any of this make any sense :joy:?


Seems like a pretty good fit for Task.await_many/2:

def index(conn, _params) do
  # The posts and categories queries run in parallel
  [posts, categories] =
    Task.await_many([
      Task.async(fn -> Repo.all(posts_query) end),
      Task.async(fn -> Repo.all(categories_query) end)
    ])

  render(conn, posts: posts, categories: categories)
end

EDIT: fixed the code; sorry didn’t try this out in a real app.


That’s going to be the tricky part: having any access wait for the actual data and return it. Implementing Enumerable and Access might get you part of the way, but you’ll still lose useful tools like pattern matching.
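To make that concrete, here is a hypothetical sketch of such a wrapper (LazyResult and its functions are invented names, not an existing library): the query runs in a Task, and the await happens only when the result is requested.

```elixir
defmodule LazyResult do
  @moduledoc """
  Hypothetical sketch: run a query function in a background Task
  and block only when the caller asks for the value.
  """
  defstruct [:task]

  # Starts the work immediately in a background task
  def new(fun), do: %__MODULE__{task: Task.async(fun)}

  # Blocks only if the background task has not finished yet
  def value(%__MODULE__{task: task}), do: Task.await(task)
end

lazy = LazyResult.new(fn -> Enum.map(1..3, &(&1 * 2)) end)
result = LazyResult.value(lazy)
# => [2, 4, 6]
```

This is exactly where the ergonomics suffer: every caller has to go through value/1, so plain pattern matching on the underlying list (e.g. `[first | rest] = posts`) no longer works on the wrapper itself.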

I would say that something like what @stefanchrobot showed makes sense, as it allows running several queries in parallel, reducing response time. However, in general, lazy fetching as described in the load_async documentation:

If the result is accessed before a background thread had the opportunity to perform the query, it will be performed in the foreground.

IMHO doesn’t make much sense, since in Elixir requests are naturally handled concurrently, which is not always the case in Ruby (due to the GIL).


As @hauleth notes, while this can improve the performance of a single request a little bit, it doesn’t actually improve the performance of the system as a whole in Elixir, because individual requests are already asynchronous with respect to one another. In Rails this is not the case, so adding an extra async call in each request can improve the performance of the system.

However if you want to go forward with it @stefanchrobot I think has roughly the right solution.


This Rails feature is aimed at alleviating a weakness in its runtime; a weakness that Elixir doesn’t have. As @hauleth and @benwilson512 are pointing out, the potential wins are negligible, while the complexity will explode and developer ergonomics will suffer.

There’s really no point in trying to emulate this in Elixir. Phoenix requests are already async enough. If you have a DB connection pool of 20, using 4 connections for a single request will starve your other requests of DB connections, on the off chance that 1 out of 5 requests completes a few milliseconds faster.

By using 1 connection per request we have more parallel bandwidth for users.

I get the idea, but IMO any further innovation in this area should come from the databases themselves. Maybe in the future you’ll be able to tell PostgreSQL “execute all five of these queries on a single connection and give me the results as they come in, regardless of order”, à la HTTP/2 and HTTP/3 in-connection streams.


FWIW, Repo.preload/3 will already do something similar if passed a list of associations to preload, when called outside of a transaction.
