How to create a "key" from a query/filter to enable caching?

Is there some way to make a key (a hash, a string, whatever, as long as it is consistent across queries with the same filters/params) from an Ash.Query struct?

Basically I want to add caching support for a resource. Right now I have this (based on @zachdaniel's cache in AshHq):

defmodule CacheAgent do
  @moduledoc false

  alias Core.Cnpj.LegalNature

  use Agent

  def start_link(_) do
    Agent.start_link(fn -> nil end, name: __MODULE__)
  end

  # Filters the full cached list in memory against the query's filter,
  # then applies offset, limit, and sort to the matches.
  def get(query) do
    %{filter: filter, offset: offset, limit: limit, sort: sort} = query

    Agent.get_and_update(__MODULE__, fn state ->
      state = fetch_state!(state)

      output =
        with {:ok, results} <- Ash.Filter.Runtime.filter_matches(LegalNature, state, filter) do
          results =
            results
            |> maybe_apply_offset(offset)
            |> maybe_apply_limit(limit)
            |> apply_sort(sort)

          {:ok, results}
        end

      {output, state}
    end)
  end

  defp maybe_apply_offset(results, nil), do: results
  defp maybe_apply_offset(results, offset), do: Enum.drop(results, offset)

  defp maybe_apply_limit(results, nil), do: results
  defp maybe_apply_limit(results, limit), do: Enum.take(results, limit)

  defp apply_sort(results, sort), do: Ash.Sort.runtime_sort(results, sort)

  # Lazily loads the full result set on first access; thereafter reuses it.
  defp fetch_state!(nil), do: Ash.read!(LegalNature)
  defp fetch_state!(state), do: state
end

Then I use it like this in a preparation:

defmodule Core.Cnpj.LegalNature.Actions.CachedRead.Preparations.CheckCache do
  @moduledoc false

  use Ash.Resource.Preparation

  def prepare(query, _opts, _context) do
    Ash.Query.before_action(query, fn query ->
      case CacheAgent.get(query) do
        # `set_result/2` short-circuits the data layer with the cached results.
        {:ok, results} -> Ash.Query.set_result(query, {:ok, results})
        {:error, _} -> query
      end
    end)
  end
end

This works, but I don’t like having to filter the full list (Ash.Filter.Runtime.filter_matches) every time I request results from the agent (in fact, doing this is slower than just getting the results directly from the DB without a cache).

I would much prefer to be able to send the query as a key to a cache and just get the result back, something like this:

Cachex.fetch(:my_cache, query, fn query ->
  case query |> Ash.Query.set_argument(:use_cache?, false) |> Ash.read() do
    {:ok, _} = output -> {:commit, output}
    {:error, _} = error -> {:ignore, error}
  end
end)

This doesn’t work because each query, even with the same arguments/filters/etc., has a different internal structure, so the keys won’t match.

Any good way to make this work well?

I would use the action and input arguments as the cache key as opposed to the query itself.

But what if the user does something like this?

MyResource |> Ash.Query.for_read(:cached_read, some_args) |> Ash.Query.filter(some_other_filter) |> Ash.read!()

In that case, using only the args as the key won’t work, since the cache will completely ignore the custom filters added on top of the action, right?

True, if you have that kind of case for these actions, then yes, you’d need to extract the relevant bits: filter, sort, limit, offset, etc.
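As a rough sketch of that extraction, something like the module below could turn the relevant query parts into a stable key. The field names (`action`, `filter`, `sort`, `limit`, `offset`, `arguments`) match `Ash.Query`, but using `inspect/1` to serialize the filter is an assumption here; a dedicated serializer would be more robust:

```elixir
defmodule QueryCacheKey do
  @moduledoc """
  Sketch: derive a stable cache key from the parts of a query that
  actually affect its results. Treat as a starting point, not a drop-in.
  """

  # Accepts anything query-shaped; `inspect/1` on the filter is an
  # assumption -- a proper filter serializer would be more robust.
  def for_query(%{action: action, filter: filter, sort: sort, limit: limit, offset: offset, arguments: args}) do
    action_name = action && action.name
    for_parts(action_name, inspect(filter), sort, limit, offset, args)
  end

  # `:erlang.phash2/1` is deterministic for equal terms within a runtime,
  # which is all an in-memory cache needs.
  def for_parts(action_name, filter_repr, sort, limit, offset, args) do
    :erlang.phash2({action_name, filter_repr, sort, limit, offset, args})
  end
end
```

Then `Cachex.fetch(:my_cache, QueryCacheKey.for_query(query), fn _ -> ... end)` keys on the semantics of the query rather than its internal structure.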

I wrote a custom per-function cache for a currency library in Python a long time ago, and what I found was that caching function calls in a general way is always somewhat slow at the key-computation step because of all the possible argument values. This is why Python’s built-in @functools.lru_cache decorator only works on hashable arguments and doesn’t attempt any stringification, or even sort keyword arguments. Since this is a specific use case in your application, it might be better to see whether you can refactor the workflows to provide extra information that can serve as a simpler cache key for the enclosing function calls, instead of trying to cache the query filtering itself.

Not sure if that’s feasible in your case, but if you haven’t explored it yet, it’s usually much simpler to cache function calls at the service layer than at the actual data-access layer. Cheers!
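To make the service-layer idea concrete, the sketch below caches one level up, where the caller already has a simple, explicit key. `LegalNatures.list_by_code/1`, the `:legal_natures` cache name, and the `:fetch_by_code` read action are all hypothetical names, and Cachex is assumed to be in your deps:

```elixir
defmodule LegalNatures do
  @moduledoc """
  Hypothetical service-layer wrapper: instead of deriving a key from an
  `Ash.Query`, the workflow supplies one explicitly.
  """

  alias Core.Cnpj.LegalNature

  def list_by_code(code) do
    # The key is plain data the caller already has -- no query hashing.
    Cachex.fetch(:legal_natures, {:by_code, code}, fn _key ->
      # `:fetch_by_code` is a hypothetical read action on the resource.
      case LegalNature |> Ash.Query.for_read(:fetch_by_code, %{code: code}) |> Ash.read() do
        {:ok, results} -> {:commit, results}
        {:error, _} = error -> {:ignore, error}
      end
    end)
  end
end
```

The tradeoff is that each cached entry point needs its own wrapper, but the keys stay trivial and the cache never has to understand query internals.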