I’ve aggregated multiple calls to different tables into a single query with multiple joins, where there is a common value (ID) shared between them. So I have something like:
Table1
|> from(as: :table_1)
|> join(:left, [t], t2 in Table2, on: t.id == t2.id, as: :table_2)
...
|> where([t], t.id == ^id)
|> select([table_1: table_1, ..., table_9: table_9], %{table_1: table_1, ... table_9: table_9})
|> Repo.one()
This works fine: it has a different structure per field and requests all the information I need. Since this can grow in the future, and I may decide to request more or fewer tables depending on the use case, I want to build this all dynamically.
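The join side is straightforward to make dynamic. Roughly like this (a sketch with a hypothetical helper, where `tables` pairs a binding name with a schema module, and assuming Ecto accepts an interpolated `:as` option on joins):

```elixir
# Hypothetical helper: `tables` is e.g. [table_2: Table2, table_3: Table3].
# Each entry becomes a left join on the shared id, aliased with a
# runtime binding name via as: ^name (assumes Ecto supports pinned :as).
defp apply_table_joins(query, tables) do
  Enum.reduce(tables, query, fn {name, schema}, query ->
    join(query, :left, [t], x in ^schema, on: t.id == x.id, as: ^name)
  end)
end
```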
I have made this almost work, except for the select function, for which I cannot find a way to reproduce the same behaviour with select_merge. I’ve changed it to the following structure:
defp apply_table_select(query, tables) do
  query = select(query, [i], %{table_1: i})

  Enum.reduce(tables, query, fn table, query ->
    select_merge(query, [{^table, m}], %{^table => m})
  end)
end
But this doesn’t work. I need to add the ^ because otherwise I get a compile-time error (which makes sense, since I’m referring to a variable). With the pin in place, I get a runtime error instead:
** (DBConnection.EncodeError) Postgrex expected a binary, got :table_2. Please make sure the value you are passing matches the definition in your table or in your query or convert the value accordingly.
The issue appears to be related to how I added the field to select_merge:
#Ecto.Query<from i0 ..., select: %{table_1: i0, table_2: t1}>
vs
#Ecto.Query<from i0 ..., select: merge(%{table_1: i0}, %{^:table_2 => t1})>
I’m not sure if this is a limitation of Ecto or if I’m doing something wrong.
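One direction I’m considering is building the whole select map out of `dynamic/2` expressions and interpolating it into `select` in one go, instead of calling `select_merge` per table. That way the map keys are computed at runtime outside the query macros, so no atom is ever sent to the database as a parameter. This is only a sketch, assuming a recent Ecto (3.9+, where maps of dynamics are accepted in `select`); I haven’t verified it:

```elixir
# Sketch (untested): build the select map at runtime from dynamic/2
# expressions, then interpolate the whole map into select/3 once.
# Assumes Ecto >= 3.9, where maps of dynamics are supported in select.
defp apply_table_select(query, tables) do
  fields =
    Enum.reduce(tables, %{table_1: dynamic([i], i)}, fn table, acc ->
      # ^table pins the named binding; the map key itself is plain data here,
      # not a query parameter, so Postgrex never sees the atom
      Map.put(acc, table, dynamic([{^table, m}], m))
    end)

  select(query, ^fields)
end
```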
EDIT: Just found this topic, which describes basically the same issue.