Least/Greatest in Fragment with Dynamic Fields

I am trying to wrap my head around how to create an Ecto Fragment that performs a Postgres LEAST/GREATEST query against an unknown number of fields.

For example, a client wants to compare any number of fields in my schema: could be one field, could be ten fields. The issue is that I have no way of knowing ahead of time how many fields will be part of the query.

In SQL terms, it’s something like this:

SELECT 
  LEAST(field_1, field_2, ..., field_n)
FROM
  some_table
;

I’ve tried a few approaches. To me it seems like it should be something close to:

fields = ["some_field", "another_field"]

query = from m in Model,
  select: %{least: fragment("least(?)", ^fields)}

Repo.all(query)

The way you do dynamic fields in Ecto is with the field/2 macro: field(m, ^field_name), where field_name is an atom. I think what you're really asking for, though, is varargs in fragments, which isn't supported directly. You could probably get there with a macro, as long as the number of fields is fixed at compile time (at the call site): the macro would dynamically build the fragment invocation based on the size of the list passed to it.
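
For reference, here is a rough sketch of that approach. The module name, the least/2 macro, and Model are all hypothetical; the important constraint is that the list of field atoms has to be a literal at the call site so the macro can count it at compile time:

defmodule LeastFragment do
  # Expands to fragment("least(?, ?, ...)", field(m, :a), field(m, :b), ...)
  # from a compile-time literal list of field atoms.
  defmacro least(binding, fields) when is_list(fields) do
    placeholders = Enum.map_join(fields, ", ", fn _ -> "?" end)
    args = Enum.map(fields, fn f -> quote(do: field(unquote(binding), unquote(f))) end)

    quote do
      fragment(unquote("least(#{placeholders})"), unquote_splicing(args))
    end
  end
end

# Usage - the field list is a literal in the source, not built at runtime:
import Ecto.Query
import LeastFragment

from(m in Model, select: %{least: least(m, [:some_field, :another_field])})
|> Repo.all()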


The problem is that I don’t know how many arguments there will be at compile time. Most of the time it will be one or two, but there’s no real upper bound on how many there could be.

I had thought of making macros to handle up to a limit of fields, like:

defmacro(a)
defmacro(a, b)
defmacro(a, b, c)
defmacro(a, b, c, ..., whatever)
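
Spelled out a bit, those per-arity clauses would be something like this (names made up; each clause hard-codes its own placeholder string):

defmodule MyLeast do
  defmacro least(a), do: quote(do: fragment("least(?)", unquote(a)))
  defmacro least(a, b), do: quote(do: fragment("least(?, ?)", unquote(a), unquote(b)))
  defmacro least(a, b, c), do: quote(do: fragment("least(?, ?, ?)", unquote(a), unquote(b), unquote(c)))
  # ...and so on, up to whatever arbitrary cap I pick.
end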

But to me that’s not a great approach and there might be something better I could do.

How would you call those different macro arities if you didn’t know the number of arguments at the call site? Are you sure the call site doesn’t know the number of args?


The project I’m working on lets users upload CSVs to a Postgres database, and it provides an API and web app for other people to browse the data (it’s an open data project). I accomplish this by storing metadata about each data set (name, source, field/column definitions) in the database, then creating the schemas in memory and caching them in a GenServer. With the metaprogrammed schemas in hand, I can do normal Ecto querying against the data set tables and serve the data via Phoenix (web and API).

So, for example, one data set has something like 20 fields that can be parsed and used to filter/order the data; another, narrower data set has 5; et cetera. So while the call site will know how many arguments it’s calling with when a user composes a query (either via the app or via a query string on the API call), I have no way of knowing ahead of time while writing the code.
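
For what it’s worth, turning the user-supplied field names into schema atoms safely (variable names here are made up) can be done by filtering the schema’s own field list rather than converting arbitrary strings to atoms:

# Field names arrive as strings from the web app or the API query string.
requested = ["field_1", "field_3"]

# __schema__(:fields) returns the field atoms defined on the (metaprogrammed) schema.
allowed = Model.__schema__(:fields)

fields = Enum.filter(allowed, fn field -> Atom.to_string(field) in requested end)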

My solution at the moment is using a query template, rendering it with EEx and firing the result through Repo.query.
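
A minimal sketch of that approach (the table and column names are illustrative, and the column names have to be validated beforehand, since they are interpolated into the SQL rather than bound as parameters):

fields = ["field_1", "field_2", "field_3"]

template = """
SELECT LEAST(<%= Enum.join(columns, ", ") %>) AS least
FROM <%= table %>
"""

sql = EEx.eval_string(template, columns: fields, table: "some_table")

{:ok, result} = Repo.query(sql)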


You are compiling your query template at runtime with EEx? Then you do, in effect, know the number of args at compile time, since you compile dynamically - but I doubt another macro here would bring any real benefit over just building your call to fragment dynamically.
