I’ve been experimenting with a small pattern for using AI in apps.
Instead of replacing the UI with a chat, I tried using it to help users build filters.
So a user can type something like: “customers who spent more than $500 in the last 3 months and haven’t ordered recently”
and the app turns that into a struct and runs a normal query (Ecto in this case).
The important part for me was keeping things predictable:
- the model doesn’t generate queries
- everything goes through a schema
- you can still edit the filters manually
Wrote a short post about it:
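To make the shape of the idea concrete, here's a minimal sketch of the "struct, not queries" part. All field names and the validation rules are illustrative, not from the actual post; the real version uses an Ecto schema/changeset, but the same principle applies — the model only fills a constrained struct, and the app does the rest:

```elixir
# Hypothetical sketch: the LLM's JSON output is cast into this struct;
# unknown keys and wrong types are rejected, and the app builds the
# actual Ecto query from the validated struct. Field names are made up.
defmodule CustomerFilter do
  defstruct min_spend: nil, months_back: nil, last_order_before: nil

  @allowed_keys ~w(min_spend months_back last_order_before)

  # Cast an untrusted map (e.g. decoded LLM output) into the struct.
  def cast(params) when is_map(params) do
    with :ok <- check_keys(params),
         :ok <- check_types(params) do
      {:ok, struct(__MODULE__, Map.new(params, fn {k, v} -> {String.to_existing_atom(k), v} end))}
    end
  end

  defp check_keys(params) do
    case Map.keys(params) -- @allowed_keys do
      [] -> :ok
      bad -> {:error, {:unknown_keys, bad}}
    end
  end

  defp check_types(params) do
    ok? =
      Enum.all?(params, fn
        {"min_spend", v} -> is_number(v) and v >= 0
        {"months_back", v} -> is_integer(v) and v > 0
        {"last_order_before", v} -> is_binary(v)
        _ -> false
      end)

    if ok?, do: :ok, else: {:error, :invalid_types}
  end
end
```

Because the model never touches SQL or Ecto, a bad response can only produce a rejected struct, never a bad query — and the same struct can back a manual filter UI.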
Cool! I thought about something similar a while ago: instead of creating filters you could give it a hook point where it generates a function that receives a collection of items and must return the HTML + CSS based on user instructions. Like “show items as grid/list”.
Definitely more complex and not sure if it’s useful at all but would be a fun experiment
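One way to keep that experiment on the safer side: rather than executing model-written code directly (which raises obvious sandboxing questions), the "generated function" can be simulated by having the model pick from whitelisted layouts. A purely illustrative sketch, with all names hypothetical:

```elixir
# Illustrative sketch of the render-hook idea: the model's instruction
# ("show items as grid/list") is reduced to a layout choice, and the
# app owns the actual HTML generation.
defmodule RenderHook do
  # items: list of maps with a "name" key; layout: "grid" | "list"
  def render(items, "grid") do
    cells = Enum.map_join(items, "", fn i -> ~s(<div class="cell">#{i["name"]}</div>) end)
    ~s(<div class="grid">#{cells}</div>)
  end

  def render(items, "list") do
    lis = Enum.map_join(items, "", fn i -> "<li>#{i["name"]}</li>" end)
    "<ul>#{lis}</ul>"
  end
end
```

Letting the model emit arbitrary HTML + CSS would be the fun (and scary) extension of this.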
We are in the process of beta testing a feature more or less identical to this, although we needed to support fairly complex filters across multiple tables and in some cases with multiple params. We already had a conventional UI built out, and the backend to support the filter logic.
One thing we did was pull the existing function and module docs out of the filter logic with Code.fetch_docs to build the LLM context. Then we added a bunch more detail and examples to the function docs until the LLM was able to reliably return useful data. The filter logic also already had casting/validation, so we could just use that. Took some fiddling, but the entire feature ended up being only a thin wrapper around ReqLLM, a context generator that pulled in all the docs, and an endpoint that called out to those, applied the validation, and returned the result. And as a bonus we had more general-purpose docs than when we started. Turns out a lot of the instructions that help Claude also help human devs.
I maintain a small utility for dumping docs, and as a next step I'm going to try using it to extract the “context baking” logic from runtime code.
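For anyone curious about the Code.fetch_docs approach: it's a standard-library call that reads the EEP 48 docs chunk from a compiled module. A hedged sketch of what a context generator built on it might look like (the module name and Markdown layout here are assumptions, not the poster's actual code):

```elixir
# Sketch: collect per-function docs from a module into one Markdown
# string suitable for pasting into an LLM prompt. Code.fetch_docs/1
# returns the :docs_v1 tuple defined by EEP 48.
defmodule DocContext do
  def for_module(module) do
    {:docs_v1, _anno, :elixir, _format, _moduledoc, _meta, entries} =
      Code.fetch_docs(module)

    entries
    |> Enum.flat_map(fn
      # Keep only documented functions; skip :hidden/:none docs,
      # types, callbacks, and macros for this sketch.
      {{:function, name, arity}, _anno, _sig, %{"en" => doc}, _meta} ->
        ["## #{name}/#{arity}\n#{doc}"]

      _other ->
        []
    end)
    |> Enum.join("\n\n")
  end
end
```

The nice property the poster describes falls out of this: the richer you make your `@doc` strings, the better the LLM context gets, with no separate prompt file to maintain.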
Definitely a fun experiment! The challenge of validating the response sounds complicated, though.
So cool that you are releasing something similar in production!
Using Code.fetch_docs to build the LLM context is really smart!
Would love to see some examples of that ReqLLM layer if possible!
> Turns out a lot of the instructions that help Claude also help human devs.
Agreed.