Ash Framework: Official LLM development tooling and guidance

Hey folks! I’ve begun putting together some concrete, framework-wide tooling and guidance on the usage of LLMs in development. The goal here is not to push folks into using LLMs. The goal is to make sure that those who want to use this technology have some resources to do so in the context of the tools they want to use. This is a very important distinction. I know there are folks out there experiencing some level of FOMO and/or worrying that they won’t have a good avenue to leverage technology that might (very important: might) change the industry. My goal as a framework author, and as someone who cares about this community, is to find ways to lift us all up together.

On to the concrete stuff: check out our new guide on working with LLMs.

A few of the bigger Ash packages now ship usage-rules.md files, which you can (for now; the task itself may change later if the pattern sees wider adoption) include in your own project with mix ash_ai.gen.usage_rules <your_rules_file> package1 package2 (see the example after the list). The list so far is:

  • ash
  • ash_ai
  • ash_oban
  • ash_json_api
  • ash_graphql
  • ash_postgres
  • ash_phoenix
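
For example, a hypothetical invocation gathering the rules for three of those packages into one file (CLAUDE.md is just one common choice of rules file; any path works):

```shell
# Collect the usage-rules.md contents of the listed packages into CLAUDE.md
mix ash_ai.gen.usage_rules CLAUDE.md ash ash_postgres ash_phoenix
```
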
14 Likes

Actually, I think this could be an opportunity for Ash to shine: its declarative model makes it well suited to generating high-quality code with an LLM, which could provide an even faster starting point for bringing products live.
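
To illustrate what “declarative” buys you here, a minimal sketch of an Ash resource (module names are hypothetical, and it assumes a MyApp.Blog domain defined elsewhere):

```elixir
defmodule MyApp.Blog.Post do
  # Hypothetical resource. A resource declares what exists
  # (attributes, actions, constraints) rather than imperative steps,
  # which constrains an LLM to a small, checkable surface area.
  use Ash.Resource,
    domain: MyApp.Blog,
    data_layer: Ash.DataLayer.Ets

  attributes do
    uuid_primary_key :id
    attribute :title, :string, allow_nil?: false
    attribute :body, :string
  end

  actions do
    # Standard CRUD, with create/update accepting only the listed attributes.
    defaults [:read, :destroy, create: [:title, :body], update: [:title, :body]]
  end
end
```
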

2 Likes

I concur.

1 Like

I’m curious, as the author of a relatively niche framework, and therefore as someone who of course knows Ash as well as it can be known: are current models capable of generating “useful” Ash code in your view?

There is some concern (including from myself) that if model-assisted programming does take off there will be a “winner takes all” economy where anything niche (perhaps Elixir itself) is squeezed out.

One thing I found interesting (and concerning) was the recent CaMeL research published last month, which detailed a potential solution to prompt injection attacks. The work itself is a big deal because it’s the first seemingly workable strategy for dealing with such attacks, but the concerning part is that the authors elected to create a DSL with Python syntax to model the permission system.

That is to say, the work had nothing to even do with Python and yet they still felt the need to create something which looks like Python just to improve the results.

On the other hand, it sounds like giving models clear instructions on how to use Elixir and Ash can be helpful. Empirically, can anyone share how much of a difference these things make?

1 Like

My personal and subjective experimentation shows a night-and-day difference with:

  • good rules files (a hypothetical sketch of one follows this list)
  • providing reference material for whatever you’re making
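
To give a flavor, an entirely hypothetical excerpt of the kind of rules file I mean (not quoted from any real usage-rules.md):

```markdown
## Ash usage rules (hypothetical excerpt)

- Work through resources and their actions; don't call Ecto.Repo functions directly.
- Expose operations as code interfaces on the domain instead of ad-hoc queries.
- After changing resources backed by AshPostgres, run `mix ash.codegen` to generate migrations.
```
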

As for winner-takes-all: it’s possible, but only if models don’t get good enough to do more than just parrot popular tools, at which point who cares, because it’s not going anywhere :laughing:.

1 Like

Models in their current form are always going to be better at parroting existing tools because that is how they’re designed. This is not to say they don’t exhibit a form of intelligence, but there is a substantial advantage for e.g. Python over e.g. Elixir because the model simply “knows” Python better.

That’s why I think the example I gave is so interesting: they were implementing a DSL, they could have chosen any syntax, so they went with Python knowing that the model would perform a bit better with it.

On the other hand, it could be (as discussed above) that something like Ash hits an abstraction sweet spot where, perhaps, models are able to produce more maintainable code due to the constraints given. Similar to the benefit for a human! It will be interesting to see if things play out this way.

If models were good enough to learn any tool with no training bias I think we would be in superintelligence territory, and at that point none of this would matter anyway. They’d be off maximizing paperclips and printing one-off ASICs for everything, or whatever :slight_smile:

1 Like

I guess by “good enough” what I actually mean is less about models becoming superintelligent and more about having established practices for getting LLMs to follow modern best practices. For example, everything they do is slightly out of date all of the time :grin:. So you get things like Tidewave & Context7 providing MCP tools to search docs etc., and tools like the one described here to gather rules and keep them up to date.

1 Like

One clear hint that we are headed down this path would be if this incentive structure inverts.

That is to say: library maintainers begin to avoid changing things so as not to “break” models. Developer experience no longer matters because the developers are no longer developing, so the focus turns to not stepping on the models’ toes.

Why do I need Ash 7 when Ash 6 works fine and my cluster is churning out code? It’s not like my LLMs have an opinion about the changes!

Essentially, Python and JS become an intermediate representation between English and Applications.

I’m not saying this will happen, and it sounds like a dystopian nightmare to me! But there are shades of it already happening, like that paper.

That concept is predicated on the idea that all code with the same ultimate effect is of the same quality. There are so many axes to consider, and mostly LLMs have the same difficulties that human beings do (plus a bunch we don’t). Too much complexity, too much spaghetti, makes it harder for LLMs and humans alike to change applications over time and for those applications to be understandable.

1 Like

I will place more weight on your experience here because I have not engaged in any LLM-assisted programming myself (I’m not convinced it’s useful at present, and I have privacy concerns).

But I am not convinced long-term that the failure modes of models and humans will be exactly the same, and I worry about excessive anthropomorphization here. Models can experience a codebase in a single forward pass, augmented by their training data. It could well end up being that abstractions which are useful to us humans (co-location of behavior, modularization, and so on) are less helpful to models. It could be that, if they get a bit smarter, they will be capable of maintaining that spaghetti - at least well enough to generate revenue.

After all, as we are all well aware, there are many companies making a great deal of money selling spaghetti even now :slight_smile:

Similarly, a self-driving car has little use for a stop sign as it can see in every direction at once. Our heads are on a swivel.

That’s fair, and long term things might change, but by my estimation we are at an absolute minimum multiple years away from LLMs being able to work on code for a serious project absent supervision and human code reviews. So code still has to work for humans too.

1 Like

My first post here; I’m 56 years old. I have been doing software development for thirty-something years, the last fifteen of them in management, and I am taking a sabbatical after six years as VP of Engineering at a large startup. A long-time dream of mine was to program in Elixir, and I am finally moving ahead with it. Loving it so far.

When I started learning Elixir a few weeks ago, with printed books, I knew this language wasn’t going to have first-class support from LLMs for coding, like TypeScript or Python; it is just a matter of training data and curation. But I guessed that would eventually change, or that I might find a way to supplement the training limitations through RAG or other means.

I spent some time crafting guides and a lot of study notes, through prompting and manual review, for different topics like testing, documenting, and so on: cheatsheets to hint agents toward certain patterns, usually with good results. But brute-forcing documentation through “deep research” is slow.

And Ash has been particularly frustrating (the book helped a lot, by the way), but the docs are not there yet, and the number of options for anything you want to achieve is almost infinite. But I liked the code a lot once I managed to produce it. And of course, as with everything in life, it is a compromise.

The potential of having your app as a data structure is very clear to me in these times: a treasure trove for LLMs. I’m eager to try translating this into MCP-land. But on the other hand, I was thinking that, well, in this era of easily generated, maybe disposable, code, it might be easier to go on with what LLMs already know: “regular Elixir”.
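
A minimal sketch of what I mean, assuming a resource like the hypothetical MyApp.Blog.Post exists (Ash.Resource.Info is Ash’s real introspection module):

```elixir
# Because a resource is data, its full definition is queryable at runtime,
# which is exactly the structured context an MCP tool could hand to an LLM.
attributes = Ash.Resource.Info.attributes(MyApp.Blog.Post)
actions = Ash.Resource.Info.actions(MyApp.Blog.Post)

for attr <- attributes do
  IO.puts("attribute #{attr.name}: #{inspect(attr.type)}")
end

for action <- actions do
  IO.puts("#{action.type} action: #{action.name}")
end
```
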

I know support artifacts like this will enable a lot of people to just let an LLM/agent generate the code without caring, or learning, and that’s OK; but on the other hand this is going to be an invaluable tool for learning and for gaining adoption. And I wanted to say thanks for this initiative; it was much needed.

And I know there is an incredible community here; I hadn’t realized how much I missed this from my old Ruby and Ruby on Rails days, when everything was starting. I will be asking more questions here; I’m just getting used to it, and it is a bit overwhelming.

7 Likes

I totally agree. I think the usage-rules files probably serve as some of the best quick reference we have right now, even for humans. We should probably put these into the hexdocs even :sweat_smile:

As for whether we’ve entered the world of “disposable code”: maybe, but I think not :slight_smile: If you can have a recipe for building applications that stand the test of time and can be executed at the speed of an LLM, then you’ll really be on top.

Early experimentation with a combined rules file leads to massive improvements in LLM output, and that is just with the Ash ones, not ones for Elixir, LiveView, etc.

Plus, Claude 4 just came out, which appears to have finally indexed some Ash stuff :slight_smile:

4 Likes

Ha ha - I just read the usage rules for Ash and thought “darn - should have read this a couple of years back!”

1 Like

“Should probably put these into the hexdocs even” => Yes, yes, and yes!

Indeed, Claude 4 is looking good; it nailed setting up a set of nested forms for Dash Admin, and now I get the syntax, having seen the thing built.