This is an open discussion to anyone using AI to help them work with Elixir in production-ready apps.
I work on an Elixir app at work that’s mostly backend, with no customer-facing UI, but I haven’t found good ways to use AI in my workflow beyond the obvious things. Talking with my coworkers, there seems to be a pattern: nobody trusts generated code larger than a single function, generated tests are inconsistent unless they’re unit tests, etc. Ultimately, your ass is on the line when you ship it, so you want to trust what you’ve written.
On a positive note, asking AI to explain something works nicely. But beyond talking to an LLM and having it clarify or refine things, I’m left wondering if I’m misunderstanding the hype.
There are lots of tools out there (Cursor IDE, the Cursor PR review bot, Copilot, Claude, <%= insert your favorite LLM here %>), and apparently LLMs work quite well with Elixir. The auto-complete from these tools can be really nice (but to me, it’s just nice).
We even have Elixir-specific tools like:
Some of these are great if you’re starting a new project, want to prove out an idea at the surface level, or are following Phoenix best practices to build a LiveView CRUD app. But if you’re working on an existing, complicated code base, what tools or workflows are you finding make you more productive?
I’m genuinely interested in using AI in a net-positive way (for me and for the company I work for) and would love to hear what works for you beyond creating a new Phoenix app.