I just wanted to say that I recently got back into Elixir after many years of barely touching it (not really by choice). Funnily enough, the reason is Claude Code.
In the past three months I’ve built an app to help me track when things in my fridge are about to go bad, another one to help me practice speaking slower, and most recently an app to keep all of my agents in sync across platforms and machines.
I’ve noticed that when I build projects with Python + React, Claude Code isn’t nearly as good as when I’m working with Elixir/Phoenix. I think a big part of it is how little code you actually need in Elixir to make something useful. That means Claude has less context to deal with and tends to produce better results.
On top of that, Elixir is a very simple language to understand, which makes debugging much easier.
FWIW, I took a break from 20x Claude and am using only the $200 OpenAI Pro right now.
This came up because I found that Claude, even on Opus-4.6 Max, makes mistakes and produces incorrect results compared to GPT-5.4 X-High, in both research/docs and implementation/debugging.
Started on GPT-5 via codex about 5 months ago. It took a good month of experimentation to learn how to prompt it correctly.
It’s not done yet, but I have a CLI daemon that watches shared files (agent/team definitions and shared knowledge) for changes, and a relay built on Phoenix Channels. https://github.com/teamrc-ai/teamrc
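The watcher-plus-relay idea above can be sketched in a few lines of Elixir. This is a minimal illustration, not the actual teamrc code: it assumes the `file_system` hex package for filesystem events and a Phoenix app with a `MyApp.PubSub` server already running; the module name, topic, and message shape are all hypothetical.

```elixir
defmodule TeamSync.Watcher do
  # Hypothetical sketch: watch a list of directories and relay change
  # events via Phoenix.PubSub, where a Phoenix Channel subscribed to
  # the "sync:files" topic can push them to connected clients.
  use GenServer

  def start_link(dirs) do
    GenServer.start_link(__MODULE__, dirs, name: __MODULE__)
  end

  @impl true
  def init(dirs) do
    # `file_system` wraps the OS-native watcher (fsevents/inotify)
    {:ok, pid} = FileSystem.start_link(dirs: dirs)
    FileSystem.subscribe(pid)
    {:ok, pid}
  end

  @impl true
  def handle_info({:file_event, _watcher, {path, events}}, state) do
    # Broadcast to every process subscribed to "sync:files";
    # MyApp.PubSub is an assumed PubSub server name.
    Phoenix.PubSub.broadcast(MyApp.PubSub, "sync:files", {:file_changed, path, events})
    {:noreply, state}
  end
end
```

A channel process would then call `Phoenix.PubSub.subscribe(MyApp.PubSub, "sync:files")` in `join/3` and push each `{:file_changed, path, events}` message down the socket.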
Definitely. IMO we should try to share more. The problem, however, is that LLMs boost our productivity so much that we often don’t want to stop and fine-tune our prompting rules; it feels like wasted time.
Next personal character development quest unlocked, then.