Has anyone worked on or considered fine-tuning a model to have specific knowledge about Elixir, Erlang and OTP patterns? Would be pretty cool to have an Elixir expert LLM for code gen and architecture discussions.
There are a few reasons to do this: large models are pretty good at writing Elixir syntax, but they struggle with OTP and Elixir design patterns, and they can be expensive depending on your usage. Fine-tuning and quantizing could produce a very small model (<500 MB), which would make it usable locally, removing the key bottleneck (API latency) and keeping your chats on-device. You could imagine a future where downloading an Elixir coding assistant is like downloading a rather big dependency.
Just wondering if someone’s already tackled this before I give it a shot.
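As a rough sanity check on the <500 MB figure, here's a back-of-envelope sketch (the parameter count and bit width are illustrative assumptions, not a specific model):

```python
def quantized_size_mb(n_params: int, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model in MB,
    ignoring tokenizer files, metadata, and per-group scale overhead."""
    return n_params * bits_per_weight / 8 / 1e6

# A hypothetical ~1B-parameter model quantized to 4 bits per weight:
print(quantized_size_mb(1_000_000_000, 4))  # 500.0 MB
```

So staying under 500 MB roughly means a model of about 1B parameters at 4-bit quantization (or ~2B at 2-bit, with a quality hit), which is in the range where CPU-only local inference is plausible.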