Hey everyone - I’ve been building with Elixir on and off for over 8 years, but somehow have never posted on the actual Elixir forum. Time to fix that…
I’d also love to share Omni - a library for working with LLM APIs across multiple providers through a unified interface. Anthropic, OpenAI, Google Gemini, Ollama, OpenRouter, and OpenCode Zen are supported out of the box.
```elixir
# Resolve model
{:ok, model} = Omni.get_model(:anthropic, "claude-sonnet-4-6")

# Simple text generation
{:ok, response} = Omni.generate_text(model, "Hello!")

# Stream with composable callbacks
{:ok, stream} = Omni.stream_text(model, "Tell me a story")

{:ok, response} =
  stream
  |> Omni.StreamingResponse.on(:text_delta, &IO.write(&1.delta))
  |> Omni.StreamingResponse.complete()
```
Tool use and structured outputs are supported. Pass tools in the context and Omni handles the execution loop automatically - calling the model, executing tool handlers, feeding results back, and repeating until the model is done. Structured output uses JSON Schema constraints with validation:
```elixir
# Tool use - Omni manages the tool execution loop
{:ok, response} =
  Omni.generate_text(
    model,
    Omni.context(
      messages: [Omni.message(role: :user, content: "What's the weather in London?")],
      tools: [weather_tool]
    )
  )
```
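For context, here's one way `weather_tool` might be defined. This is a hypothetical sketch - the post doesn't show Omni's tool-construction API, so the `Omni.tool/1` constructor and its option names are assumptions:

```elixir
# Hypothetical sketch - Omni's actual tool API may differ.
# Assumed: an Omni.tool/1 constructor taking a name, description,
# schema-described parameters, and a handler function.
weather_tool =
  Omni.tool(
    name: "get_weather",
    description: "Look up the current weather for a city",
    parameters:
      Omni.Schema.object(%{
        city: Omni.Schema.string(description: "City name")
      }, required: [:city]),
    handler: fn %{city: city} ->
      # A real handler would call out to a weather API here
      {:ok, %{city: city, temp_c: 14, conditions: "cloudy"}}
    end
  )
```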
```elixir
# Structured output
alias Omni.Schema

{:ok, response} =
  Omni.generate_text(
    model,
    "Extract the contact details: Reach me at jane@example.com or call 01234 567890",
    output:
      Schema.object(%{
        email: Schema.string(description: "Email address"),
        phone: Schema.string(description: "Phone number")
      }, required: [:email, :phone])
  )
```
Omni also offers a lightweight take on agents. Omni.Agent is a GenServer that manages its own conversation context and tool execution, and communicates with callers via standard process messages. You control behaviour through lifecycle callbacks. It’s a building block, not a framework - what you build on top (planning, memory, multi-agent orchestration) is your concern.
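To make that concrete, a rough sketch of driving an agent from a caller process might look like the following. It's hypothetical - the post doesn't show the `Omni.Agent` API, so the function names (`start_link/1`, `send_message/2`) and the reply message shape are assumptions:

```elixir
# Hypothetical sketch - the real Omni.Agent API may differ.
# Assumed: a start_link/1 that takes the model and tools, a
# send_message/2, and replies delivered as plain process messages.
{:ok, agent} = Omni.Agent.start_link(model: model, tools: [weather_tool])

Omni.Agent.send_message(agent, "What's the weather in London?")

# The agent communicates back via standard process messages
receive do
  {:agent_response, ^agent, response} -> IO.puts(response.text)
end
```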
I know req_llm covers similar ground, which - slightly annoyingly - I didn’t realise existed until I was 90% of the way through building Omni. On the surface they have quite similar APIs, and both use Req, but they differ in how providers are implemented. Omni separates providers (the endpoint, configuration and auth) from dialects (wire-format translation). The dialect does the heavy lifting, and since most providers share a dialect, adding a new provider is typically a small, mostly-declarative module. Everything is streaming-first - generate_text is built on top of stream_text, so there’s a single code path through each dialect.
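To illustrate the provider/dialect split, a new provider module might look something like this. Purely hypothetical - the post doesn't show the actual behaviour name or callbacks, so all of them here are assumptions:

```elixir
# Hypothetical sketch - Omni's real provider behaviour may differ.
# The idea: the provider module is small and declarative (endpoint,
# config, auth), while the shared dialect handles wire-format
# translation.
defmodule Omni.Providers.ExampleCloud do
  @behaviour Omni.Provider  # assumed behaviour name

  @impl true
  def dialect, do: Omni.Dialects.OpenAI  # assumed shared dialect

  @impl true
  def base_url, do: "https://api.example-cloud.dev/v1"

  @impl true
  def auth_headers(config) do
    [{"authorization", "Bearer #{config.api_key}"}]
  end
end
```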
Anyway, please check it out. Let me know if you have any questions.