Hey everyone!
I’m excited to share ReqLLM - a new approach to LLM interactions in Elixir that I’ve been working on. After building agent systems with various LLM clients, I kept running into the same frustrations: they either ignored Elixir’s composability principles or didn’t integrate well with existing HTTP pipelines.
Why Another LLM Client?
While building out Jido features, I needed a lower-level API for making LLM requests. ReqLLM is built on Req, with each Provider built as a Req plugin that handles provider-specific wire formats. It’s designed to compose naturally with your existing Req-based applications.
Core Architecture
- Plugin-Based Providers: Each LLM provider (Anthropic, OpenAI, Google, etc.) is a composable Req plugin (first sketch below).
- Typed Data Structures: Every interaction uses proper structs (Context, Message, StreamChunk, Tool, ContentPart) that implement Jason.Encoder - no more wrestling with nested maps (second sketch below).
- Two Client Layers: High-level helpers for quick wins (generate_text/3, stream_text/3, generate_object/4, etc.) plus low-level Req plugin access when you need full control.
- Built-in Observability: Usage and cost tracking on every response, based on metadata synced from https://models.dev
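
Since every provider is a Req plugin, you can skip the helpers and wire a provider into an existing Req pipeline. Treat this as a rough sketch: the module path and the attach/2 entry point below are my shorthand for the plugin API, not a verbatim quote from the docs.

```elixir
# Illustrative only: attach/2 and its options here stand in for the
# real plugin entry point - check the provider docs for the exact API.
req =
  Req.new()
  |> ReqLLM.Providers.Anthropic.attach(model: "claude-3-sonnet")

# From here it's a normal Req request, so your existing retry,
# instrumentation, and test-stub steps compose as usual.
```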
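To make the typed-structs point concrete, here’s a minimal sketch of building and serializing a conversation by hand. The Context.new/1, system/1, and user/1 helper names are assumptions on my part - see the Context docs for the exact constructors.

```elixir
alias ReqLLM.Context

# Build a conversation from typed structs rather than nested maps.
# The constructor helpers are assumed; check the Context module docs.
context =
  Context.new([
    Context.system("You are a terse assistant."),
    Context.user("Summarize Elixir in one sentence.")
  ])

# Every struct implements Jason.Encoder, so serializing a whole
# conversation is a single call:
json = Jason.encode!(context)
```

Because the same Context flows into generate_text/3 and friends, you can build, inspect, and persist conversations without ever touching a provider’s wire format.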
Quick Example
```elixir
# Simple approach - the bang variant returns the text directly (or raises)
ReqLLM.put_key(:anthropic_api_key, "sk-ant-...")
text = ReqLLM.generate_text!("anthropic:claude-3-sonnet", "Hello")

# Tool calling with structured responses
weather_tool =
  ReqLLM.tool(
    name: "get_weather",
    description: "Get weather for a location",
    parameter_schema: [location: [type: :string, required: true]],
    callback: fn _args -> {:ok, "Sunny, 72°F"} end
  )

{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-3-sonnet",
    "What's the weather in Paris?",
    tools: [weather_tool]
  )
```
Current Status
ReqLLM 1.0-rc is available on Hex with 45+ providers and 665+ models (auto-synced from models.dev). I’m using it in production for Jido agent systems, and it’s been solid. I’m planning to add Ollama/LocalAI support and enhanced streaming soon.
Resources
- Hex Package: req_llm on Hex
- Documentation: ReqLLM on HexDocs
- Getting Started Guide: Getting Started on HexDocs
- GitHub:
I’d love to hear your thoughts and see what you build with it! The plugin architecture makes it pretty straightforward to add new providers if there’s one you need.