Hi everyone, at Doofinder we have been building llm_composer for some new apps, and we thought it could be useful to share it with the community.
llm_composer is an Elixir library that simplifies working with large language model (LLM) providers such as OpenAI, OpenRouter, Ollama, AWS Bedrock, and Google (Gemini).
It provides a streamlined way to build and execute LLM-based applications or chatbots, with features such as:
Multi-provider support (OpenAI, OpenRouter, Ollama, Bedrock, Google Gemini/Vertex AI).
System prompts and message history management.
Streaming responses.
Function calls with auto-execution.
Structured outputs with JSON schema validation.
Built-in cost tracking (currently for OpenRouter).
Easy extensibility for custom use cases.
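To give a feel for the API, here is a minimal usage sketch. The struct and function names (`LlmComposer.Settings`, `simple_chat/2`, the provider module, and the `main_response` field) follow the project README at the time of writing; check the docs for your installed version:

```elixir
# Minimal sketch; field and function names may differ between versions.
settings = %LlmComposer.Settings{
  provider: LlmComposer.Providers.OpenAI,
  provider_model: "gpt-4o-mini",
  system_prompt: "You are a helpful assistant."
}

{:ok, response} = LlmComposer.simple_chat(settings, "Hello!")
IO.puts(response.main_response.content)
```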
A key feature is the provider router that handles failover automatically.
It will use one provider until it fails, then fall back to the next provider in the list, applying an exponential backoff strategy. This makes it resilient in production environments where provider APIs can become temporarily unavailable.
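The strategy can be sketched as follows. This is an illustration of the failover idea only, not the library's actual router API (see the README for the real configuration); providers are modeled as plain functions and the backoff base of 100 ms is an arbitrary choice:

```elixir
defmodule FailoverSketch do
  @moduledoc "Illustrative provider failover with exponential backoff (not the library API)."

  def call(providers, request, attempt \\ 0)

  def call([], _request, _attempt), do: {:error, :all_providers_failed}

  def call([provider | rest], request, attempt) do
    case provider.(request) do
      {:ok, response} ->
        {:ok, response}

      {:error, _reason} ->
        # Wait 100ms, 200ms, 400ms, ... before trying the next provider.
        Process.sleep(trunc(:math.pow(2, attempt) * 100))
        call(rest, request, attempt + 1)
    end
  end
end
```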
Under the hood, llm_composer uses Tesla as the HTTP client.
For production setups, especially when using streaming, it is recommended to run it with Finch for optimal performance.
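A standard Tesla + Finch setup looks like the following (`MyApp.Finch` is a placeholder pool name; pick one that fits your application):

```elixir
# 1. Start a Finch pool in your application's supervision tree
#    (e.g. in MyApp.Application.start/2):
children = [
  {Finch, name: MyApp.Finch}
]

# 2. Point Tesla at the Finch adapter (e.g. in config/config.exs):
config :tesla, adapter: {Tesla.Adapter.Finch, name: MyApp.Finch}
```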
This update focuses on a key licensing change for the project.
License change
From v0.12.2 onward, llm_composer is released under the MIT license, replacing the previous GPL-3.0.
Why
The project originally used GPL-3.0 without fully considering its implications.
After adding depscheck to our internal dependency review process, we realized that GPL-3.0 could limit adoption in some environments or organizations.
To avoid any restrictions and ensure the library remains freely usable in all kinds of projects, open source or commercial, we've switched to the more permissive MIT license.
Other updates
Minor documentation and metadata changes reflecting the new license.
No functional or API modifications; upgrading from any 0.12.x version is seamless.
Quick update since the last post (v0.12.2 / MIT switch): quite a bit has landed since then.
What’s new in 0.16.0
New provider: LlmComposer.Providers.OpenAIResponses calls OpenAI's /responses API, with support for structured outputs and reasoning (provider/model-specific params passed via request_params), normalizing results into the usual LlmResponse shape. Since it targets the /responses API spec, it also works with compatible providers such as x.ai (xAI/Grok), OpenRouter's responses endpoint, and others.
Typed streaming chunks: new LlmComposer.StreamChunk struct + provider-specific parsing turns raw stream events into typed values (:text_delta, :tool_call_delta, :done, …).
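A consumer can now pattern match on chunk types instead of inspecting raw maps. The exact struct fields below (`:type`, `:content`) are assumptions; check the LlmComposer.StreamChunk docs for your version:

```elixir
# Hypothetical stream consumer; struct fields are assumptions.
defmodule StreamPrinter do
  def consume(chunks) do
    Enum.each(chunks, fn
      %LlmComposer.StreamChunk{type: :text_delta, content: text} -> IO.write(text)
      %LlmComposer.StreamChunk{type: :done} -> IO.puts("")
      _other -> :ok
    end)
  end
end
```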
LlmComposer.FunctionCallExtractors: centralizes function call extraction logic per provider.
Provider-aware parse_stream_response: now comes in /2 and /3 arities and returns %StreamChunk{} values instead of raw decoded maps.
Internals refactored to protocol-based adapters (LlmComposer.ProviderResponse + LlmComposer.ProviderStreamChunk) for cleaner normalization across providers.
Notable releases since 0.12.2
0.13.0 (Dec 2025): function-call workflow is now manual/explicit via FunctionExecutor + FunctionCallHelpers — breaking change, see changelog.
0.13.1 (Jan 2026): custom HTTP headers support for OpenRouter.
0.14.0 (Feb 4): configurable retry/backoff for provider requests.
0.14.1 (Feb 9): deep merge fix for nested request_params (e.g. Google’s generationConfig).
0.14.2 (Feb 10): LlmResponse.new/3 now returns {:error, …} instead of raising on unknown provider formats.
0.15.0 (Feb 17): configurable :json_engine (defaults to JSON, falls back to Jason); Google provider now preserves additionalProperties in response schemas.
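If you want to pin the engine explicitly, something like the following should work; the `:json_engine` key comes from the 0.15.0 changelog, and Jason must be listed in your deps:

```elixir
# config/config.exs
import Config

# Force Jason as the JSON engine instead of the default OTP JSON module.
config :llm_composer, json_engine: Jason
```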