Hello, I just published openai_responses, a very simple wrapper around OpenAI’s new Responses API. From what I understand, this is what they want developers to use going forward, and the old Chat Completions API is now considered “legacy”.
Granted, it’s v0.1.0, so bugs and rough edges are to be expected at this point.
Here is an X thread with some usage examples. Please let me know what you think!
Why did I create yet another Elixir library for working with LLMs?
I have to confess: I initially developed OpenAI.Responses just to explore the (then newly released) Responses API and experiment with Elixir code generation using LLM agents (Claude Code with Sonnet 3.7 and Cursor + Sonnet 3.7). Since then, I’ve refined it, releasing version 0.4.0 with improved API wrapping.
Ecosystem fragmentation is a known issue, especially in smaller communities like Elixir. So why create another library instead of using existing ones? Two reasons:
- Focus on Cutting-Edge APIs: I target OpenAI’s latest, advanced API, prioritizing innovation over supporting a broad range of LLM providers.
- Minimalist SDK Approach: Inspired by Dashbit’s SDK philosophy, I aim for minimal abstraction, avoiding heavy frameworks.
No existing solution aligns with these goals, justifying a new library.
Existing Solutions
Four notable libraries exist for LLM integration in Elixir (GitHub stars indicate relative popularity):
- LangChain (897 GitHub stars): The most popular, supporting numerous providers with a unified abstraction. Example:

  ```elixir
  LLMChain.add_message(Message.new_user!("Where is the hairbrush located?"))
  ```

  It smooths out provider differences but prioritizes broad compatibility over advanced features. Responses API support is in progress.
- Instructor (720 GitHub stars): Unique for enabling structured outputs via Ecto schemas. Revolutionary 1.5 years ago, it’s less critical now that OpenAI and Gemini natively support structured outputs with JSON schema compliance (see the schema sketch after this list). That said, Anthropic’s support for structured output is inconsistent and smaller providers vary, so Instructor still has its use cases.
- OpenAI.Ex (345 GitHub stars): No longer actively maintained (last commit ~11 months ago).
- OpenAI_Ex (178 GitHub stars): Actively developed with Responses API support, but primarily focused on the older Chat Completions API.
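To make “native structured outputs” concrete, here is a minimal sketch of the kind of JSON schema a provider can be asked to comply with exactly. The Elixir map encoding and the field names are my illustration of plain JSON Schema, not any particular library’s interface.

```elixir
# A plain JSON schema, written as an Elixir map, that a structured-output-
# capable model is asked to follow exactly. How the schema is attached to
# a request is provider-specific and omitted here.
schema = %{
  "type" => "object",
  "properties" => %{
    "item" => %{"type" => "string"},
    "location" => %{"type" => "string"}
  },
  "required" => ["item", "location"],
  "additionalProperties" => false
}
```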
Why OpenAI.Responses?
For my startup, finding product-market fit is critical. Success won’t hinge on using the cheapest or fastest LLM, but on leveraging a reliable, feature-rich provider like OpenAI. Targeting OpenAI exclusively allows access to cutting-edge features (e.g., image generation, web search, script execution) without worrying about cross-provider portability.
Portability across LLM providers is impractical anyway! Even switching models within OpenAI (e.g., gpt-4o to gpt-4.1) can alter behavior, and different providers require unique optimizations and careful prompt refinement.
I also prioritize minimal overhead. Unlike LangChain’s `Message.new_user!`, I use simple structures like `%{role: :user, content: "message"}`, which are cleaner and more flexible (e.g., supporting dynamic inputs or YAML). OpenAI.Responses keeps abstraction to a minimum; users can inspect `response.body` for a well-documented structure. For complex use cases, OpenAI’s documentation is essential anyway, and I avoid adding unnecessary layers.
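For illustration, here is a minimal sketch of what such a call can look like. The argument layout and return shape are my assumptions for readability, not the library’s documented signature; only `create/2`, the plain-map messages, and the `body` field are taken from this post.

```elixir
# Hedged sketch: passing the model as the first argument and :input as an
# option is an assumption for illustration; check the library docs for the
# exact create/2 signature.
{:ok, response} =
  OpenAI.Responses.create(
    "gpt-4.1",
    input: [%{role: :user, content: "Where is the hairbrush located?"}]
  )

# No wrapper structs to unwrap: the raw, well-documented response body is
# available directly.
IO.inspect(response.body)
```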
Finally, by focusing on a single provider, I can realistically support features such as automatic cost calculation; it would be simply impractical to keep in sync with pricing changes across the full ecosystem. I can also experiment with API design, such as chaining of `create/2` calls to support the conversation state, a cleaner interface for JSON schema definition, or automatic resolution of tool calls.
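And a sketch of the chaining idea: feeding the previous response back into `create/2` is my reading of the design direction described above, not necessarily the shipped interface.

```elixir
# Hedged sketch of chained create/2 calls carrying conversation state.
# Treating the previous response as the first argument is an assumption
# about the experiment described above, not a confirmed API.
{:ok, first} =
  OpenAI.Responses.create(
    "gpt-4.1",
    input: [%{role: :user, content: "Name three Elixir web frameworks."}]
  )

{:ok, followup} =
  OpenAI.Responses.create(
    first,
    input: [%{role: :user, content: "Which one is the oldest?"}]
  )

IO.inspect(followup.body)
```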
Conclusion
By focusing on OpenAI’s latest API and minimizing overhead, OpenAI.Responses offers a lean, modern solution for Elixir developers. I welcome feedback—please share your thoughts!