Openai_ex - OpenAI API client library

The repo is on GitHub at restlessronin/openai_ex: a community-maintained OpenAI API Elixir client for Livebook.

Docs are at the OpenaiEx User Guide (openai_ex v0.6.4).

The main user guide is a livebook, so you should be able to run everything without any setup.

Portions of this project were developed with assistance from ChatGPT 3.5 and 4. However, every line of code is human-curated (by me :innocent:).

All API endpoints and features (as of May 1, 2024) are supported, including the Assistants API Beta 2 with Run streaming, DALL-E-3, Text-To-Speech, the tools support in chat completions, and the streaming version of the chat completion endpoint. Streaming request cancellation is also supported.

Configuration of Finch pools and the API base URL is supported.

There are some differences compared to other Elixir OpenAI wrappers.

  • I tried to faithfully mirror the naming/structure of the official Python API. For example, content that is already in memory can be uploaded as part of a request; it doesn’t have to be read from a file at a local path.

  • I was developing for a Livebook use case, so there is no application config, only environment variables.

  • Streaming API versions, with request cancellation, are supported.

  • The underlying transport is Finch, rather than HTTPoison.

  • Third-party (including local) LLMs behind an OpenAI proxy, as well as the Azure OpenAI API, are considered legitimate use cases.
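
The environment-variable configuration and in-memory upload points above can be sketched roughly as follows. This is a hedged sketch: `OpenaiEx.new/1` and `OpenaiEx.new_file/1` follow my reading of the openai_ex user guide and may differ between versions.

```elixir
# Livebook-friendly: no application config, just an environment variable.
openai = System.fetch_env!("OPENAI_API_KEY") |> OpenaiEx.new()

# A binary that is already in memory (a stand-in for real audio bytes
# received over the network, for example).
audio_data = <<1, 2, 3>>

# Upload it directly as part of a request; no local file path required.
file_upload = OpenaiEx.new_file(name: "clip.mp3", content: audio_data)
```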

Documentation is still a work in progress. In addition to the user guide, there are also Livebook examples for

The library is developed in a Livebook Docker image running in a VS Code dev container.

Please try it out and let me know what you think. Happy to receive suggestions (and PRs) for improvement, as well as illustrative sample notebooks.


FYI, as of 3 days ago, I’ve been using it with some light experiments xD


Wonderful. You must have found it almost at the same time I first published the package :slight_smile: Lots of improvements / changes in the past few days (including switching from Req to Tesla).

Hope you’re finding it useful. Let me know if you have any comments / suggestions.


Would you give us the rationale as to why? I am interested to read it.


No deep reason. I liked the Req design a little better, but it doesn’t support multipart form uploads. Tesla does. And some of the API endpoints require it.
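
For context, a multipart form upload with Tesla looks roughly like this. A sketch only: the middleware stack, endpoint path, and field names mirror the OpenAI audio transcription endpoint but are illustrative, not openai_ex internals.

```elixir
api_key = System.fetch_env!("OPENAI_API_KEY")

# Build the multipart body: a text field plus an in-memory file part.
mp =
  Tesla.Multipart.new()
  |> Tesla.Multipart.add_field("model", "whisper-1")
  |> Tesla.Multipart.add_file_content(<<1, 2, 3>>, "clip.mp3", name: "file")

# POST it with a minimal client; the middleware list is illustrative.
client =
  Tesla.client([
    {Tesla.Middleware.BaseUrl, "https://api.openai.com/v1"},
    {Tesla.Middleware.Headers, [{"authorization", "Bearer " <> api_key}]}
  ])

{:ok, _resp} = Tesla.post(client, "/audio/transcriptions", mp)
```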


Well, that’s a pretty good reason though. Thanks.


FYI, there’s at least one stand-alone multipart-building library that can be used with HTTP clients that don’t have it built in. I think I used a different one back in the day but can’t find it now.


Thanks @hubertlepicki. It’s good to know, in case there’s a future use-case where people want to plug in a specific client.

I’ve released v0.1.3 with refinements to the Audio API and included an Audio example in the user guide.

Would be nice to have additional helpers like LangChain!

Happy to help out if you’re considering working on it!


Yep, and the end result may be better than LangChain, as Elixir has great building blocks for things like that.


Hey, in case anyone needs an OpenAI library that supports streaming, I’m working on one (I use it internally and in production): marinac-dev/openai
Keep in mind it’s not full of features; I developed only the ones I use right now, but full API support is coming in the next few days. It’s nice that we have a boom in OpenAI libs :smiley:


Additional helpers, as in client APIs for other language models?

I don’t really know much about LangChain. At first blush it looks interesting; let me take a closer look. Thanks for the pointer.

Might be interesting to try to build one of their demo applications entirely in Livebook, to see what’s what.

I just added the File endpoint and bumped the version to 0.1.4.

Please note that the API is changing a little from version to version, so if you see weird behaviour, check the user guide for the latest version for the right sample code.

Hopefully, once all the endpoints are added, the API will be more stable. Two endpoints left to go: Fine-Tuning and Moderations.


Would be even nicer to consolidate :smiley:

It’s basically a library that helps you be a better prompt engineer.

Jokes aside, due to the limits on tokens, context, etc., LangChain is pretty helpful when it can chunk large texts and do a map-reduce kind of aggregation.
That’s one use case, and there are more.
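
The chunk-and-aggregate pattern described above is easy to sketch in plain Elixir. The module name and the `summarize` function are hypothetical; `summarize` is a stand-in for where a real LLM call would go.

```elixir
defmodule MapReduceSummary do
  # Map-reduce summarization sketch: split a long text into
  # budget-sized chunks, summarize each chunk (the "map" step),
  # then summarize the concatenated summaries (the "reduce" step).

  @chunk_size 1000  # characters per chunk; a stand-in for a token budget

  def run(text, summarize) do
    text
    |> chunk(@chunk_size)
    |> Enum.map(summarize)  # map: summarize each chunk
    |> Enum.join("\n")
    |> summarize.()         # reduce: summarize the summaries
  end

  defp chunk(text, size) do
    text
    |> String.graphemes()
    |> Enum.chunk_every(size)
    |> Enum.map(&Enum.join/1)
  end
end

# Usage with a trivial "summarizer" that keeps the first 20 characters:
fake_llm = fn chunk -> String.slice(chunk, 0, 20) end
MapReduceSummary.run(String.duplicate("lorem ipsum ", 500), fake_llm)
```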


How does it compare to AutoGPT?

Added the Moderation endpoint.

Bumped the version to 0.1.5.


Not sure if that’s the right comparison here.

AutoGPT is an attempt at building an autonomous agent.

LangChain is more like a tool for people who want to use LLMs.

More like a nice helper library / wrapper that makes interacting with LLMs easier, instead of you having to develop all the functions yourself to get better responses from models.
