The repo is on GitHub at restlessronin/openai_ex, a community-maintained OpenAI API Elixir client for Livebook.
Docs are at the OpenaiEx User Guide (openai_ex v0.4.2).
The main user guide is a Livebook, so you should be able to run everything without any setup. Livebook allowed for a really good demo of the API, especially the image API, where you can see what the API calls created!
At this point (as of Nov 11, 2023), all features are supported, including the Assistants API Beta, DALL-E-3, Text-To-Speech, the tools support in chat completions, and the streaming version of the completion and chat completion endpoints.
There are some differences compared to other Elixir OpenAI wrappers. First, I tried to mirror the naming/structure of the official Python API. In particular, content that is already in memory can be uploaded as part of a request. Second, I was developing for a Livebook use case, so I don't have any config, only environment variables. Third, I'm not sure how many of the other clients support streaming and function calls.
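To give a feel for the Python-mirroring style, a minimal chat completion call looks roughly like this (a sketch based on the user guide; exact module names may differ between versions):

```elixir
# Sketch of a basic openai_ex chat completion call, assuming the
# OPENAI_API_KEY environment variable is set (no config files).
apikey = System.fetch_env!("OPENAI_API_KEY")
openai = OpenaiEx.new(apikey)

chat_req =
  OpenaiEx.ChatCompletion.new(
    model: "gpt-3.5-turbo",
    messages: [OpenaiEx.ChatMessage.user("Hello!")]
  )

response = OpenaiEx.ChatCompletion.create(openai, chat_req)
```

The request struct mirrors the shape of the official Python client's arguments, so the OpenAI API reference maps onto it fairly directly.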
Documentation is still a work in progress. In addition to the user guide, there are also Livebook examples for
The library is developed in a Livebook Docker image running in a VS Code dev container.
Please try it out and let me know what you think. Happy to receive suggestions (and PRs) for improvement, as well as illustrative sample notebooks.
FYI, as of 3 days ago, I’ve been using it with some light experiments xD
Wonderful. You must have found it almost at the same time I first published the package. Lots of improvements / changes in the past few days (including switching from
Hope you’re finding it useful. Let me know if you have any comments / suggestions.
Would you give us the rationale as to why? I am interested to read it.
No deep reason. I liked the Req design a little better, but it doesn't support multipart form uploads. Tesla does, and some of the API endpoints require it.
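For reference, Tesla's built-in multipart support looks roughly like this (a sketch, not the library's actual internals; the endpoint path and field names follow the OpenAI Files API):

```elixir
# Sketch: uploading in-memory content as multipart/form-data with
# Tesla.Multipart (part of the tesla package).
mp =
  Tesla.Multipart.new()
  |> Tesla.Multipart.add_field("purpose", "fine-tune")
  # add_file_content/4 takes raw binary data plus a filename,
  # which is what lets already-in-memory content be uploaded
  |> Tesla.Multipart.add_file_content(jsonl_data, "training.jsonl", name: "file")

# client is assumed to be a Tesla client with base URL and auth headers set
Tesla.post(client, "/v1/files", mp)
```

The `add_file_content/4` call is the key piece: it accepts a binary directly, so nothing has to be written to disk first.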
Well, that’s a pretty good reason though. Thanks.
FYR, there's at least one stand-alone multipart-building library that can be used with HTTP clients that don't have it built in. I think I used a different one back in the day but can't find it now:
Thanks @hubertlepicki. It’s good to know, in case there’s a future use-case where people want to plug in a specific client.
I’ve released v0.1.3 with refinements to the Audio API and included an Audio example in the user guide.
Would be nice to have additional helpers like LangChain!
Happy to help out if you're considering working on it!
Yep, and the end result may be better than LangChain, as Elixir has great building blocks for things like that.
Hey, in case anyone needs an OpenAI library that supports streaming, I'm working on one (I use it internally and in production): marinac-dev/openai
Keep in mind it's not full-featured; I've developed only the endpoints I use right now, but full API support is coming in the next few days. It's nice that we have a boom in OpenAI libs
Additional helpers, as in client apis for other language models?
I don’t really know much about LangChain. On first blush it looks interesting, let me take a closer look. Thanks for the pointer.
Might be interesting to try to build one of their demo applications entirely in Livebook, to see what’s what.
I just added the File endpoint and bumped the version to 0.1.4.
Please note that the API is changing a little from version to version, so if you see weird behaviour check the user guide for the latest version for the right sample code.
Hopefully once all the endpoints are added the API will be more stable. Two endpoints left to go: Fine Tuning and Moderations.
Would be even nicer to consolidate
It’s basically a library that helps you be a better prompt engineer.
Jokes aside, given token and context-window limits, LangChain is pretty helpful when it can chunk large texts and do a map-reduce kind of aggregation.
That’s one use case, and there are more.
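The map-reduce idea is simple enough to sketch in plain Elixir. Here `summarize_fun` stands in for a hypothetical function wrapping a chat completion call; only the chunking and aggregation plumbing is real:

```elixir
# Toy map-reduce summarization: split a large text into chunks,
# summarize each chunk (map), then summarize the joined summaries (reduce).
# summarize_fun is a hypothetical LLM-backed String.t() -> String.t() function.
defmodule MapReduceSummary do
  @chunk_size 1000

  # Split the text into roughly @chunk_size-character chunks.
  def chunk(text) do
    text
    |> String.graphemes()
    |> Enum.chunk_every(@chunk_size)
    |> Enum.map(&Enum.join/1)
  end

  def summarize_large(text, summarize_fun) do
    text
    |> chunk()
    |> Enum.map(summarize_fun)
    |> Enum.join("\n")
    |> summarize_fun.()
  end
end
```

A real implementation would chunk on token counts rather than characters, and split on sentence or paragraph boundaries, but the shape is the same.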
How does it compare to AutoGPT?
Added the Moderation endpoint and bumped the version to 0.1.5.
Not sure if that’s the right comparison here.
AutoGPT is an attempt at building an autonomous agent.
LangChain is more like a tool for people who want to use LLMs.
It's more like a nice helper library / wrapper that makes interacting with LLMs easier, instead of you having to develop all the functions yourself to get better responses from the models.