Openai_ex - OpenAI API client library

This functionality seems helpful and worth copying.

Providing an abstraction layer for different LLMs in Elixir is interesting, but it’s not the first thing I would do.

Partly, it’s a chicken-and-egg problem: LangChain can work because all LLM providers already have API bindings in Python and Node.js, and someone would have to provide those for Elixir. Secondly, there will likely be only 2-3 “winners” in the LLM space, and it will be more important to support those comprehensively than to support every possible LLM partially.

I’d be interested in implementing an application-level project like AutoGPT in Elixir, starting with OpenAI and adding the “higher-level” LangChain-like functionality if it proves necessary. Does that sound interesting?


I have added the Fine Tunes API and bumped the version to 0.1.6. All the current (non-deprecated) endpoints are now available.

Yup, sounds pretty good!


So, indeed LangChain provides all the needed abstractions, but the question remains: is there another LLM out there that could successfully replace OpenAI in a project like AutoGPT? I have tried running GPT4All, GPT-J-6B, and llama.cpp, and according to my evaluations, none of them performs well when you give it a complicated prompt, like asking it to choose a tool or to respond in a given format. Granted, I didn’t spend enough time with these, but my initial impression was that there is OpenAI, which works, and scaffolding for all the other models that may or may not work in the future.


That would be about par for the course. I would expect there to eventually be one, possibly two, other contenders. Most likely Bard.

Since this is a Livebook-focused library, I want to provide “starter” sample notebooks for people to build their own Livebook UIs. These will be sample Livebooks that can also be deployed as apps (using the new Livebook deploy functionality).

I’m beginning with Continuation Kino App — openai_ex v0.1.7, a simple Kino UI for the OpenAI completion API. It’s very basic, but I’m open to suggestions for improvements and refinements.
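For anyone who wants a feel for what such a starter notebook involves, here is a rough sketch of a minimal completion UI in a Livebook cell. It assumes an `openai` client was created earlier in the notebook (e.g. with `OpenaiEx.new(api_key)`), and the request-building calls follow the openai_ex user guide as I understand it, so treat the exact function names as approximate:

```elixir
# Sketch only: a textarea, a button, and a frame to render the result into.
prompt_input = Kino.Input.textarea("Prompt")
button = Kino.Control.button("Complete")
frame = Kino.Frame.new()

# On each button press, read the prompt and render the API response.
Kino.listen(button, fn _event ->
  prompt = Kino.Input.read(prompt_input)

  response =
    OpenaiEx.Completion.new(model: "text-davinci-003", prompt: prompt)
    |> then(&OpenaiEx.Completion.create(openai, &1))

  Kino.Frame.render(frame, Kino.Markdown.new(inspect(response)))
end)

Kino.Layout.grid([prompt_input, button, frame])
```

The same shape (input, control, frame, listener) should carry over to the other endpoints.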


@hubertlepicki it might also be worth comparing some of the newly supported models on Hugging Face. Although those are full inference nets, rather than just an API interface to one, I suppose they could be integrated into an AutoGPT-like framework.

I’ll try to work up some basic experiments when I get a chance.

This video gave me a decent introduction to LangChain: Langchain Explained in 13 Minutes | QuickStart Tutorial for Beginners - YouTube


In case it’s helpful to anyone, I contributed a streaming ChatGPT example to wojtekmach/mix_install_examples here.


Thank you @neilberkman . That’s very helpful. I will try to incorporate that into openai_ex.


Well, that turned out to be useful sooner than anticipated! I was getting warning messages from Tesla, and I thought it might be worth using Finch instead.

I’ve basically replaced Tesla with Finch. The new version is 0.1.8.
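To give a flavour of the switch, here is a rough sketch of a plain Finch call of the kind Tesla was handling before. The pool name `MyFinch`, the model, and the env var are my illustrative choices, not library code:

```elixir
# Assumes {Finch, name: MyFinch} is running in a supervision tree
# and OPENAI_API_KEY is set in the environment.
headers = [
  {"authorization", "Bearer " <> System.fetch_env!("OPENAI_API_KEY")},
  {"content-type", "application/json"}
]

body = Jason.encode!(%{model: "text-davinci-003", prompt: "Hello"})

{:ok, %Finch.Response{status: 200, body: response_body}} =
  Finch.build(:post, "https://api.openai.com/v1/completions", headers, body)
  |> Finch.request(MyFinch)
```

Finch is lower-level than Tesla (no middleware stack), so the JSON encoding/decoding and header handling move into the library itself.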

I had to add a new function to multipart. Until that’s merged, I have to depend on my GitHub clone of the upstream.

Unfortunately, Hex doesn’t allow non-Hex dependencies, so I can’t upload this version over there.

multipart 0.4.0 is out with my PR merged, so I have now uploaded openai_ex v0.1.8 to Hex.

I have added Image Generation Kino App — openai_ex v0.5.1, a starter Kino UI for the Image Generation endpoint.


Just published. v0.1.9 - File uploads from local paths are now streamed.
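For context, the underlying idea can be sketched like this (a sketch, not the library’s actual code): Finch accepts a `{:stream, enumerable}` request body, so a local file can be fed in chunks instead of being read into memory first. The path, chunk size, and headers below are illustrative, and the multipart framing a real upload needs is elided:

```elixir
# Read the file lazily in 64 KB chunks; Finch sends them as they are produced.
# A real upload additionally needs multipart boundaries around these bytes.
file_stream = File.stream!("path/to/audio.mp3", [], 64_000)

request =
  Finch.build(
    :post,
    "https://api.openai.com/v1/files",
    [{"authorization", "Bearer " <> api_key}],
    {:stream, file_stream}
  )

# {:ok, response} = Finch.request(request, MyFinch)
```

This keeps memory usage flat regardless of file size.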


I was pointed to this (‘free whilst in beta’) course: ChatGPT Prompt Engineering for Developers

blurb: “The course teaches prompt engineering best practices for application development, different ways to use LLMs, and how to iterate on prompts using the OpenAI API. It covers how to write effective prompts, how to systematically engineer good prompts, and how to build a custom chatbot. The course is beginner-friendly, but it is also suitable for advanced machine learning engineers.”

It is run by Andrew Ng, of Coursera and AI fame, with a guest instructor from OpenAI.

The course is in Python and Jupyter (unfortunately not Elixir and Livebook), but the main thrust is showing how to organise the process of integrating an LLM into any system, so it is not Python- or Jupyter-specific.


@neilberkman I’m looking into supporting streaming responses from the chat continuation endpoint. I notice that your solution works by connecting a GenServer to eventsource_ex, which appears to use HTTPoison under the hood.

I’ve already moved from Tesla to Finch for connecting to the endpoints. Do you have any pointers to making this work with Finch? It seems to me that I would need to pass in a custom stream(acc) function which sends messages to a GenServer/Agent entity every time it is called.
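For what it’s worth, `Finch.stream/5` already takes exactly that kind of accumulator function, so one option is to have it send each body chunk to the caller (or a GenServer) as it arrives. A rough sketch, assuming a Finch pool named `MyFinch` is already started; none of the names here are openai_ex API:

```elixir
defmodule StreamSketch do
  # Streams a response body, sending each data chunk to `pid` as it arrives.
  # Assumes a Finch pool named MyFinch is running in a supervision tree.
  def stream_to(pid, request) do
    Finch.stream(request, MyFinch, nil, fn
      {:status, status}, acc ->
        send(pid, {:status, status})
        acc

      {:headers, _headers}, acc ->
        acc

      {:data, data}, acc ->
        send(pid, {:chunk, data})
        acc
    end)

    send(pid, :done)
  end
end

# Usage sketch: run the request in a task, then receive chunks as they arrive.
# caller = self()
# request = Finch.build(:post, url, headers, body)
# Task.start(fn -> StreamSketch.stream_to(caller, request) end)
```

The caller then consumes `{:chunk, data}` messages in a `receive` loop (or a GenServer `handle_info`) until `:done`.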

I have an open PR to Req to better enable streaming and made a Livebook example of how it would work with Finch here. You should be able to adapt the FinchStream module to your needs :slight_smile:


Fantastic @zachallaun . Thank you!!! I got it working in a couple of hours (had a little trouble parsing the chunks). Now I just need an example for the user guide, and it should be good to release.
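In case anyone else hits the same chunk-parsing trouble: the streaming endpoint sends server-sent events, where each chunk may contain several `data: ...` lines and the stream ends with `data: [DONE]`. A rough sketch of the splitting step (the module name is mine, not part of openai_ex; it also ignores the case where a network chunk splits an event across two reads):

```elixir
defmodule SseSketch do
  # Splits a raw SSE chunk into the JSON payloads of its `data:` lines,
  # dropping blank lines and the terminal `[DONE]` marker.
  def parse_chunk(chunk) do
    chunk
    |> String.split("\n")
    |> Enum.map(&String.trim/1)
    |> Enum.filter(&String.starts_with?(&1, "data:"))
    |> Enum.map(&String.trim_leading(&1, "data:"))
    |> Enum.map(&String.trim/1)
    |> Enum.reject(&(&1 == "[DONE]"))
  end
end
```

Each returned payload can then be handed to `Jason.decode!/1` to get the delta.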

Glad I could help! Maybe when/if that PR lands, you could switch back to Req and go full circle: Req → Tesla → Finch → Req :smile:


Indeed! :joy: Although I do need multipart support as well. Still, it’s nice that there are so many more-or-less usable options.