I have published v0.8.3 to bring back the completions API which is still being used by 3rd party API providers.
Shoutout to GitHub user @kalocide for bringing the issue to my attention and for the PR.
Shoutout to the author @restlessronin; this is currently the best-maintained OpenAI client library for Elixir.
Hi! Thanks for putting together an awesome library, @restlessronin! Is there any way I can request realtime audio API (aka advanced voice) access?
Thanks!
@cgraham thank you for the kind words.
Sure. Go ahead and open an issue for this on the repo.
Great. Just submitted!
Saw your issue in restlessronin's repo (my comment).
Here is a sample implementation in Elixir/Phoenix for the Realtime API: GitHub - francoabaroa/openai_realtime_ex: OpenAI Realtime Elixir Demo
Still a WIP
I have published v0.8.4 with Portkey support, a project id parameter, and a fix for the :nxdomain problem.
Shoutout to GitHub users @kofron for the Portkey PR, @adammokan for the project id PR, and @daniellionel01 for filing the issue that revealed the :nxdomain problem.
Thanks! Just tried it, and it's definitely a great first start! I couldn't get it to listen to me (it just kept saying "hello, how can I help you"), but I figured it was something weird on my end.
As for my update: I ended up taking the JS/Python version of the realtime console example from OpenAI and then used Claude to rewrite it into Elixir, including a relay server to support both web and Twilio. Took some noodling, but it surprisingly works! And I grabbed the VAD code too, so it does voice activation instead of walkie-talkie.
Code-wise, I am not going to say it was the best code (it is not!) but if you know Elixir it gives you a huge head start and you can refactor it pretty easily.
Makes me wonder if we (aka the Elixir community) should port over a bunch of example apps and/or simple libraries from JS/Python to build up our ecosystem. Obviously performance is likely suboptimal, and the AI may not be great at using the BEAM, let alone producing great code, but simple algorithmic or example libraries may work and could start minimizing one of the bigger friction points for Elixir.
On the flip side, filling up GitHub with junk Elixir code may be bad. I do feel there is something that can be leveraged, though. Maybe we just need to train an awesome Elixir-porting LLM.
Also, I'll add that the Realtime API is currently ridiculously expensive (over $1.00 a minute), so a word of caution to anyone trying it!
I think this may be one of the ways in which "missing" libraries can be added to many "niche" runtimes. At the moment (and this could easily change within a year), AI code is definitely sub-par, but it can serve as a starting point for an experienced dev to polish into something usable. It will definitely reduce the effort enough to enable a richer ecosystem of libraries.
Nothing weird on your end - I haven't updated it, so it's still stuck on the first hello. My apologies!
Another interesting example you can check out is here: membrane_demo/livebooks/openai_realtime_with_membrane_webrtc at master · membraneframework/membrane_demo · GitHub
The docs on how to stream chat completions make it so clear what’s going on. I was able to implement streaming chat completions in my LiveView in about 15 minutes. Thank you for this excellent work!
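For anyone landing here later, the basic streaming pattern looks roughly like this. This is a sketch based on the library's documented chat-completions API; the model name is a placeholder, and the return shape may differ slightly between versions:

```elixir
# Sketch: stream a chat completion chunk by chunk.
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

request =
  OpenaiEx.Chat.Completions.new(
    model: "gpt-4o-mini",
    messages: [OpenaiEx.ChatMessage.user("Say hello in Elixir")]
  )

{:ok, response} = OpenaiEx.Chat.Completions.create(openai, request, stream: true)

# body_stream yields server-sent-event chunks as they arrive; in a LiveView
# you would forward each chunk to the socket process instead of printing it.
response.body_stream
|> Stream.each(&IO.inspect/1)
|> Stream.run()
```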
I have published v0.8.5 which adds the store parameter
Shoutout to @MMore for the PR.
I have published v0.8.6 with support for developer messages and reasoning models.
Thanks for an excellent library. We cooked up our own, not-so-great implementation and would love to switch to this library. A couple of questions:
Many of OpenAI's APIs return immediately but are not yet complete (i.e. the status is running). I checked the Python library, and it has a polling mechanism to wait for the status, but I don't see the same in the Elixir code base. This is true for create_and_run and also for plain run.
Am I missing something?
lobo
@dlobo apologies for the delayed response. I only check the forum from time to time, so I didn't see this. Apparently I only get immediate notifications if I am tagged in a message.
The library handles run status monitoring through streaming - just pass `stream: true` when creating a run. The response includes a `body_stream` field, which is an Elixir Stream, allowing you to efficiently process run updates using Elixir's Stream API for real-time monitoring.
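To make that concrete, here is a minimal sketch of consuming the stream. The thread and assistant IDs are placeholders, and the exact return shape of `create` may vary between library versions, so treat this as an outline rather than copy-paste code:

```elixir
# Sketch: stream run events as they arrive (IDs are placeholders).
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

{:ok, run} =
  OpenaiEx.Beta.Threads.Runs.create(
    openai,
    %{thread_id: "thread_abc123", assistant_id: "asst_abc123", stream: true}
  )

# body_stream is a lazy Elixir Stream of server-sent events; nothing is
# fetched until the stream is consumed.
run.body_stream
|> Stream.each(&IO.inspect/1)
|> Stream.run()
```

In a LiveView you would typically run this in a `Task` and `send/2` each event back to the LiveView process so the socket stays responsive.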
If you need polling (in a non-streamed call), you can implement it using `Runs.retrieve/2`. Here's a simple example:
def poll_run(openai, thread_id, run_id, interval \\ 1000) do
  case OpenaiEx.Beta.Threads.Runs.retrieve(openai, %{thread_id: thread_id, run_id: run_id}) do
    # Terminal states: stop polling and report the final status.
    {:ok, %{"status" => status}} when status in ["completed", "failed", "cancelled", "expired"] ->
      {:done, status}

    # Still in progress: wait, then check again.
    {:ok, %{"status" => _status}} ->
      Process.sleep(interval)
      poll_run(openai, thread_id, run_id, interval)

    # Propagate any API error to the caller.
    error ->
      error
  end
end
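For completeness, a hypothetical usage of a polling helper like the one above. The thread and run IDs are placeholders; in practice they come from a previously created run:

```elixir
# Sketch: poll an existing run until it reaches a terminal state.
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

case poll_run(openai, "thread_abc123", "run_abc123") do
  {:done, "completed"} -> IO.puts("Run finished successfully")
  {:done, status} -> IO.puts("Run ended with status: #{status}")
  error -> IO.inspect(error, label: "retrieve failed")
end
```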
Let me know if you have any other questions. Please tag me so it doesn’t slip through the cracks again