Openai_ex - OpenAI API client library


It appears that the second authorization method mentioned here (Microsoft Entra ID) results in an API call that’s identical to the OpenAI call. You just have to use the Entra ID token in place of your OpenAI API key. You would not have to make any changes to the library code.

The second change in your commit changes the actual URL path. It may be possible to maintain a static mapping of OpenAI URLs to Azure (or other 3rd party) URLs which is applied before the API call. I’m not terribly keen to do this, but will consider it, if it’s possible to keep the existing call signatures undisturbed.
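Purely as an illustration of that idea (this is not openai_ex code), such a static mapping could be a plain map with a pass-through default:

```elixir
# Illustrative sketch, not part of openai_ex: a static map from OpenAI
# URL paths to Azure-style paths, applied just before the API call.
defmodule PathMap do
  # "{deployment-id}" is a placeholder that a real implementation would
  # substitute with the user's Azure deployment name.
  @azure_paths %{
    "/chat/completions" => "/openai/deployments/{deployment-id}/chat/completions"
  }

  # Unmapped paths pass through unchanged, so existing OpenAI call
  # signatures stay undisturbed.
  def map_path(path), do: Map.get(@azure_paths, path, path)
end
```

Because unmapped paths fall through untouched, users who never opt in to the mapping would see no behavior change.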

If MS has their own separate OpenAPI spec, instead of re-using the OpenAI one, that’s a sign that they intend to go their own way in the future. In that case, it may be better to create a separate azureai_ex library right now, instead of forcing this one to perform double duty.

@rched I have created an issue for this.

As outlined above, I am calling a path mapping function to determine the URL just before making the API call. I have enabled this for the chat completion API in this issue branch.

Please try this out and see if this approach solves your problem.

If it does, we can apply it to all the API calls. The call signatures and semantics remain unchanged for everyone who doesn’t use the path mapping function.

I will not explicitly commit to supporting Azure, but as long as the API doesn’t deviate too far from OpenAI, this should continue to work.

If this works, it would be great if you could contribute a livebook to document how the Azure configuration should be done.

@restlessronin I’ll give that a try if I get some time to figure out Entra IDs, though they seem to add significant friction to the setup process, which may make it tough to rely on for my use case.


@rched If it’s that much of a pain, let me see what I can do to provide an alternative. I’ll try to come up with something today. Since I don’t use Azure myself, it would be great if you could test and provide feedback, possibly a livebook on the setup that works for you.

@rched I have created another approach that should work. Please try this, and let me know if it’s working.

@rched I have now packaged what I believe to be basic Azure support in a for_azure function. All you should have to do is call it.
I do not work with Azure. Can you please test this to ensure that it works?
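For anyone trying this out, here is a minimal configuration sketch. The for_azure name comes from this thread, but the exact arguments, and the resource, deployment, and API-version values shown, are placeholders of mine; check the user guide for the authoritative form.

```elixir
# Sketch only: for_azure is named in this thread, but its argument list
# and the "my-resource" / "my-deployment" / api-version values here are
# placeholder assumptions.
api_key = System.fetch_env!("AZURE_OPENAI_API_KEY")

openai =
  api_key
  |> OpenaiEx.new()
  |> OpenaiEx.for_azure("my-resource", "my-deployment", "2024-02-01")

# Subsequent calls should use the same signatures as for OpenAI.
chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "my-deployment",
    messages: [OpenaiEx.ChatMessage.user("Hello from Azure!")]
  )

OpenaiEx.Chat.Completions.create(openai, chat_req)
```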

If someone other than @rched is interested in Azure support, please try this and see if this is a satisfactory solution.

@restlessronin Yep works great. Thanks for adding this. I’d be happy to open a PR with a livebook if you’d like.

@rched thanks for testing. I’ve added some (non executable) documentation to the main user guide. I’m not sure if a separate livebook adds any value over that. WDYT?

I have released v0.5.7 with Azure OpenAI support.

Shoutout to @rched for initiating the work, providing references and code samples and testing.

I agree. A separate livebook seems unnecessary.


Published v0.5.8 which includes this PR.

Thanks to GitHub user @kernel-io for the PR.

Published v0.5.9 which is an alternative fix to the bug reported in this PR.

Thanks to @aramallo for the PR.

Published v0.6.0 which

  1. Implements the new Batch API.
  2. Changes many of the module FQNs to match those of the Python library.
  3. Adds preliminary support for comparing against the latest OpenAPI spec.
  4. Makes some additions to the accepted parameters for various API endpoints.

I’ve just published v0.6.2 to fix a bug in v0.6.1, released earlier today, which

  1. Adds support for Assistants API beta 2, including vector stores (except for streaming runs - coming soon).
  2. Removes Assistants API beta 1 support.
  3. Accommodates what looks like a bug in the LM Studio streaming server.

I have just published v0.6.3 with support for Run execution streaming. At this point the library should be up to date with the entire (non-deprecated) OpenAI API (as of May 1, 2024).

Shoutout to @eddy147 for bringing the (lack of) Run execution streaming to my attention.

Released v0.6.4 with a fix for a bad Run streaming payload.

This appears to be due to a (possibly deliberate) deviation from the SSE spec. I have already asked about this on the OpenAI forums.

Shoutout to @Xantrac for bringing this to my attention.

Published v0.6.5 with a fix for non-interactive handling of stream errors.

This has been on the roadmap for a while, but I was waiting for a concrete example and suggested ‘correct’ behaviour before proceeding. Shoutout to @aramallo for providing both.

I have published v0.7.0 with support for stream timeouts and exception raising during stream processing.

  1. Implemented exception raising during stream processing, including for user-initiated cancellations. Note: User code will need to be modified to handle these exceptions in cases where cancellation is supported.
  2. Users can now set a custom timeout for SSE streams, allowing better control over long-running operations. AFAIK this is only useful when working with third-party OpenAI API implementations.

Shoutout to @aramallo for providing the stream timeout code, and the base version of a non-local return. Also to GitHub user @Madd0g for feedback on the proposed solution.

The user-guide and streaming-orderbot livebooks have been revised to reflect these new features, providing examples for working with user cancellation, stream timeouts and in general handling the new exception raising behavior.

I have published v0.8.0 with the long-pending enhancement: systematic error handling for non-Livebook use cases. Shoutout to everyone who upvoted that issue.

  • Functions now return :ok and :error tuples for explicit error handling.
  • New bang (!) versions of functions that raise exceptions on errors.
  • Error types align with the official OpenAI Python library.

The downside is that this requires changes to most client code.
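As a sketch of what those client-code changes look like, assuming openai and chat_req are built as in the user-guide examples (handle_response/1 is a placeholder for application code):

```elixir
require Logger

# Sketch of the v0.8.0 calling convention: plain functions return
# {:ok, ...} / {:error, ...} tuples for explicit pattern matching.
case OpenaiEx.Chat.Completions.create(openai, chat_req) do
  {:ok, response} -> handle_response(response)
  {:error, error} -> Logger.warning("chat completion failed: #{inspect(error)}")
end

# The new bang variant raises an exception on error instead:
response = OpenaiEx.Chat.Completions.create!(openai, chat_req)
```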

I am also going to remove references to Livebook from the description, since it’s giving people the impression that the library can’t be used with (for example) Phoenix.