The `openai_responses` library has been updated to v0.5.1, bringing significant enhancements since v0.4.0. This release adds union type support, manual function-calling control, built-in error handling with retry logic, and flexible API options. See the full changelog for complete details.
## Union Types with anyOf
Define properties that can be one of several types:

```elixir
alias OpenAI.Responses

Responses.create!(
  input: "Generate a product listing",
  schema: %{
    price: {:anyOf, [:number, :string]},          # Can be 29.99 or "$29.99"
    tags: {:anyOf, [:string, {:array, :string}]}  # Can be "electronics" or ["laptop", "gaming"]
  }
)
```
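Because either branch of a union can come back at runtime, consumer code should be ready to normalize both. A minimal sketch (the normalizer and its inputs are illustrative, not part of the library):

```elixir
# Normalize a union-typed price to a number, accepting either branch
# of {:anyOf, [:number, :string]} (illustrative example values)
normalize_price = fn
  price when is_number(price) -> price
  "$" <> rest -> String.to_float(rest)
end

normalize_price.(29.99)    # => 29.99
normalize_price.("$29.99") # => 29.99
```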
## Manual Function Calling
Take control of function execution with the new `call_functions/2`:

```elixir
require Logger

# Get a response with function calls
{:ok, response} = Responses.create(
  input: "What's the weather in Paris and London?",
  tools: [weather_tool]
)

# Manually execute functions with custom logic
outputs = Responses.call_functions(response.function_calls, %{
  "get_weather" => fn %{"location" => city} ->
    # Add logging, caching, or modify results
    Logger.info("Weather requested for #{city}")
    weather = fetch_weather_from_api(city)
    %{temperature: weather.temp, conditions: weather.desc, cached_at: DateTime.utc_now()}
  end
})

# Continue the conversation with the enriched results
{:ok, final} = Responses.create(response, input: outputs)
```
## Error Handling with Retry Support
The new `OpenAI.Responses.Error` module provides intelligent error handling:

```elixir
require Logger

case Responses.create(input: "Hello") do
  {:ok, response} ->
    IO.puts(response.text)

  {:error, error} ->
    if OpenAI.Responses.Error.retryable?(error) do
      # Retry with exponential backoff for 429, 500, 503, or timeout errors
      :timer.sleep(1000)
      # retry the request
    else
      # Handle non-retryable errors
      Logger.error("API error: #{error.message}")
    end
end
```
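The `# retry the request` placeholder above can be fleshed out with a small recursive helper. `RetryHelper` below is an illustrative sketch built on `retryable?/1`, not part of the library:

```elixir
defmodule RetryHelper do
  alias OpenAI.Responses

  # Retry Responses.create/1 with exponential backoff for retryable
  # errors (illustrative; attempt count and delays are arbitrary defaults).
  def create_with_retry(options, attempts \\ 3, delay_ms \\ 1_000) do
    case Responses.create(options) do
      {:ok, response} ->
        {:ok, response}

      {:error, error} = result ->
        if attempts > 1 and OpenAI.Responses.Error.retryable?(error) do
          :timer.sleep(delay_ms)
          # Double the delay on each retry: 1s, 2s, 4s, ...
          create_with_retry(options, attempts - 1, delay_ms * 2)
        else
          result
        end
    end
  end
end
```

Giving up after a fixed number of attempts keeps a persistently failing request from blocking the caller indefinitely.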
## Flexible API Options
All major functions now accept both keyword lists and maps:

```elixir
# Traditional keyword list
Responses.create(input: "Hello", model: "gpt-4o", temperature: 0.7)

# Using maps (great for dynamic options)
options = %{input: "Hello", model: "gpt-4o", temperature: 0.7}
Responses.create(options)

# Works with streaming too
Responses.stream(%{
  input: "Write a story",
  stream: Responses.Stream.delta(&IO.write/1)
})
```
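Since plain maps are accepted, request options can be assembled dynamically, for example by merging per-request overrides into shared defaults. A sketch (the defaults and helper are illustrative, not part of the library):

```elixir
# Shared defaults merged with per-request overrides before calling the API
defaults = %{model: "gpt-4o", temperature: 0.7}

build_request = fn input, overrides ->
  defaults
  |> Map.merge(overrides)
  |> Map.put(:input, input)
end

Responses.create(build_request.("Hello", %{temperature: 0.2}))
```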
## Additional Improvements
- Model preservation in follow-up responses (no more accidentally switching models)
- Function calls now properly handle JSON-encodable return values
- Support for string keys in schema definitions (database-friendly)
- Cost calculation for new models including o3
- Enhanced documentation and LLM usage guide
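For example, the string-key support means a schema loaded from external storage can be passed without converting its keys to atoms. A sketch, assuming the same `schema:` option shown above (the schema contents are illustrative):

```elixir
# Keys may be strings, e.g. when the schema comes from a database or JSON file
schema = %{"name" => :string, "age" => :number}

Responses.create!(
  input: "Generate a sample user profile",
  schema: schema
)
```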
The API remains backward compatible while providing more flexibility and control over your AI interactions.