This avoids resource-cleanup issues (too many open files) under massively parallel calls to the OpenAI API, since relying on implicit, RAII-style finalization in Python is unreliable in such contexts.
- Corrects the retry_if_exception_type arguments passed to the retry decorators so the intended exceptions are matched and retried
- Implements explicit resource cleanup for async iterators to prevent memory leaks and potential SIGSEGV errors (both patterns are sketched below)
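A minimal sketch of both patterns, assuming tenacity and the openai v1 SDK (where the streamed response exposes an async `close()`); the `stream_completion` wrapper, the model name, and the specific exception set are illustrative, not the project's actual code:

```python
from openai import APIConnectionError, APITimeoutError, AsyncOpenAI, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    # Correct form: retry_if_exception_type takes the exception classes themselves
    # (here as a tuple) and is passed via the `retry=` keyword.
    retry=retry_if_exception_type((RateLimitError, APIConnectionError, APITimeoutError)),
)
async def stream_completion(client: AsyncOpenAI, prompt: str) -> str:
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    chunks: list[str] = []
    try:
        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                chunks.append(chunk.choices[0].delta.content)
    finally:
        # Explicitly release the underlying HTTP connection even if iteration
        # is abandoned early; with thousands of concurrent calls, waiting for
        # garbage collection to do this exhausts file descriptors.
        await stream.close()
    return "".join(chunks)
```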
Add support for custom client configurations in the OpenAI integration,
allowing for more flexible configuration of the AsyncOpenAI client.
This includes:
- Create a reusable helper function `create_openai_async_client` (sketched after this message)
- Add proper documentation for client configuration options
- Ensure consistent parameter precedence across the codebase
- Update the embedding function to support client configurations
- Add example script demonstrating custom client configuration usage
The changes maintain backward compatibility while providing a cleaner
and more maintainable approach to configuring OpenAI clients.
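A sketch of what the helper and the `client_configs` pass-through might look like; the exact signature and precedence rules below are assumptions for illustration, not the merged implementation:

```python
from typing import Any

from openai import AsyncOpenAI


def create_openai_async_client(
    api_key: str | None = None,
    base_url: str | None = None,
    client_configs: dict[str, Any] | None = None,
) -> AsyncOpenAI:
    """Build an AsyncOpenAI client from explicit arguments plus optional extras.

    Assumed precedence (highest first): explicit api_key/base_url arguments,
    then entries in client_configs, then the SDK's own environment-variable
    defaults (e.g. OPENAI_API_KEY).
    """
    kwargs: dict[str, Any] = dict(client_configs or {})
    if api_key is not None:
        kwargs["api_key"] = api_key  # explicit argument wins over client_configs
    if base_url is not None:
        kwargs["base_url"] = base_url
    return AsyncOpenAI(**kwargs)


# Usage: forward any AsyncOpenAI constructor option (timeouts, retry counts,
# a custom http_client, ...) without widening the caller's signature.
client = create_openai_async_client(
    client_configs={"timeout": 30.0, "max_retries": 0},
)
```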
• Add User-Agent header with version info (see the header sketch after this list)
• Update header creation in Ollama client
• Update header creation in OpenAI client
• Ensure consistent header format
• Include Mozilla UA string for OpenAI
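A sketch of the shared header helper these bullets describe; the function name, the `__version__` placeholder, and the exact UA strings are illustrative assumptions:

```python
__version__ = "0.0.0"  # placeholder; the real code would use the package's own version string


def create_request_headers(api_key: str | None = None, *, mozilla_ua: bool = False) -> dict[str, str]:
    """Build request headers with a version-tagged User-Agent.

    One helper serves both the Ollama and OpenAI clients so the header format
    stays consistent; OpenAI requests additionally use a Mozilla-style UA
    prefix (mozilla_ua=True). 'MyClient' stands in for the project's client name.
    """
    if mozilla_ua:
        user_agent = f"Mozilla/5.0 (compatible; MyClient/{__version__})"
    else:
        user_agent = f"MyClient/{__version__}"
    headers = {
        "User-Agent": user_agent,
        "Content-Type": "application/json",
    }
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers
```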