Add support for custom client configurations in the OpenAI integration,
allowing the AsyncOpenAI client to be configured more flexibly.
This includes:
- Create a reusable helper function `create_openai_async_client` (sketched below)
- Add proper documentation for client configuration options
- Ensure consistent parameter precedence across the codebase
- Update the embedding function to support client configurations
- Add example script demonstrating custom client configuration usage
The changes maintain backward compatibility while providing a cleaner
and more maintainable approach to configuring OpenAI clients.
https://github.com/HKUDS/LightRAG/pull/864#issuecomment-2669705946
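A minimal sketch of what the helper might look like, assuming a `client_configs` dict and the explicit-arguments-win precedence described above (the parameter names here are illustrative, not the exact signature merged in the PR):

```python
import os
from typing import Any, Optional

from openai import AsyncOpenAI


def create_openai_async_client(
    api_key: Optional[str] = None,
    base_url: Optional[str] = None,
    client_configs: Optional[dict[str, Any]] = None,
) -> AsyncOpenAI:
    """Build an AsyncOpenAI client from optional overrides.

    Assumed precedence (highest first): explicit arguments, then
    client_configs, then environment defaults such as OPENAI_API_KEY.
    """
    merged: dict[str, Any] = dict(client_configs or {})
    if api_key is not None:
        merged["api_key"] = api_key  # explicit argument wins
    merged.setdefault("api_key", os.environ.get("OPENAI_API_KEY"))
    if base_url is not None:
        merged["base_url"] = base_url
    return AsyncOpenAI(**merged)
```

A caller can then pass standard AsyncOpenAI options such as `client_configs={"timeout": 30.0, "max_retries": 5}` while still overriding the API key or base URL per call site.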
- Created two new example scripts demonstrating LightRAG integration with LlamaIndex:
  - `lightrag_llamaindex_direct_demo.py`: Direct OpenAI integration
  - `lightrag_llamaindex_litellm_demo.py`: LiteLLM proxy integration
- Both examples showcase the different search modes (naive, local, global, hybrid)
- Include configuration for the working directory, models, and API settings
- Demonstrate text insertion and querying using LightRAG with LlamaIndex (a condensed sketch follows this list)
- Removed the wrapper directory and references to it
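A condensed sketch of the demo flow; the stub LLM and embedding functions are placeholders for the LlamaIndex-backed implementations the scripts actually wire up (direct OpenAI in one, a LiteLLM proxy in the other):

```python
import numpy as np

from lightrag import LightRAG, QueryParam
from lightrag.utils import EmbeddingFunc


# Placeholders: in the demos these call into LlamaIndex LLM/embedding models.
async def llm_model_func(prompt: str, **kwargs) -> str:
    return "stub answer"


async def embedding_func(texts: list[str]) -> np.ndarray:
    return np.random.rand(len(texts), 1536)


rag = LightRAG(
    working_dir="./lightrag_llamaindex_demo",
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=1536, max_token_size=8192, func=embedding_func
    ),
)

rag.insert("LightRAG combines graph-based indexing with LLM generation.")

# Run the same question through every search mode, as both demos do.
for mode in ("naive", "local", "global", "hybrid"):
    print(f"--- {mode} ---")
    print(rag.query("What is LightRAG?", param=QueryParam(mode=mode)))
```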
- Implemented a LlamaIndex interface for language model interactions
- Added async chat completion support
- Included embedding generation functionality
- Implemented retry mechanisms for API calls
- Added configuration and message-formatting utilities
- Added support for OpenAI-style message handling and external settings (the retry and message-formatting pattern is sketched after this list)
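A sketch of the retry and message-formatting pattern, using tenacity for backoff and LlamaIndex's `ChatMessage` type; the retried exception types and limits below are assumptions, not the interface's exact policy:

```python
from typing import Optional

from llama_index.core.llms import ChatMessage, MessageRole
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)


def format_chat_messages(
    prompt: str, system_prompt: Optional[str] = None
) -> list[ChatMessage]:
    """Convert OpenAI-style inputs into LlamaIndex ChatMessage objects."""
    messages: list[ChatMessage] = []
    if system_prompt:
        messages.append(ChatMessage(role=MessageRole.SYSTEM, content=system_prompt))
    messages.append(ChatMessage(role=MessageRole.USER, content=prompt))
    return messages


@retry(  # assumed policy: three attempts with exponential backoff
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type((ConnectionError, TimeoutError)),
)
async def achat_with_retry(llm, messages: list[ChatMessage]) -> str:
    # llm is any LlamaIndex LLM; achat is its async chat-completion entry point.
    response = await llm.achat(messages)
    return response.message.content
```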
- Add User-Agent header with version info
- Update header creation in the Ollama client
- Update header creation in the OpenAI client
- Ensure a consistent header format across clients
- Include a Mozilla UA string for OpenAI (illustrated below)
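A hypothetical illustration of the shared header format; the exact User-Agent string, and how the version is obtained, are assumptions:

```python
import platform

LIGHTRAG_VERSION = "1.2.0"  # assumed; read from package metadata in practice


def client_headers(browser_style: bool = False) -> dict[str, str]:
    """Build the common headers, optionally with the Mozilla-prefixed UA
    used for the OpenAI client."""
    ua = f"LightRAG/{LIGHTRAG_VERSION} (Python {platform.python_version()})"
    if browser_style:
        ua = f"Mozilla/5.0 {ua}"  # browser-like prefix for OpenAI endpoints
    return {"Content-Type": "application/json", "User-Agent": ua}
```

The OpenAI client can receive these via `AsyncOpenAI(default_headers=client_headers(browser_style=True))`; the Ollama client takes an analogous headers argument.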