https://github.com/HKUDS/LightRAG/pull/864#issuecomment-2669705946
- Created two new example scripts demonstrating LightRAG integration with LlamaIndex:
- `lightrag_llamaindex_direct_demo.py`: Direct OpenAI integration
- `lightrag_llamaindex_litellm_demo.py`: LiteLLM proxy integration
- Both examples showcase the different search modes (naive, local, global, and hybrid)
- Both include configuration for the working directory, models, and API settings
- Both demonstrate text insertion and querying with LightRAG through LlamaIndex
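The demo flow above can be sketched roughly as follows. This is an illustrative outline, not the actual script: names like `WORKING_DIR` and the plumbing of `llm_model_func`/`embedding_func` are assumptions, and the real examples wire these up to LlamaIndex-backed OpenAI or LiteLLM calls.

```python
# Hedged sketch of the example scripts: insert a document, then query it
# in each search mode. The lightrag import is kept inside the function so
# the module's constants can be inspected without the package installed.
SEARCH_MODES = ["naive", "local", "global", "hybrid"]

def run_demo(llm_model_func, embedding_func, working_dir="./rag_storage"):
    from lightrag import LightRAG, QueryParam  # requires lightrag installed

    rag = LightRAG(
        working_dir=working_dir,
        llm_model_func=llm_model_func,    # LlamaIndex-backed completion
        embedding_func=embedding_func,    # LlamaIndex-backed embeddings
    )
    rag.insert("LightRAG pairs a knowledge graph with vector retrieval.")
    for mode in SEARCH_MODES:
        answer = rag.query("What is LightRAG?", param=QueryParam(mode=mode))
        print(f"[{mode}] {answer}")
```

The direct-OpenAI and LiteLLM variants differ only in which model functions are passed in; the insert/query loop is the same.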
- Removed the wrapper directory and all references to it
- Implemented LlamaIndex interface for language model interactions
- Added async chat completion support
- Included embedding generation functionality
- Implemented retry mechanisms for API calls
- Added configuration and message formatting utilities
- Supports OpenAI-style message handling and external settings
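The interface pieces listed above (message formatting, async chat completion, retries) can be sketched together as below. All names here (`format_messages`, `with_retries`, `chat_complete`) are hypothetical stand-ins for the PR's actual helpers, and the stub LLM replaces a real LlamaIndex chat call.

```python
import asyncio

def format_messages(prompt, system_prompt=None, history=None):
    """Assemble an OpenAI-style message list (hypothetical helper)."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history or [])
    messages.append({"role": "user", "content": prompt})
    return messages

async def with_retries(call, max_attempts=3, base_delay=0.01):
    """Retry an async API call with exponential backoff on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts:
                raise
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

# Stub LLM that fails once before answering, standing in for a real
# LlamaIndex-backed chat completion.
calls = {"n": 0}

async def stub_llm(messages):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient API error")
    return f"echo:{messages[-1]['content']}"

async def chat_complete(prompt, **kwargs):
    messages = format_messages(prompt, **kwargs)
    return await with_retries(lambda: stub_llm(messages))

reply = asyncio.run(chat_complete("What is LightRAG?", system_prompt="Be concise."))
print(reply)  # → echo:What is LightRAG?
```

The lambda passed to `with_retries` creates a fresh coroutine per attempt, which is why the retry loop can await the call more than once.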