- Remove optional 'modes' parameter from aclear_cache() and clear_cache() methods
- Replace deprecated drop_cache_by_modes() with drop() method for complete cache clearing
- Update API endpoint to ignore mode-specific parameters and clear the entire cache
- Simplify frontend clearCache() function to send empty request body
This change ensures the entire LLM cache is cleared in a single operation (sketched below).
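A minimal sketch of the simplified clearing path, assuming a hypothetical `RAGService` wrapper and cache store (the real class and attribute names in the codebase may differ); only `aclear_cache()`, `clear_cache()`, and `drop()` come from the change description above:

```python
import asyncio


class LLMCacheStore:
    """Illustrative stand-in for the LLM response cache storage."""

    def __init__(self) -> None:
        self._data = {"default": {}, "naive": {}, "local": {}, "global": {}}

    async def drop(self) -> None:
        # Replaces the old drop_cache_by_modes(): every mode is dropped at once.
        self._data = {}


class RAGService:
    def __init__(self) -> None:
        self.llm_response_cache = LLMCacheStore()

    async def aclear_cache(self) -> None:
        # No 'modes' parameter anymore: the whole LLM cache is cleared.
        await self.llm_response_cache.drop()

    def clear_cache(self) -> None:
        asyncio.run(self.aclear_cache())


if __name__ == "__main__":
    RAGService().clear_cache()
```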
- Add global --temperature command line argument with env fallback
- Implement temperature priority for Ollama LLM binding (resolution sketched after this list):
  1. --ollama-llm-temperature (highest)
  2. OLLAMA_LLM_TEMPERATURE env var
  3. --temperature command arg
  4. TEMPERATURE env var (lowest)
- Implement same priority logic for OpenAI/Azure OpenAI LLM binding
- Ensure command line args always override environment variables
- Maintain backward compatibility with existing configurations
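A hedged sketch of how that four-level resolution could be implemented; the flag and env var names come from the list above, while `resolve_ollama_temperature` and the argparse wiring are illustrative assumptions rather than the shipped code:

```python
import argparse
import os


def resolve_ollama_temperature(args: argparse.Namespace) -> float | None:
    """Resolve temperature using the documented priority order.

    1. --ollama-llm-temperature   (highest)
    2. OLLAMA_LLM_TEMPERATURE env var
    3. --temperature
    4. TEMPERATURE env var        (lowest)
    """
    if args.ollama_llm_temperature is not None:
        return args.ollama_llm_temperature
    if "OLLAMA_LLM_TEMPERATURE" in os.environ:
        return float(os.environ["OLLAMA_LLM_TEMPERATURE"])
    if args.temperature is not None:
        return args.temperature
    if "TEMPERATURE" in os.environ:
        return float(os.environ["TEMPERATURE"])
    return None


parser = argparse.ArgumentParser()
parser.add_argument("--temperature", type=float, default=None)
parser.add_argument("--ollama-llm-temperature", type=float, default=None)
print(resolve_ollama_temperature(parser.parse_args()))
```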
- Add OpenAILLMOptions dataclass with full OpenAI API parameter support
- Integrate OpenAI options in config.py for automatic binding detection
- Update server functions to inject OpenAI options for openai/azure_openai bindings
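A sketch of what such a dataclass might look like, assuming a representative subset of OpenAI chat-completion parameters; the actual field list in `OpenAILLMOptions` may be larger or named differently:

```python
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class OpenAILLMOptions:
    """Illustrative subset of OpenAI chat-completion parameters."""

    temperature: Optional[float] = None
    top_p: Optional[float] = None
    max_tokens: Optional[int] = None
    frequency_penalty: Optional[float] = None
    presence_penalty: Optional[float] = None

    def as_kwargs(self) -> dict:
        # Only pass explicitly set options through to the API call.
        return {k: v for k, v in asdict(self).items() if v is not None}


# config.py could inject these kwargs only when the LLM binding
# is "openai" or "azure_openai".
opts = OpenAILLMOptions(temperature=0.2, max_tokens=1024)
print(opts.as_kwargs())  # {'temperature': 0.2, 'max_tokens': 1024}
```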
- Implement OLLAMA_LLM_TEMPERATURE env var
- Fall back to the global TEMPERATURE setting if unset
- Remove redundant OllamaLLMOptions logic
- Update env.example with new setting
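A small sketch of the fallback rule, assuming a hypothetical helper name and default value:

```python
import os


def get_ollama_temperature(default: float = 1.0) -> float:
    # OLLAMA_LLM_TEMPERATURE wins; otherwise fall back to the global
    # TEMPERATURE setting; otherwise use the supplied default.
    raw = os.environ.get("OLLAMA_LLM_TEMPERATURE") or os.environ.get("TEMPERATURE")
    return float(raw) if raw else default
```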
- The `initialize_storages` method must be explicitly called after LightRAG creation.
The `finalize_storages` method should be called before the LightRAG instance is destroyed (usage sketched below).
- Added explicit data migration check
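A minimal usage sketch of the lifecycle, assuming the constructor arguments (LLM function, embedding function, and so on) are filled in to match your deployment:

```python
import asyncio

from lightrag import LightRAG


async def main() -> None:
    # Constructor arguments (LLM function, embedding function, working_dir, ...)
    # are omitted here; supply whatever your deployment needs.
    rag = LightRAG(working_dir="./rag_storage")

    # Must be called explicitly after the instance is created.
    await rag.initialize_storages()
    try:
        ...  # insert documents, run queries, etc.
    finally:
        # Release storage resources before the instance is discarded.
        await rag.finalize_storages()


asyncio.run(main())
```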
- Add file_path sorting support to all database backends (JSON, Redis, PostgreSQL, MongoDB)
- Implement smart column header switching between "ID" and "File Name" based on display mode
- Add automatic sort field switching when toggling between ID and file name display
- Create composite indexes for workspace+file_path in PostgreSQL and MongoDB for better query performance
- Update frontend to maintain sort state when switching display modes
- Add internationalization support for "fileName" in English and Chinese locales
This enhancement improves user experience by providing intuitive file-based sorting
while maintaining performance through optimized database indexes.
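A sketch of the composite indexes described above; the collection, table, and index names are assumptions, not necessarily the schema actually shipped:

```python
import psycopg2
from pymongo import ASCENDING, MongoClient

# MongoDB: composite index so queries filtered by workspace and sorted
# by file_path can be served from the index.
coll = MongoClient("mongodb://localhost:27017")["lightrag"]["doc_status"]
coll.create_index([("workspace", ASCENDING), ("file_path", ASCENDING)])

# PostgreSQL: equivalent composite index (names are illustrative).
conn = psycopg2.connect("dbname=lightrag")
with conn, conn.cursor() as cur:
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_doc_status_workspace_file_path "
        "ON lightrag_doc_status (workspace, file_path)"
    )
```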
- Add pagination support to BaseDocStatusStorage interface and all implementations (PostgreSQL, MongoDB, Redis, JSON)
- Implement RESTful API endpoints for paginated document queries and status counts
- Create reusable pagination UI components with internationalization support
- Optimize performance with database-level pagination and efficient in-memory processing
- Maintain backward compatibility while adding configurable page sizes (10-200 items)
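A sketch of what the in-memory pagination path (JSON backend) might look like; the dataclass and function names are illustrative, and the database backends would instead push the equivalent LIMIT/OFFSET or skip/limit down to the server:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class PaginatedDocs:
    documents: list[dict[str, Any]]
    total: int
    page: int
    page_size: int


def paginate_in_memory(
    docs: list[dict[str, Any]], page: int = 1, page_size: int = 50
) -> PaginatedDocs:
    # Clamp to the configurable 10-200 item range mentioned above.
    page_size = max(10, min(page_size, 200))
    start = (page - 1) * page_size
    return PaginatedDocs(
        documents=docs[start:start + page_size],
        total=len(docs),
        page=page,
        page_size=page_size,
    )
```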
- Add metadata field to doc_status storage with Unix timestamps for processing start/end times
- Update frontend API types: error -> error_msg, add track_id and metadata support
- Add getTrackStatus API method for document tracking functionality
- Fix frontend DocumentManager to use error_msg field for proper error display
- Ensure full compatibility between backend metadata changes and frontend UI
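An illustration of what a doc_status record might look like after the change; only `error_msg`, `track_id`, `metadata`, and the Unix-timestamp start/end times come from the description above, and the remaining keys and values are assumptions:

```python
import time

doc_status_record = {
    "track_id": "upload_20240101_abc123",  # hypothetical value
    "status": "processed",                 # assumed status label
    "error_msg": None,                     # replaces the old 'error' field
    "metadata": {
        # Unix timestamps for the processing window
        "processing_start_time": int(time.time()) - 42,
        "processing_end_time": int(time.time()),
    },
}
```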