127 Commits

Author SHA1 Message Date
yangdx
d25386ff1b refactor: simplify storage configuration handling while maintaining the same functionality 2025-02-13 04:04:51 +08:00
yangdx
9a77d91023 Add LLM response cache to registered RagServer components 2025-02-13 01:30:21 +08:00
yangdx
4c39cf399d refactor: move database connection pool initialization to lifespan of FastAPI
- Add proper database connection lifecycle management
- Add connection pool cleanup in FastAPI lifespan
2025-02-13 01:11:09 +08:00
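The lifespan refactor described in this commit can be sketched as follows. This is a minimal illustration of the FastAPI-lifespan pattern, not LightRAG's actual code; `FakePool` is a hypothetical stand-in for a real database connection pool.

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical pool; the real code would use an asyncpg/motor/etc. pool.
class FakePool:
    def __init__(self):
        self.open = True

    async def close(self):
        self.open = False

state = {}

@asynccontextmanager
async def lifespan(app=None):
    # Startup: create the connection pool once, before any request is served.
    state["pool"] = FakePool()
    try:
        yield
    finally:
        # Shutdown: cleanup runs even if the server exits with an error.
        await state["pool"].close()

async def demo():
    async with lifespan():
        assert state["pool"].open  # pool is live while the app runs
    return state["pool"].open      # False after shutdown

closed = not asyncio.run(demo())
```

In a real FastAPI app the same context manager is passed as `FastAPI(lifespan=lifespan)`, which ties pool creation and cleanup to server startup and shutdown.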
yangdx
aaddc08336 Add storage info to splash screen 2025-02-11 14:57:37 +08:00
yangdx
56c1792767 feat: optimize storage configuration and environment variables
* add storage type compatibility validation table
* add environment variables check for storage
* modify storage init to get settings from config.ini and env
2025-02-11 00:55:52 +08:00
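The env-plus-config.ini precedence described in this commit might look like the sketch below. The section and key names (`storage`, `kv_storage`, `KV_STORAGE`) are hypothetical; only the lookup order (environment variable, then config.ini, then a built-in default) is the point.

```python
import os
from configparser import ConfigParser

def get_storage_setting(key: str, config: ConfigParser, default: str) -> str:
    """Resolve a storage setting: env var first, then config.ini, then default."""
    env_value = os.environ.get(key.upper())
    if env_value is not None:
        return env_value
    if config.has_option("storage", key):
        return config.get("storage", key)
    return default

# Example: config.ini provides a value, no env override is set.
config = ConfigParser()
config.read_string("[storage]\nkv_storage = JsonKVStorage\n")
os.environ.pop("KV_STORAGE", None)
value = get_storage_setting("kv_storage", config, "JsonKVStorage")
```

Setting `KV_STORAGE` in the environment would then override the config.ini value without editing the file.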
ArnoChen
09195182c2 enable MongoGraphStorage based on config
mongo graph
2025-02-10 01:07:41 +08:00
ArnoChen
294d094076 use "database" instead of "LightRAG" for MongoDB config 2025-02-10 01:00:02 +08:00
ArnoChen
c858e4a4e6 add qdrant backend 2025-02-10 00:57:28 +08:00
ArnoChen
f974bf39bb format
2025-02-08 13:53:00 +08:00
ArnoChen
88d691deb9 add namespace prefix to storage namespaces 2025-02-08 13:53:00 +08:00
yangdx
070501fdaa Merge branch 'add-keyword-extraction-param-for-llm' into fix-mutable-default-param 2025-02-06 16:22:30 +08:00
yangdx
eb9883d8da fix: add keyword_extraction param support for LLM func of API Server 2025-02-06 15:56:18 +08:00
yangdx
eb5f57e989 fix: Fix potential mutable default parameter issue 2025-02-06 14:46:07 +08:00
yangdx
fd9b3b2658 Fix splash screen SSL line connector type. 2025-02-06 01:21:42 +08:00
yangdx
1a61d9ee7f Fix linting 2025-02-05 22:29:07 +08:00
yangdx
f703334ce4 Split the Ollama API implementation into a separate file 2025-02-05 22:15:14 +08:00
yangdx
f77faf8023 Fix linting 2025-02-05 12:36:52 +08:00
yangdx
4663dcfbab Merge branch 'main' into handle-stream-cancel-error 2025-02-05 12:27:05 +08:00
yangdx
f1ea7f7415 Update error response format in streaming API to a normal message, so users can see what's going on. 2025-02-05 11:07:31 +08:00
yangdx
24effb127d Improve error handling and response consistency in streaming endpoints
• Add error message forwarding to client
• Handle stream cancellations gracefully
• Add logging for stream errors
• Ensure clean stream termination
• Add try-catch in OpenAI streaming
2025-02-05 10:44:48 +08:00
yangdx
ff40e61fad Fix linting 2025-02-05 09:47:39 +08:00
yangdx
69f200faf2 feat: improve error handling for streaming responses
• Add CancelledError handling for streams
• Send error details to client in JSON
• Add error status codes and messages
• Always send final completion marker
• Refactor stream generator error handling
2025-02-05 09:46:56 +08:00
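The stream-generator error handling described in this commit can be sketched like this. The payload shape (`response` / `error` / `done` keys) is illustrative, not LightRAG's exact wire format: errors are forwarded to the client as ordinary messages, cancellations propagate, and a final completion marker is sent on both the success and error paths.

```python
import asyncio
import json

async def stream_generator(chunks):
    """Wrap an async chunk source, forwarding errors as JSON messages."""
    try:
        async for chunk in chunks:
            yield json.dumps({"response": chunk}) + "\n"
        yield json.dumps({"done": True}) + "\n"
    except asyncio.CancelledError:
        raise  # client disconnected; propagate so the server can clean up
    except Exception as exc:
        # Forward error details to the client instead of dropping the stream.
        yield json.dumps({"error": str(exc), "status": 500}) + "\n"
        yield json.dumps({"done": True}) + "\n"

async def failing_source():
    yield "hello"
    raise RuntimeError("LLM backend unavailable")

async def collect():
    return [line async for line in stream_generator(failing_source())]

lines = asyncio.run(collect())
```

Here the client receives one normal chunk, one error message, and the final marker, rather than an abruptly closed connection.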
ArnoChen
eb77af8e7d Reapply "Integrated the GraphML Visualizer as an optional component of LightRAG"
This reverts commit b2bc7770fd9d1f55dfec8e06b646bda3ecd609ea.
2025-02-05 02:33:26 +08:00
zrguo
b2bc7770fd
Revert "Integrated the GraphML Visualizer as an optional component of LightRAG" 2025-02-05 01:30:57 +08:00
Saifeddine ALOUI
9a30dc7b04 Integrated the GraphML visualizer as part of LightRAG and made it a component that can be installed using the [tools] option 2025-02-03 22:51:46 +01:00
Saifeddine ALOUI
797b5fa463
Merge branch 'HKUDS:main' into main 2025-02-03 22:05:59 +01:00
Saifeddine ALOUI
da6864d9c6
Merge branch 'HKUDS:main' into main 2025-02-03 11:24:08 +01:00
yangdx
5cf875755a Update API endpoint documentation to clarify Ollama server compatibility
• Add Ollama server doc for /api/tags
• Update /api/generate endpoint docs
• Update /api/chat endpoint docs
2025-02-03 13:07:08 +08:00
yangdx
4ab02a878f Fix linting 2025-02-03 12:39:52 +08:00
yangdx
a8f7b7e2b7 Add "/bypass" mode to skip context retrieval and directly use LLM
• Added SearchMode.bypass enum value
• Added /bypass prefix handler
• Skip RAG when in bypass mode
• Pass conversation history to LLM
• Apply bypass mode for both stream/non-stream
2025-02-03 11:49:17 +08:00
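The `/bypass` prefix handling described in this commit might be parsed roughly as below. The parser and the non-bypass mode names are illustrative; the point is that a leading `/bypass ` strips the prefix and routes the query straight to the LLM, skipping retrieval.

```python
from enum import Enum

class SearchMode(str, Enum):
    hybrid = "hybrid"   # default RAG mode (other modes omitted here)
    bypass = "bypass"   # skip context retrieval, query the LLM directly

def parse_query_mode(query: str) -> tuple[str, SearchMode]:
    """Strip a leading '/bypass ' prefix and return (cleaned query, mode)."""
    prefix = "/bypass "
    if query.startswith(prefix):
        return query[len(prefix):], SearchMode.bypass
    return query, SearchMode.hybrid

cleaned, mode = parse_query_mode("/bypass What's the weather today?")
```

Downstream, a `SearchMode.bypass` result would skip the RAG pipeline and pass the cleaned query (plus conversation history) directly to the LLM, for both streaming and non-streaming endpoints.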
Saifeddine ALOUI
c65dcff991 Fixed a typo 2025-02-02 09:47:05 +01:00
yangdx
7ea1856699 Add comment to clarify LLM cache setting for entity extraction 2025-02-02 07:29:01 +08:00
yangdx
6e1b5d6ce6 Merge branch 'main' into fix-concurrent-problem 2025-02-02 04:36:52 +08:00
yangdx
ecf48a5be5 Add embedding cache config and disable LLM cache for entity extraction for API Server 2025-02-02 04:27:21 +08:00
yangdx
95edf8a51e Fix linting 2025-02-01 15:22:40 +08:00
Saifeddine ALOUI
3a40772d30 Simplified file loading 2025-02-01 01:19:32 +01:00
Saifeddine ALOUI
e09cb85f37 fixed linting as well as file path 2025-02-01 01:15:06 +01:00
Saifeddine ALOUI
ef35f9a4e4 Introduced docling instead of other tools for loading files 2025-02-01 00:56:43 +01:00
zrguo
e59cb7493c fixed linting 2025-01-31 23:35:42 +08:00
Saifeddine ALOUI
6889606a48
Update lightrag_server.py 2025-01-31 11:19:12 +01:00
Saifeddine ALOUI
381f7deec6 linting 2025-01-30 23:29:21 +01:00
Saifeddine ALOUI
219cbab1e3 Added progress when scanning files and fixed some bugs in the API 2025-01-30 23:27:43 +01:00
yangdx
5c7b2d7c9f Merge from main 2025-01-29 22:13:01 +08:00
yangdx
7ff8c7b9d8 Add timeout parameter to OpenAI-alike LLM model configuration 2025-01-29 21:35:46 +08:00
yangdx
7aedc08caf Add RAG configuration options and enhance parameter configurability
- Add top-k and cosine-threshold params for API server
- Update .env and CLI params handling with new parameters
- Improve splash screen display
- Update bash and storage classes to read new parameters from .env file
2025-01-29 21:34:34 +08:00
Saifeddine ALOUI
b5d09725f5 linting 2025-01-28 18:20:45 +01:00
Saifeddine ALOUI
4ab1deaf25
Fixed --simulated-model-name argument 2025-01-28 15:32:41 +01:00
Saifeddine ALOUI
d493830b10
Update lightrag_server.py 2025-01-28 15:30:36 +01:00
Saifeddine ALOUI
9fdea743e4
Update lightrag_server.py 2025-01-28 15:03:26 +01:00
zrguo
7dd6bd4ceb
Merge branch 'main' into main 2025-01-27 23:16:06 +08:00