208 Commits

Author SHA1 Message Date
yangdx
24effb127d Improve error handling and response consistency in streaming endpoints
• Add error message forwarding to client
• Handle stream cancellations gracefully
• Add logging for stream errors
• Ensure clean stream termination
• Add try-catch in OpenAI streaming
2025-02-05 10:44:48 +08:00
yangdx
ff40e61fad Fix linting 2025-02-05 09:47:39 +08:00
yangdx
69f200faf2 feat: improve error handling for streaming responses
• Add CancelledError handling for streams
• Send error details to client in JSON
• Add error status codes and messages
• Always send final completion marker
• Refactor stream generator error handling
2025-02-05 09:46:56 +08:00
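The two streaming commits above describe a common pattern for robust NDJSON stream generators: forward errors to the client as JSON, handle cancellations gracefully, log, and always end with a completion marker. A minimal sketch of that pattern — the function name and JSON field names here are illustrative assumptions, not LightRAG's actual code:

```python
import asyncio
import json
import logging

logger = logging.getLogger(__name__)

async def stream_generator(chunks):
    """Yield NDJSON lines; forward errors to the client and always
    finish with a completion marker (hypothetical sketch)."""
    try:
        async for piece in chunks:
            yield json.dumps({"response": piece, "done": False}) + "\n"
    except asyncio.CancelledError:
        # Client disconnected mid-stream: log and stop quietly.
        logger.info("stream cancelled by client")
        raise
    except Exception as exc:
        # Send error details to the client as JSON instead of
        # silently dropping the connection.
        logger.error("stream error: %s", exc)
        yield json.dumps(
            {"error": {"status_code": 500, "message": str(exc)}}
        ) + "\n"
    # Always send the final completion marker.
    yield json.dumps({"response": "", "done": True}) + "\n"
```

On cancellation the generator re-raises so the framework can tear the connection down; on any other error the client still receives a parseable error line plus the terminating marker.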
ArnoChen
eb77af8e7d Reapply "Integrated the GraphML Visualizer as an optional component of LightRAG"
This reverts commit b2bc7770fd9d1f55dfec8e06b646bda3ecd609ea.
2025-02-05 02:33:26 +08:00
zrguo
b2bc7770fd Revert "Integrated the GraphML Visualizer as an optional component of LightRAG" 2025-02-05 01:30:57 +08:00
Saifeddine ALOUI
9a30dc7b04 Integrated the graphml visualizer as part of lightrag and made it a component that can be installed using [tools] option 2025-02-03 22:51:46 +01:00
Saifeddine ALOUI
797b5fa463 Merge branch 'HKUDS:main' into main 2025-02-03 22:05:59 +01:00
Saifeddine ALOUI
da6864d9c6 Merge branch 'HKUDS:main' into main 2025-02-03 11:24:08 +01:00
yangdx
5cf875755a Update API endpoint documentation to clarify Ollama server compatibility
• Add Ollama server doc for /api/tags
• Update /api/generate endpoint docs
• Update /api/chat endpoint docs
2025-02-03 13:07:08 +08:00
yangdx
4ab02a878f Fix linting 2025-02-03 12:39:52 +08:00
yangdx
a8f7b7e2b7 Add "/bypass" mode to skip context retrieval and directly use LLM
• Added SearchMode.bypass enum value
• Added /bypass prefix handler
• Skip RAG when in bypass mode
• Pass conversation history to LLM
• Apply bypass mode for both stream/non-stream
2025-02-03 11:49:17 +08:00
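The bullets above outline a query-prefix router: a `/bypass` prefix selects a mode that skips retrieval and sends the query (plus conversation history) straight to the LLM. A hedged sketch of such a prefix handler — the enum members and parsing are assumptions, not the exact LightRAG implementation:

```python
from enum import Enum

class SearchMode(str, Enum):
    naive = "naive"
    local = "local"
    hybrid = "hybrid"
    bypass = "bypass"  # skip RAG, send the query straight to the LLM

def parse_query_mode(query, default=SearchMode.hybrid):
    """Strip a leading '/mode ' prefix and return (cleaned_query, mode)."""
    if query.startswith("/"):
        prefix, _, rest = query.partition(" ")
        try:
            return rest.strip(), SearchMode(prefix[1:])
        except ValueError:
            pass  # unknown prefix: treat it as part of the query
    return query, default

query, mode = parse_query_mode("/bypass what is LightRAG?")
if mode == SearchMode.bypass:
    # Skip context retrieval; call the LLM directly with history.
    pass
```

In bypass mode the server would then invoke the LLM directly for both streaming and non-streaming responses, as the commit describes.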
Saifeddine ALOUI
c65dcff991 Fixed a typo 2025-02-02 09:47:05 +01:00
yangdx
7ea1856699 Add comment to clarify LLM cache setting for entity extraction 2025-02-02 07:29:01 +08:00
yangdx
6e1b5d6ce6 Merge branch 'main' into fix-concurrent-problem 2025-02-02 04:36:52 +08:00
yangdx
ecf48a5be5 Add embedding cache config and disable LLM cache for entity extraction for API Server 2025-02-02 04:27:21 +08:00
yangdx
95edf8a51e Fix linting 2025-02-01 15:22:40 +08:00
Saifeddine ALOUI
3a40772d30 Simplified file loading 2025-02-01 01:19:32 +01:00
Saifeddine ALOUI
e09cb85f37 fixed linting as well as file path 2025-02-01 01:15:06 +01:00
Saifeddine ALOUI
ef35f9a4e4 Introduced docling instead of other tools for loading files 2025-02-01 00:56:43 +01:00
zrguo
e59cb7493c fixed linting 2025-01-31 23:35:42 +08:00
Saifeddine ALOUI
6889606a48 Update lightrag_server.py 2025-01-31 11:19:12 +01:00
Saifeddine ALOUI
381f7deec6 linting 2025-01-30 23:29:21 +01:00
Saifeddine ALOUI
219cbab1e3 Added progress when scanning files and fixed some bugs in the API 2025-01-30 23:27:43 +01:00
yangdx
5c7b2d7c9f Merge from main 2025-01-29 22:13:01 +08:00
yangdx
7ff8c7b9d8 Add timeout parameter to OpenAI-compatible LLM model configuration 2025-01-29 21:35:46 +08:00
yangdx
7aedc08caf Add RAG configuration options and enhance parameter configurability
- Add top-k and cosine-threshold params for API server
- Update .env and CLI parameter handling with new parameters
- Improve splash screen display
- Update bash and storage classes to read new parameters from .env file
2025-01-29 21:34:34 +08:00
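Commits like the one above (and e4e42a8ec4 below, which fetches chunk sizes the same way) read tunables from a .env file with CLI fallbacks. A minimal sketch of that env-with-default pattern; the variable names and defaults here are illustrative assumptions:

```python
import os

def get_env_value(name, default, cast=str):
    """Read a setting from the environment, falling back to a default
    when the variable is unset or empty (hypothetical helper)."""
    raw = os.environ.get(name)
    if raw is None or raw.strip() == "":
        return default
    return cast(raw)

# Illustrative knobs of the kind the commit describes.
TOP_K = get_env_value("TOP_K", 60, int)
COSINE_THRESHOLD = get_env_value("COSINE_THRESHOLD", 0.2, float)
```

With a loader such as python-dotenv populating `os.environ` first, the same helper serves both .env files and real environment variables.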
Saifeddine ALOUI
b5d09725f5 linting 2025-01-28 18:20:45 +01:00
Saifeddine ALOUI
4ab1deaf25 Fixed --simulated-model-name argument 2025-01-28 15:32:41 +01:00
Saifeddine ALOUI
d493830b10 Update lightrag_server.py 2025-01-28 15:30:36 +01:00
Saifeddine ALOUI
9fdea743e4 Update lightrag_server.py 2025-01-28 15:03:26 +01:00
zrguo
7dd6bd4ceb Merge branch 'main' into main 2025-01-27 23:16:06 +08:00
zrguo
ba40a8de8b Merge pull request #659 from danielaskdd/cvs_robustness
Enhance robustness of CSV processing, fix potential CSV parsing issues
2025-01-27 23:12:43 +08:00
Saifeddine ALOUI
0721ee303c Fixed files list 2025-01-27 12:02:22 +01:00
yangdx
c8d384f15f Fix linting 2025-01-27 16:12:30 +08:00
yangdx
6d61b37c03 Add type ignore for pptx import 2025-01-27 16:07:03 +08:00
Saifeddine ALOUI
f307ed43f5 fixed linting 2025-01-27 02:10:24 +01:00
Saifeddine ALOUI
a68aebb124 translated docstrings to english and enhanced the webui 2025-01-27 02:07:06 +01:00
zrguo
bd2b3f334e Merge pull request #650 from danielaskdd/Add-history-support-for-ollama-api
Add history support for ollama api
2025-01-27 06:34:10 +08:00
zrguo
28b139d074 Merge pull request #647 from 18277486571HYB/redis_impl
feat: Added webui management, including file upload, text upload, Q&…
2025-01-27 06:32:15 +08:00
yangdx
03604d3186 Fix linting 2025-01-27 02:46:21 +08:00
yangdx
01288debd1 Ensure splash output flush to system log 2025-01-27 02:45:44 +08:00
yangdx
f045fc3d59 Update API endpoint documentation 2025-01-26 11:36:24 +08:00
hyb
cd5b1dc98f fix: light_server.py fix 2025-01-26 09:13:11 +08:00
yangdx
f8d26cb193 Update default history turns to 3 2025-01-26 05:19:51 +08:00
yangdx
25a58a6545 Fix linting 2025-01-26 05:10:57 +08:00
yangdx
9f80c1904f Refactor command-line argument handling logic, add more RAG config to splash screen 2025-01-26 05:09:42 +08:00
yangdx
e4e42a8ec4 Fetch chunk size and chunk overlap size from .env file 2025-01-26 02:31:16 +08:00
yangdx
2719e07107 Convert OllamaMessage to dict in conversation history 2025-01-25 22:33:09 +08:00
yangdx
fb9df50ada Add conversation history support to chat API
- Added HISTORY_TURNS env variable
- Updated chat request data creation
- Modified server to handle history
- Added history to test cases
2025-01-25 22:14:40 +08:00
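The bullets above add a HISTORY_TURNS-style cap on conversation history (the default was later set to 3 in commit f8d26cb193). A hedged sketch of how such a cap might work; the function name and message shape are illustrative, not LightRAG's actual code:

```python
import os

def trim_history(messages, turns=None):
    """Keep only the last N conversation turns, where a turn is one
    user message plus one assistant reply (hypothetical helper)."""
    if turns is None:
        turns = int(os.getenv("HISTORY_TURNS", "3"))
    # Each turn is two messages, so keep the tail of the list.
    return messages[-2 * turns:] if turns > 0 else []
```

The trimmed list would then be prepended to the chat request so the LLM sees recent context without unbounded prompt growth.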
hyb
3dba406644 feat: Added webui management
• File upload
• Text upload
• Q&A query
• Graph database management (view tags, view the knowledge graph by tag)
• System status (health, data storage status, model status, path)
• Served at /webui/index.html
2025-01-25 18:38:46 +08:00