940 Commits

Author SHA1 Message Date
yangdx
488028b9e2 Remove separate requirements.txt and update Dockerfile to use pip install 2025-07-18 01:58:46 +08:00
yangdx
99527027de feat: change default query mode from hybrid to mix
- Update default mode for Ollama chat endpoint
- Update default mode for query endpoint of LightRAG
2025-07-17 19:21:15 +08:00
yangdx
e828539b24 Update README 2025-07-17 19:05:34 +08:00
yangdx
b321afefaa Bump core version to 1.4.3 and api version to 0186 2025-07-17 16:58:57 +08:00
yangdx
f3c0dab7ce Bump core version to 1.4.2 and api version to 0185 2025-07-17 12:26:10 +08:00
yangdx
910c6973f3 Limit file deletion to current directory only after document cleaning 2025-07-16 20:35:24 +08:00
yangdx
2bf0d397ed Update webui assets 2025-07-16 10:18:51 +08:00
yangdx
e4f62de727 Bump api version to 0184 2025-07-16 04:57:46 +08:00
yangdx
500e940f75 Remove max token summary display from splash screen 2025-07-16 04:55:32 +08:00
yangdx
0adb5f2595 Update webui assets 2025-07-16 01:39:48 +08:00
Daniel.y
b44c8d46a5
Merge pull request #1782 from HKUDS/rerank
Refactor the token control system
2025-07-16 00:23:25 +08:00
yangdx
5f7cb437e8 Centralize query parameters into LightRAG class
This commit refactors query parameter management by consolidating settings like `top_k`, token limits, and thresholds into the `LightRAG` class, and consistently sourcing parameters from a single location.
2025-07-15 23:56:49 +08:00
yangdx
089346f8df Bump api version to 0183 2025-07-15 19:52:50 +08:00
yangdx
93b25a65d5 Update webui assets 2025-07-15 18:10:00 +08:00
yangdx
661a41f9eb Update webui assets 2025-07-15 17:25:39 +08:00
yangdx
47341d3a71 Merge branch 'main' into rerank 2025-07-15 16:12:33 +08:00
yangdx
e8e1f6ab56 feat: centralize environment variable defaults in constants.py 2025-07-15 16:11:50 +08:00
Daniel.y
6d1260aafa
Merge pull request #1766 from HKUDS/fix-memgraph-max-nodes-issue
Fix Memgraph get_knowledge_graph issues
2025-07-15 16:07:04 +08:00
yangdx
ccc2a20071 feat: remove deprecated MAX_TOKEN_SUMMARY parameter to prevent LLM output truncation
- Remove MAX_TOKEN_SUMMARY parameter and related configurations
- Eliminate forced token-based truncation in entity/relationship descriptions
- Switch to fragment-count based summarization logic using FORCE_LLM_SUMMARY_ON_MERGE
- Update FORCE_LLM_SUMMARY_ON_MERGE default from 6 to 4 for better summarization
- Clean up documentation, environment examples, and API display code
- Preserve backward compatibility via graceful parameter removal

This change resolves issues where LLMs were forcibly truncating entity relationship
descriptions mid-sentence, leading to incomplete and potentially inaccurate knowledge
graph content. The new approach allows LLMs to generate complete descriptions while
still providing summarization when multiple fragments need to be merged.

Breaking Change: None - parameter removal is backward compatible
Fixes: Entity relationship description truncation issues
2025-07-15 12:26:33 +08:00
zrguo
7c882313bb remove chunk_rerank_top_k 2025-07-15 11:52:34 +08:00
yangdx
9afe578fe7 Update webui assets 2025-07-14 17:56:51 +08:00
zrguo
c2da2fbe12 build 2025-07-14 17:19:28 +08:00
zrguo
c9cbd2d3e0 Merge branch 'main' into rerank 2025-07-14 16:24:29 +08:00
zrguo
ef2115d437 Update token limit 2025-07-14 15:53:48 +08:00
yangdx
b03bb48e24 feat: Refine summary logic and add dedicated Ollama num_ctx config
- Refactor the trigger condition for LLM-based summarization of entities and relations. Instead of relying on character length, the summary is now triggered when the number of merged description fragments exceeds a configured threshold. This provides a more robust and logical condition for consolidation.
- Introduce the `OLLAMA_NUM_CTX` environment variable to explicitly configure the context window size (`num_ctx`) for Ollama models. This decouples the model's context length from the `MAX_TOKENS` parameter, which is now specifically used to limit input for summary generation, making the configuration clearer and more flexible.
- Updated `README` files, `env.example`, and default values to reflect these changes.
2025-07-14 01:55:04 +08:00
yangdx
e8b3dfcf90 Bump api version to 0182 2025-07-14 00:29:48 +08:00
yangdx
e4aef36977 Update webui assets 2025-07-13 01:36:25 +08:00
yangdx
efc359c411 Update webui assets 2025-07-13 00:57:41 +08:00
yangdx
0e3aaa318f Feat: Add keyed lock cleanup and status monitoring 2025-07-13 00:09:00 +08:00
yangdx
943ead8b1d Bump api version to 0181 2025-07-12 05:59:13 +08:00
DavIvek
a0c4d88b0d wip fix Memgraph get_knowledge_graph issues 2025-07-10 16:56:44 +02:00
yangdx
2056c3c809 Increase default CHUNK_TOP_K from 5 to 15 2025-07-09 04:41:51 +08:00
zrguo
d4651d59c1 Add rerank to server 2025-07-08 21:44:20 +08:00
yangdx
a1bbf367ad Update webui assets 2025-07-08 00:22:14 +08:00
yangdx
ef79088f60 Move max_graph_nodes to global config 2025-07-07 21:53:57 +08:00
yangdx
cb14ce6ff3 Bump api version to 0180 2025-07-07 18:14:31 +08:00
yangdx
f417118e27 Center banner text dynamically 2025-07-07 17:28:59 +08:00
yangdx
f86ae6df0a Update api server README 2025-07-07 17:16:14 +08:00
yangdx
7916b3d18f Update webui assets 2025-07-07 03:45:19 +08:00
yangdx
1d24e8ca3c Bump api version to 0179 2025-07-07 01:40:26 +08:00
yangdx
db22cad2c8 feat: add workspace and MAX_GRAPH_NODES to /health endpoint and webui 2025-07-07 01:39:48 +08:00
yangdx
253833475f Add workspace info to splash screen display 2025-07-07 01:26:27 +08:00
yangdx
033098c1bc Feat: Add WORKSPACE support to all storage types 2025-07-07 00:57:21 +08:00
yangdx
98150e80b8 Improved empty/whitespace file handling
- Better detection of whitespace-only files
- Changed error to warning for empty chunks
2025-07-05 23:16:39 +08:00
xuewei
49cb51b5dc Fix: no content extracted when parsing PDF files 2025-07-05 13:47:47 +08:00
yangdx
04d793abbd Update logger message 2025-07-03 22:15:32 +08:00
yangdx
67f51597c2 Bump api version to 0178 2025-07-03 21:37:47 +08:00
yangdx
05231233f1 Feat: Check request_pending status after document deletion
- Add double-check for pipeline status to prevent race conditions
- Implement automatic processing of pending indexing requests after deletion
2025-07-03 21:36:35 +08:00
yangdx
3be8727e3e Bump api version to 0177 2025-07-02 16:35:22 +08:00
SLKun
4e88ee3662 update ollama compatible api 2025-06-30 10:41:35 +08:00