124 Commits

Author SHA1 Message Date
Yannick Stephan
50c7f26262 cleanup code 2025-02-08 23:58:15 +01:00
Yannick Stephan
2929d1fc39 fixed pipe 2025-02-08 23:52:27 +01:00
Yannick Stephan
d2db250ee7 added type to be more clear 2025-02-08 23:25:42 +01:00
ArnoChen
f5bf6a4af8 use namespace as neo4j database name
format

fix
2025-02-08 20:06:18 +08:00
ArnoChen
3f845e9e53 better handling of namespace 2025-02-08 16:05:59 +08:00
ArnoChen
b2b8cf0aa4 remove unused json_doc_status_storage 2025-02-08 13:56:12 +08:00
ArnoChen
88d691deb9 add namespace prefix to storage namespaces 2025-02-08 13:53:00 +08:00
ArnoChen
57b015bee1 fix doc_key filtering logic to handle dict status 2025-02-05 03:22:22 +08:00
zrguo
0c8a2bface
Merge pull request #701 from RayWang1991/main
fix DocStatus issue
2025-02-04 00:16:34 +08:00
ruirui
e825b079bc fix status error 2025-02-03 23:45:21 +08:00
ultrageopro
0284469fd4
doc: add information about log_dir parameter 2025-02-03 11:25:09 +03:00
ultrageopro
749003f380
fix: path for windows 2025-02-02 14:59:46 +03:00
ultrageopro
ba9c8cd734
fix: default log dir 2025-02-02 14:06:31 +03:00
ultrageopro
35c4115441
feat: custom log dir 2025-02-02 14:04:24 +03:00
zrguo
c07b5522fe
Merge pull request #695 from ShanGor/main
Fix the bug from upstream main that used doc['status'] and improve Apache AGE performance
2025-02-02 18:27:11 +08:00
Samuel Chan
02ac96ff8e - Fix the bug from upstream main that used doc['status']
- Improve the performance of Apache AGE.
- Revise the README.md for Apache AGE indexing.
2025-02-02 18:20:32 +08:00
yangdx
0a693dbfda Fix linting 2025-02-02 04:27:55 +08:00
yangdx
fdc9017ded Set embedding_func in all llm_response_cache 2025-02-02 03:14:07 +08:00
yangdx
b87703aea6 Add embedding_func to llm_response_cache 2025-02-01 22:19:16 +08:00
Gurjot Singh
8a624e198a Add faiss integration for storage 2025-01-31 19:00:36 +05:30
yangdx
e29682eef8 Allow configuration of LLM parameters through environment variables 2025-01-29 23:39:47 +08:00
ranfysvalle02
4c349c208d +MDB KG 2025-01-29 07:31:34 -05:00
Saifeddine ALOUI
56e9c9f4d5
Moved the storages to kg folder 2025-01-27 09:59:26 +01:00
Saifeddine ALOUI
6d95f58f34
Update lightrag.py 2025-01-27 09:34:00 +01:00
MdNazishArmanShorthillsAI
f0b2024667 Query with your custom prompts 2025-01-27 10:32:22 +05:30
hyb
3dba406644 feat: Added web UI management, including file upload, text upload, Q&A queries, graph database management (view tags and view the knowledge graph by tag), and system status (health, data storage status, model status, paths); served at /webui/index.html 2025-01-25 18:38:46 +08:00
Saifeddine ALOUI
34018cb1e0 Separated LLMs from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
zrguo
cdf967cedd
Merge pull request #631 from 18277486571HYB/redis_impl
feat: Add reading database configuration from an ini file for production environments; change how the LightRAG ainsert method obtains _add_doc_keys; previously…
2025-01-25 01:44:46 +08:00
Magic_yuan
443aab2882 Fix a bug where an exception during an update would cause the data update to hang 2025-01-24 10:15:25 +08:00
hyb
ff71952c8c feat: Add reading database configuration from an ini file for production environments; change how the LightRAG ainsert method obtains _add_doc_keys (it previously filtered to only existing documents, which prevented failed documents from being stored again); add --chunk_size and --chunk_overlap_size options for production use; add llm_binding: openai-ollama to allow using OpenAI for the LLM together with Ollama embeddings 2025-01-23 22:58:57 +08:00
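The ini-based configuration this commit mentions can be pictured with a minimal sketch using Python's standard configparser; the config.ini path, section names, and keys below (neo4j, lightrag, uri, chunk_size, chunk_overlap_size) are illustrative assumptions, not the layout the commit actually defines.

```python
# Illustrative only: reading database settings from an ini file with the
# standard-library configparser. Section and key names are hypothetical,
# not the ones introduced by this commit.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

# Fall back to defaults when the section or key is absent.
neo4j_uri = config.get("neo4j", "uri", fallback="bolt://localhost:7687")
chunk_size = config.getint("lightrag", "chunk_size", fallback=1200)
chunk_overlap_size = config.getint("lightrag", "chunk_overlap_size", fallback=100)

print(neo4j_uri, chunk_size, chunk_overlap_size)
```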
hyb
e08905b398 feat: Add Redis KV storage; add a demo test for openai+neo4j+milvus+redis; add RedisKVStorage to lightrag.py; add the aioredis dependency to requirements.txt 2025-01-22 16:42:13 +08:00
zrguo
b84aab5cd0
Merge pull request #590 from jin38324/main
Enhance Robustness of insert Method with Pipeline Processing and Caching Mechanisms
2025-01-16 14:20:08 +08:00
jin
6ae8647285 support pipeline mode 2025-01-16 12:58:15 +08:00
jin
d5ae6669ea support pipeline mode 2025-01-16 12:52:37 +08:00
jin
17a2ec2bc4
Merge branch 'HKUDS:main' into main 2025-01-16 09:59:27 +08:00
Gurjot Singh
bc79f6650e Fix linting errors 2025-01-14 22:23:14 +05:30
Gurjot Singh
ef61ffe444 Add custom function with separate keyword extraction for user's query and a separate prompt 2025-01-14 22:10:47 +05:30
jin
85331e3fa2 update Oracle support
add cache support, fix bug
2025-01-10 11:36:28 +08:00
adikalra
acde4ed173 Add custom chunking function. 2025-01-09 17:20:24 +05:30
zrguo
b93203804c
Merge branch 'main' into main 2025-01-09 15:28:57 +08:00
zrguo
92ccfa2770
Merge pull request #555 from ParisNeo/main
Restore backwards compatibility for LightRAG's ainsert method
2025-01-09 15:27:09 +08:00
童石渊
dd213c95be Add a character-only split parameter: when enabled, only character splitting is used; when disabled, any chunk that is still too large after splitting is further split by token size; update test files 2025-01-09 11:55:49 +08:00
Saifeddine ALOUI
65c1450c66 Restored backwards compatibility of ainsert by giving split_by_character a default value of None 2025-01-08 20:50:22 +01:00
Gurjot Singh
9565a4663a Fix trailing whitespace and formatting issues in lightrag.py 2025-01-09 00:39:22 +05:30
Gurjot Singh
a940251390 Implement custom chunking feature 2025-01-07 20:57:39 +05:30
童石渊
6b19401dc6 chunk split retry 2025-01-07 16:26:12 +08:00
童石渊
536d6f2283 Add a character-split feature: if the split_by_character parameter is passed to the "insert" function, the text is split on split_by_character; any resulting chunk whose token count exceeds max_token_size is then further split by token size (todo: consider handling chunks that are too short after character splitting) 2025-01-07 00:28:15 +08:00
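A minimal sketch of the splitting behavior described in this commit, assuming a naive whitespace token count in place of a real tokenizer; the split_text helper below is hypothetical and is not LightRAG's chunking API.

```python
# Illustrative sketch of the split_by_character behavior described above.
# Assumption: a whitespace "tokenizer" stands in for the real token counter.

def split_text(text: str, split_by_character: str | None, max_token_size: int = 8) -> list[str]:
    """Split on a character first; re-split any oversized chunk by token size."""
    pieces = text.split(split_by_character) if split_by_character else [text]
    chunks: list[str] = []
    for piece in pieces:
        tokens = piece.split()  # stand-in for real tokenization
        if len(tokens) <= max_token_size:
            chunks.append(piece)
        else:
            # Chunk is still too large: fall back to token-size splitting.
            for i in range(0, len(tokens), max_token_size):
                chunks.append(" ".join(tokens[i:i + max_token_size]))
    return chunks


if __name__ == "__main__":
    sample = "first section with quite a few words here|short one"
    print(split_text(sample, split_by_character="|", max_token_size=5))
```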
zrguo
990b684a85 Update lightrag.py 2025-01-06 15:27:31 +08:00
Samuel Chan
6ae27d8f06 Some enhancements:
- Enable the llm_cache storage to support get_by_mode_and_id, improving performance when using a real KV server.
- Provide an option for developers to cache LLM responses while extracting entities from a document. This addresses the pain point that when the process fails partway, the already-processed chunks must be sent to the LLM again, wasting time and money. With the new option enabled (it is disabled by default), those results are cached, which can significantly save time and money, especially for beginners.
2025-01-06 12:50:05 +08:00
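get_by_mode_and_id is named in the commit message itself; the dict-backed sketch below only illustrates what such a lookup on a KV storage might look like, with class, method, and field names that are assumptions rather than LightRAG's implementation.

```python
# Minimal sketch of a KV storage exposing get_by_mode_and_id, as named in the
# commit above. The in-memory class and cache layout are illustrative only.
import asyncio


class InMemoryKVStorage:
    def __init__(self) -> None:
        # Assumed cache layout: {mode: {id: value}}
        self._data: dict[str, dict[str, dict]] = {}

    async def upsert(self, mode: str, id_: str, value: dict) -> None:
        self._data.setdefault(mode, {})[id_] = value

    async def get_by_mode_and_id(self, mode: str, id_: str) -> dict | None:
        # Return the cached LLM response for this mode/id, or None on a miss,
        # so a failed extraction run can resume without re-calling the LLM.
        return self._data.get(mode, {}).get(id_)


async def main() -> None:
    cache = InMemoryKVStorage()
    await cache.upsert("entity_extraction", "chunk-001", {"response": "..."})
    print(await cache.get_by_mode_and_id("entity_extraction", "chunk-001"))


asyncio.run(main())
```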
Samuel Chan
60e8a355f0
Merge branch 'HKUDS:main' into main 2025-01-03 21:18:17 +08:00