200 Commits

Author SHA1 Message Date
Saifeddine ALOUI
34018cb1e0 Separated llms from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
hyb
3c5ced835e feat: add Redis KV storage, add an openai+neo4j+milvus+redis demo test, add RedisKVStorage to lightrag.py, and add the aioredis dependency to requirements.txt 2025-01-22 16:56:40 +08:00
hyb
e08905b398 feat: add Redis KV storage, add an openai+neo4j+milvus+redis demo test, add RedisKVStorage to lightrag.py, and add the aioredis dependency to requirements.txt 2025-01-22 16:42:13 +08:00
hyb
ce378a3d86 feat: add Redis KV storage, add an openai+neo4j+milvus+redis demo test, add RedisKVStorage to lightrag.py, and add the aioredis dependency to requirements.txt 2025-01-21 21:42:34 +08:00
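The three Redis commits above add a Redis-backed KV storage built on aioredis. As a minimal sketch only (the class layout and method names below are assumptions modeled on the commit message, not the repository's actual RedisKVStorage), the core get/upsert pattern with aioredis could look like this:

```python
# Illustrative sketch of a Redis-backed KV store similar to the RedisKVStorage
# described above; the namespace/method layout is assumed, not taken from the repo.
import json
import aioredis  # the dependency added to requirements.txt by these commits


class RedisKVStorageSketch:
    def __init__(self, namespace: str, redis_url: str = "redis://localhost:6379"):
        self.namespace = namespace
        # aioredis >= 2.0 exposes from_url() for creating a client
        self._redis = aioredis.from_url(redis_url, decode_responses=True)

    async def get_by_id(self, id: str):
        # values are stored as JSON strings under "<namespace>:<id>"
        data = await self._redis.get(f"{self.namespace}:{id}")
        return json.loads(data) if data is not None else None

    async def upsert(self, data: dict):
        for key, value in data.items():
            await self._redis.set(f"{self.namespace}:{key}", json.dumps(value))
```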
luohuanhuan2019
6fa955c567 add readme_zh 2025-01-16 20:53:18 +08:00
zrguo
b84aab5cd0
Merge pull request #590 from jin38324/main
Enhance Robustness of insert Method with Pipeline Processing and Caching Mechanisms
2025-01-16 14:20:08 +08:00
Gurjot Singh
2ea104d738 Fix linting errors 2025-01-16 11:31:22 +05:30
Gurjot Singh
e64805c9e2 Add example usage for separate keyword extraction of user's query 2025-01-16 11:26:19 +05:30
jin
6ae8647285 support pipeline mode 2025-01-16 12:58:15 +08:00
jin
d5ae6669ea support pipeline mode 2025-01-16 12:52:37 +08:00
jin
17a2ec2bc4
Merge branch 'HKUDS:main' into main 2025-01-16 09:59:27 +08:00
Samuel Chan
d1ba8c5db5 Add some script in examples to copy llm cache from one solution to another 2025-01-16 07:56:13 +08:00
Samuel Chan
d91a330e9d Enrich README.md for postgres usage, make some changes to cater to python version < 12 2025-01-15 12:02:55 +08:00
jin
85331e3fa2 update Oracle support
add cache support, fix bug
2025-01-10 11:36:28 +08:00
童石渊
dd213c95be Add a character-only splitting parameter: when enabled, text is split by character only; when disabled, chunks that are still too large after character splitting are further split by token size. Also update test files. 2025-01-09 11:55:49 +08:00
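A minimal sketch of the splitting behavior described in the commit above, assuming a hypothetical chunk_text helper and using tiktoken only for token counting (the parameter names are illustrative, not taken from the repository):

```python
# Hypothetical sketch of the behavior described above: split by character first;
# if a chunk is still too large and split_by_character_only is False,
# fall back to token-size splitting.
import tiktoken

encoder = tiktoken.get_encoding("cl100k_base")


def chunk_text(text: str, split_char: str = "\n\n",
               split_by_character_only: bool = False,
               max_token_size: int = 1024) -> list[str]:
    chunks = []
    for piece in text.split(split_char):
        tokens = encoder.encode(piece)
        if split_by_character_only or len(tokens) <= max_token_size:
            chunks.append(piece)
        else:
            # further split oversized pieces by fixed-size token windows
            for start in range(0, len(tokens), max_token_size):
                chunks.append(encoder.decode(tokens[start:start + max_token_size]))
    return chunks
```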
jin
957bcf8659 Organize files
move some test files from root to example
2025-01-07 13:51:20 +08:00
Samuel Chan
6ae27d8f06 Some enhancements:
- Enable the llm_cache storage to support get_by_mode_and_id, to improve performance when using a real KV server
- Provide an option for developers to cache the LLM response when extracting entities for a document, solving the pain point that when the process fails, the LLM must be called again for chunks that were already processed, wasting money and time. With the new option (not enabled by default), that result can be cached, which can significantly save time and money for beginners.
2025-01-06 12:50:05 +08:00
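A rough sketch of the caching pattern described in the commit above; get_by_mode_and_id is named in the commit message, but the wrapper, storage interface, and cache record layout below are assumptions:

```python
# Rough sketch: cache LLM responses during entity extraction, keyed by cache
# mode and chunk id, so a failed run does not re-bill already-processed chunks.
# The storage object is assumed to expose get_by_mode_and_id() and upsert().
import hashlib


async def cached_llm(prompt: str, chunk_id: str, llm_func, cache_storage, mode: str = "extract"):
    prompt_hash = hashlib.md5(prompt.encode()).hexdigest()

    if cache_storage is not None:
        hit = await cache_storage.get_by_mode_and_id(mode, chunk_id)
        if hit and hit.get("prompt_hash") == prompt_hash:
            return hit["return"]  # reuse the earlier response, no LLM call

    result = await llm_func(prompt)

    if cache_storage is not None:
        await cache_storage.upsert({
            chunk_id: {"prompt_hash": prompt_hash, "return": result, "mode": mode}
        })
    return result
```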
Samuel Chan
6c1b669f0f Fix the lint issue 2025-01-04 18:49:32 +08:00
Samuel Chan
e053223ef0 Fix the lint issue 2025-01-04 18:34:35 +08:00
Samuel Chan
db663114a9 Remove unused import 2025-01-03 21:20:47 +08:00
Samuel Chan
f6f62c32a8 Fix the bug of AGE processing 2025-01-03 21:10:06 +08:00
Samuel Chan
b17cb2aa95 With a draft for progres_impl 2025-01-01 22:43:59 +08:00
Saifeddine ALOUI
f2b52a2a38 Added azure openai lightrag server to the api install and fused documentation. 2024-12-26 21:32:56 +01:00
zrguo
fffd00d514
Update graph_visual_with_neo4j.py 2024-12-26 14:55:22 +08:00
Luca Congiu
725d5af215 Refactor code formatting and update requirements for improved clarity and consistency 2024-12-24 09:56:33 +01:00
Luca Congiu
58e74d5fb2 Added Azure OpenAI api sample with streaming 2024-12-23 14:34:34 +01:00
Alex Potapenko
016d9f572d GremlinStorage: fix linting error, use asyncio.gather in get_node_edges() 2024-12-20 09:57:35 +01:00
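The commit above replaces sequential awaits with asyncio.gather when collecting node edges. A generic illustration of that pattern (get_node_edges here is a stand-in coroutine, not the GremlinStorage method):

```python
# Generic sketch of the asyncio.gather pattern mentioned above: fetch the edge
# lists of several nodes concurrently instead of awaiting them one by one.
import asyncio


async def get_node_edges(node_id: str) -> list[tuple[str, str]]:
    await asyncio.sleep(0.01)  # placeholder for a graph-database round-trip
    return [(node_id, f"{node_id}-neighbor")]


async def get_edges_for_nodes(node_ids: list[str]) -> dict:
    # run all lookups concurrently; results come back in the same order
    results = await asyncio.gather(*(get_node_edges(n) for n in node_ids))
    return dict(zip(node_ids, results))


if __name__ == "__main__":
    print(asyncio.run(get_edges_for_nodes(["a", "b", "c"])))
```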
Alex Potapenko
6f71293c83 Add Gremlin graph storage 2024-12-19 17:47:42 +01:00
Weaxs
344d8f277b support TiDBGraphStorage 2024-12-18 10:57:33 +08:00
Alex Potapenko
7564841450 Add Apache AGE graph storage 2024-12-13 20:41:38 +01:00
LarFii
b7a2d336e6 Update __version__ 2024-12-13 20:15:49 +08:00
Jason Guo
6a0e9c6c77 Modify the chat_complete method to support keyword extraction. 2024-12-13 16:18:33 +08:00
Weaxs
8ef5a6b8cd support TiDB: add TiDBKVStorage, TiDBVectorDBStorage 2024-12-11 16:23:50 +08:00
zrguo
7b0f3ffcda
Merge branch 'main' into main 2024-12-09 17:55:56 +08:00
zrguo
113a7f4a71
Merge pull request #430 from zhenya-zhu/ollama-api-service-demo
Add an ollama API service demo
2024-12-09 17:47:37 +08:00
zrguo
71af34196f
Merge branch 'main' into fix-entity-name-string 2024-12-09 17:30:40 +08:00
I561043
a2f3de25f8 fix format 2024-12-09 17:06:52 +08:00
I561043
9fef519caf add demo for ollama api service 2024-12-09 15:39:34 +08:00
partoneplay
a7fcb653e3 Merge remote-tracking branch 'origin/main' and fix syntax 2024-12-09 12:36:55 +08:00
Larfii
d8edc915e7 Move jina demo 2024-12-09 11:19:57 +08:00
Kaushik Acharya
aca80fe981 Interactive Graph: hovering over nodes and edges displays the description in a pop-up window 2024-12-08 21:46:33 +05:30
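A small hedged example of the hover behavior described above using pyvis, where a node's or edge's title attribute is what appears in the mouse-over pop-up (the graph data here is made up for illustration):

```python
# Sketch of hover pop-ups with pyvis: the `title` attribute of a node or edge
# is shown when the mouse hovers over it. Data below is illustrative only.
from pyvis.network import Network

net = Network(height="750px", width="100%")
net.add_node("LightRAG", label="LightRAG", title="A graph-based RAG framework")
net.add_node("Neo4j", label="Neo4j", title="Graph database used for storage")
net.add_edge("LightRAG", "Neo4j", title="stores entities and relations in")
net.save_graph("knowledge_graph.html")  # open the HTML file in a browser
```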
partoneplay
a8e09ba6c5 Add support for OpenAI Compatible Streaming output 2024-12-07 14:53:15 +08:00
partoneplay
e82d13e182 Add support for Ollama streaming output and integrate Open-WebUI as the chat UI demo 2024-12-06 10:13:16 +08:00
partoneplay
335179196a Add support for Ollama streaming output and integrate Open-WebUI as the chat UI demo 2024-12-06 10:13:16 +08:00
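The two commits above add streaming output for Ollama. A minimal example of consuming a streamed chat response with the official ollama Python client (the model name is only a placeholder):

```python
# Minimal example of streaming a chat completion from a local Ollama server
# with the `ollama` Python client; "qwen2" is just a placeholder model name.
import ollama

stream = ollama.chat(
    model="qwen2",
    messages=[{"role": "user", "content": "What is LightRAG?"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental piece of the answer
    print(chunk["message"]["content"], end="", flush=True)
print()
```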
magicyuan876
84e3b9e44b feat(lightrag): add query-time embedding cache
- Add an embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating use of the embedding cache
2024-12-06 08:18:09 +08:00
yuanxiaobin
c19c516792 feat(lightrag): add query-time embedding cache
- Add an embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating use of the embedding cache
2024-12-06 08:18:09 +08:00
magicyuan876
d48c6e4588 feat(lightrag): add query-time embedding cache
- Add an embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating use of the embedding cache
2024-12-06 08:17:20 +08:00
yuanxiaobin
525c971a23 feat(lightrag): add query-time embedding cache
- Add an embedding_cache_config option to the LightRAG class
- Implement cache lookup and storage based on embedding similarity
- Add quantization and dequantization functions to compress embedding data
- Add an example demonstrating use of the embedding cache
2024-12-06 08:17:20 +08:00
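The four commits above describe a query-time embedding cache: embeddings are quantized for storage and a cached answer is reused when the new query is similar enough. A hedged sketch of that idea with numpy (the function names, record layout, and threshold are illustrative, not the repository's implementation):

```python
# Illustrative sketch of the embedding cache idea described above: embeddings
# are quantized to uint8 for storage, dequantized on read, and a cached answer
# is reused when cosine similarity to the new query embedding is high enough.
import numpy as np


def quantize_embedding(embedding: np.ndarray, bits: int = 8):
    mn, mx = float(embedding.min()), float(embedding.max())
    scale = (mx - mn) / (2 ** bits - 1) or 1.0
    quantized = np.round((embedding - mn) / scale).astype(np.uint8)
    return quantized, mn, scale


def dequantize_embedding(quantized: np.ndarray, mn: float, scale: float) -> np.ndarray:
    return quantized.astype(np.float32) * scale + mn


def lookup_cache(query_emb: np.ndarray, cache: list[dict], threshold: float = 0.95):
    for entry in cache:
        cached = dequantize_embedding(entry["emb"], entry["min"], entry["scale"])
        sim = float(np.dot(query_emb, cached) /
                    (np.linalg.norm(query_emb) * np.linalg.norm(cached)))
        if sim >= threshold:
            return entry["answer"]  # similar enough: reuse the cached response
    return None  # cache miss: caller falls back to the normal query path
```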
Larfii
9af3676991 Fix JSON parsing error 2024-12-05 18:26:55 +08:00
Larfii
5e1f317264 Fix JSON parsing error 2024-12-05 18:26:55 +08:00