161 Commits

Author SHA1 Message Date
Yannick Stephan
eaf1d553d2 improved typing 2025-02-15 22:37:12 +01:00
Yannick Stephan
621540a54e cleaned code 2025-02-15 00:23:14 +01:00
Saifeddine ALOUI
34018cb1e0 Separated llms from the main llm.py file and fixed some deprecation bugs 2025-01-25 00:11:00 +01:00
zrguo
326057deeb fixed linting 2025-01-20 17:28:40 +08:00
Saifeddine ALOUI
501610c603 Update llm.py 2025-01-20 10:04:17 +01:00
Saifeddine ALOUI
a557878e4e Update llm.py 2025-01-20 09:04:32 +01:00
Saifeddine ALOUI
e4945c9653 Update llm.py 2025-01-20 08:58:08 +01:00
Saifeddine ALOUI
f18f484a87 Update llm.py 2025-01-20 08:54:18 +01:00
zrguo
f8ba76f7b8 Merge branch 'main' into main 2025-01-20 12:26:01 +08:00
Saifeddine ALOUI
70425b0357 fixed linting 2025-01-20 00:26:28 +01:00
Saifeddine ALOUI
9cae05e1ff Fixed a bug introduced by a modification by someone else in azure_openai_complete (please make sure you test before committing code)
Added api_key to lollms, ollama, openai for both llm and embedding bindings allowing to use api key protected services.
2025-01-19 23:24:37 +01:00
yangdx
347843d545 Use LLM_MODEL env var in Azure OpenAI function
- Remove model parameter from azure_openai_complete (all LLM complete functions must have the same parameter structure)
- Use LLM_MODEL env var in Azure OpenAI function
- Comment out Lollms example in .env.example (duplication with Ollama example)
2025-01-19 14:04:03 +08:00
Nick French
df69d386c5 Fixes #596 - Hardcoded model deployment name in azure_openai_complete
Fixes #596

Update `azure_openai_complete` function to accept a model parameter with a default value of 'gpt-4o-mini'.

* Modify the function signature of `azure_openai_complete` to include a `model` parameter with a default value of 'gpt-4o-mini'.
* Pass the `model` parameter to the `azure_openai_complete_if_cache` function instead of the hardcoded model name 'conversation-4o-mini'.

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/HKUDS/LightRAG/issues/596?shareId=XXXX-XXXX-XXXX-XXXX).
2025-01-17 12:10:26 -05:00
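The signature change this commit describes can be sketched as follows. The function names come from the commit message (`azure_openai_complete`, `azure_openai_complete_if_cache`), but the bodies are hypothetical stand-ins rather than the actual LightRAG code:

```python
import asyncio

# Hypothetical stand-in for the cached completion helper named in the commit;
# the real function calls the Azure OpenAI API.
async def azure_openai_complete_if_cache(model, prompt, **kwargs):
    return f"[{model}] {prompt}"

# Before the fix, the deployment name "conversation-4o-mini" was hardcoded in
# the call below; the commit surfaces it as a `model` parameter with a
# default of "gpt-4o-mini" so callers can override the deployment name.
async def azure_openai_complete(prompt, model="gpt-4o-mini", **kwargs):
    return await azure_openai_complete_if_cache(model, prompt, **kwargs)

print(asyncio.run(azure_openai_complete("hello")))
# [gpt-4o-mini] hello
```

(The follow-up commit above by yangdx later replaced this parameter with the LLM_MODEL environment variable to keep all complete functions' signatures uniform.)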
luohuanhuan2019
98f5d7c596 Keep prompt wording consistent 2025-01-16 21:50:43 +08:00
luohuanhuan2019
36c7abf358 Remove garbled characters from system prompt words 2025-01-16 21:35:37 +08:00
Saifeddine ALOUI
224fce9b1b run precommit to fix linting issues 2025-01-11 01:37:07 +01:00
Saifeddine ALOUI
a619b01064 Next test of timeout 2025-01-10 22:17:13 +01:00
Saifeddine ALOUI
adb288c5bb added timeout 2025-01-10 21:39:25 +01:00
chenzihong
648645ef45 fix: fix formatting issues 2024-12-31 01:33:14 +08:00
chenzihong
eb1fc0dae7 fix: change exception type 2024-12-30 01:46:15 +08:00
Luca Congiu
725d5af215 Refactor code formatting and update requirements for improved clarity and consistency 2024-12-24 09:56:33 +01:00
Luca Congiu
58e74d5fb2 Added Azure OpenAI api sample with streaming 2024-12-23 14:34:34 +01:00
Saifeddine ALOUI
469fa9f574 Added lollms integration with lightrag
Removed a deprecated function from ollamaserver
2024-12-22 00:38:38 +01:00
LarFii
b7a2d336e6 Update __version__ 2024-12-13 20:15:49 +08:00
Jason Guo
e64cf5068f Fix import 2024-12-13 19:57:25 +08:00
Jason Guo
6a0e9c6c77 Modify the chat_complete method to support keywords extraction. 2024-12-13 16:18:33 +08:00
zrguo
7fbd9aa3e0 Merge pull request #444 from davidleon/fix/lazy_import
Fix/lazy import
2024-12-11 14:19:48 +08:00
Magic_yuan
9a2afc9484 style(lightrag): adjust code formatting 2024-12-11 14:06:55 +08:00
Magic_yuan
0a41cc8a9a feat(llm, prompt): add logging and extend entity types
- Added log output in llm.py for debugging and recording LLM query inputs
- Added a "category" entity type in prompt.py, broadening the scope of entity extraction
2024-12-11 12:45:10 +08:00
david
21a3992e39 fix extra keyword_extraction. 2024-12-10 09:52:27 +08:00
Ikko Eltociear Ashimine
b8cddb6c72 chore: update llm.py
intialize -> initialize
2024-12-09 22:08:06 +09:00
david
3210c8f5bd fix unicode_escape. 2024-12-09 19:14:27 +08:00
zrguo
4c89a1a620 Merge pull request #429 from davidleon/improvement/lazy_external_load
fix extra kwargs error: keyword_extraction.
2024-12-09 18:07:30 +08:00
zrguo
7b0f3ffcda Merge branch 'main' into main 2024-12-09 17:55:56 +08:00
Larfii
2ba20910bb fix naive_query 2024-12-09 17:45:01 +08:00
zrguo
71af34196f Merge branch 'main' into fix-entity-name-string 2024-12-09 17:30:40 +08:00
Larfii
ffa95e0461 Fix jina embedding 2024-12-09 17:05:17 +08:00
david
9717ad87fc fix extra kwargs error: keyword_extraction.
add lazy_external_load to reduce external lib deps whenever it's not necessary for user.
2024-12-09 15:35:35 +08:00
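The lazy_external_load idea mentioned in this commit is commonly implemented along these lines; the helper name and the demo module are illustrative, not the actual diff, which applied the pattern to heavy optional embedding dependencies:

```python
import importlib

def _lazy_import(name):
    """Return a callable that imports module `name` only on first use,
    so users who never touch that code path never pay the import cost."""
    module = None
    def get():
        nonlocal module
        if module is None:
            module = importlib.import_module(name)
        return module
    return get

# Stdlib module used for the demo; nothing is imported until get_json() runs.
get_json = _lazy_import("json")
print(get_json().dumps({"ok": True}))
# {"ok": true}
```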
partoneplay
a7fcb653e3 Merge remote-tracking branch 'origin/main' and fix syntax 2024-12-09 12:36:55 +08:00
zrguo
0a8d88212a Merge pull request #423 from davidleon/feature/jina_embedding
add jina embedding
2024-12-09 10:18:50 +08:00
david
97d1894077 add jina embedding 2024-12-08 22:20:41 +08:00
Magic_yuan
779ed604d8 Clean up redundant comments 2024-12-08 17:38:49 +08:00
Magic_yuan
39c2cb11f3 Clean up redundant comments 2024-12-08 17:37:58 +08:00
Magic_yuan
ccf44dc334 feat(cache): add LLM similarity check and optimize the caching mechanism
- Added a use_llm_check parameter to the embedding cache config
- Implemented LLM similarity-check logic as a second-stage validation of cache hits
- Optimized the cache handling flow for naive mode
- Adjusted the cache data structure, removing the unnecessary model field
2024-12-08 17:35:52 +08:00
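The two-stage cache check this commit describes can be sketched roughly as follows; `use_llm_check` comes from the commit message, while the helper name, data shapes, and judge callback are assumptions, not the actual LightRAG code:

```python
# Sketch: embedding-similarity cache lookup with an optional LLM double-check.
def handle_cache_hit(query, cached_entry, config, llm_judge=None):
    """Return the cached answer only if it passes the configured checks."""
    # First-stage check: embedding similarity against a threshold.
    if cached_entry["similarity"] < config["similarity_threshold"]:
        return None
    # Second-stage check (added by this commit): ask an LLM whether the
    # cached question really matches the new query before trusting the hit.
    if config.get("use_llm_check") and llm_judge is not None:
        if not llm_judge(query, cached_entry["question"]):
            return None
    return cached_entry["answer"]

config = {"similarity_threshold": 0.9, "use_llm_check": True}
entry = {"question": "capital of France?", "answer": "Paris", "similarity": 0.95}

# A permissive judge accepts the hit; a strict one rejects it.
print(handle_cache_hit("France's capital?", entry, config, lambda q, c: True))
# Paris
print(handle_cache_hit("France's capital?", entry, config, lambda q, c: False))
# None
```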
partoneplay
a8e09ba6c5 Add support for OpenAI Compatible Streaming output 2024-12-07 14:53:15 +08:00
partoneplay
50a17bb4f9 delete unreachable code 2024-12-07 14:53:15 +08:00
magicyuan876
4da7dd1865 Extract the hashing_kv parameter from kwargs into a variable 2024-12-06 15:35:09 +08:00
yuanxiaobin
6a010abb62 Extract the hashing_kv parameter from kwargs into a variable 2024-12-06 15:35:09 +08:00
magicyuan876
efdd4b8b8e Extract the hashing_kv parameter from kwargs into a variable 2024-12-06 15:23:18 +08:00
yuanxiaobin
a1c4a036fd Extract the hashing_kv parameter from kwargs into a variable 2024-12-06 15:23:18 +08:00