yujunjun/LightRAG
mirror of https://github.com/HKUDS/LightRAG.git synced 2025-11-20 12:03:45 +00:00
LightRAG/examples/unofficial-sample
History
yangdx 598eecd06d Refactor: Rename llm_model_max_token_size to summary_max_tokens
This commit renames the parameter 'llm_model_max_token_size' to 'summary_max_tokens' for better clarity, as it specifically controls the token limit for entity relation summaries.
2025-07-28 00:49:08 +08:00
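The rename described in the commit above is a config-key change; a minimal migration sketch follows. Only the old and new parameter names (`llm_model_max_token_size`, `summary_max_tokens`) come from the commit itself; the dict-based config shape and the `migrate_config` helper are illustrative assumptions, not LightRAG's actual API.

```python
# Hypothetical migration helper for configs written against the old
# parameter name. The rename (llm_model_max_token_size -> summary_max_tokens)
# is taken from commit 598eecd06d; everything else is an assumption.
RENAMED_KEYS = {"llm_model_max_token_size": "summary_max_tokens"}

def migrate_config(config: dict) -> dict:
    """Return a copy of config with any renamed keys mapped to their new names."""
    return {RENAMED_KEYS.get(key, key): value for key, value in config.items()}

old = {"llm_model_max_token_size": 32768, "working_dir": "./rag_storage"}
new = migrate_config(old)
# new now uses "summary_max_tokens" while untouched keys pass through unchanged
```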
..
copy_llm_cache_to_another_storage.py
feat: Flatten LLM cache structure for improved recall efficiency
2025-07-02 16:11:53 +08:00
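The "Flatten LLM cache structure" commit above suggests collapsing a nested cache mapping into a single flat key space so a lookup is one dict access instead of two. A hedged sketch of that idea; the `{mode: {hash: entry}}` layout and the `mode:hash` key format are assumptions for illustration, not LightRAG's actual cache schema:

```python
def flatten_cache(nested: dict) -> dict:
    """Collapse a nested {mode: {hash: entry}} cache into a flat
    {"mode:hash": entry} mapping. Purely illustrative key layout."""
    return {
        f"{mode}:{key}": entry
        for mode, entries in nested.items()
        for key, entry in entries.items()
    }

nested = {"default": {"abc123": {"return": "cached answer"}}}
flat = flatten_cache(nested)
# a hit is now a single lookup: flat.get("default:abc123")
```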
lightrag_bedrock_demo.py
Remove deprecated demo code
2025-05-14 01:56:26 +08:00
lightrag_cloudflare_demo.py
Refactor: Rename llm_model_max_token_size to summary_max_tokens
2025-07-28 00:49:08 +08:00
lightrag_hf_demo.py
Remove deprecated demo code
2025-05-14 01:56:26 +08:00
lightrag_llamaindex_direct_demo.py
Remove deprecated demo code
2025-05-14 01:56:26 +08:00
lightrag_llamaindex_litellm_demo.py
feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
2025-05-20 17:47:05 +02:00
lightrag_llamaindex_litellm_opik_demo.py
feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
2025-05-20 17:47:05 +02:00
lightrag_lmdeploy_demo.py
Remove deprecated demo code
2025-05-14 01:56:26 +08:00
lightrag_nvidia_demo.py
Remove deprecated demo code
2025-05-14 01:56:26 +08:00
lightrag_openai_neo4j_milvus_redis_demo.py
Refactor: Rename llm_model_max_token_size to summary_max_tokens
2025-07-28 00:49:08 +08:00