This parameter is no longer used. Its removal simplifies the API and clarifies that token length management is handled by upstream text chunking logic rather than the embedding wrapper.
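A minimal sketch of the upstream-chunking idea, assuming a tiktoken-based tokenizer; the function name, the 512-token budget, and the cl100k_base encoding are illustrative, not the project's actual implementation:

```python
import tiktoken


def chunk_by_tokens(text: str, max_tokens: int = 512) -> list[str]:
    """Split text into pieces that each fit the embedding model's token budget."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```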
Fix line length
Create binding_options.py
Remove test property
Add dynamic binding options to CLI and environment config
Automatically generate command-line arguments and environment variable
support for all LLM provider bindings using BindingOptions. Add sample
.env generation and an extensible framework for new providers.
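A rough, hypothetical sketch of the idea (the real binding_options.py differs in names and details): each dataclass field on a binding's options class becomes both a CLI flag and an environment variable, and the same metadata drives sample .env generation.

```python
import argparse
import os
from dataclasses import dataclass, fields


@dataclass
class BindingOptions:
    """Base class; subclasses declare option fields plus a _binding_name."""

    _binding_name = "base"  # no annotation, so it stays a plain class attribute

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser) -> None:
        # Every dataclass field becomes --<binding>-<field>, with the matching
        # <BINDING>_<FIELD> environment variable supplying the default.
        for f in fields(cls):
            env_name = f"{cls._binding_name}_{f.name}".upper()
            parser.add_argument(
                f"--{cls._binding_name}-{f.name}".replace("_", "-"),
                dest=f"{cls._binding_name}_{f.name}",
                type=type(f.default),
                default=os.environ.get(env_name, f.default),
            )

    @classmethod
    def sample_env(cls) -> str:
        # Emit commented lines suitable for a sample .env file.
        return "\n".join(
            f"# {cls._binding_name.upper()}_{f.name.upper()}={f.default}"
            for f in fields(cls)
        )
```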
Add example option definitions and fix test arg check in OllamaOptions
Add options_dict method to BindingOptions for argument parsing
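Continuing the hypothetical sketch above, options_dict can be thought of as the inverse of add_args: pull the parsed values that belong to one binding back out of the argparse namespace (shown here as a standalone helper; the real method lives on BindingOptions).

```python
import argparse


def options_dict(binding_name: str, args: argparse.Namespace) -> dict:
    """Collect the parsed values that belong to one binding, stripping its prefix."""
    prefix = f"{binding_name}_"
    return {
        name[len(prefix):]: value
        for name, value in vars(args).items()
        if name.startswith(prefix)
    }


# e.g. options_dict("ollama_llm", args) -> {"num_ctx": 32768, ...}
```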
Add comprehensive Ollama binding configuration options
Apply ruff formatting to binding_options.py
Add Ollama separate options for embedding and LLM
Refactor Ollama binding options and fix class var handling
The changes improve how class variables are handled in binding options
and better organize the Ollama-specific options into LLM and embedding
subclasses.
Fix typo in arg test.
Rename cls parameter to klass to avoid shadowing the conventional classmethod name
Fix Ollama embedding binding name typo
Fix ollama embedder context param name
Split Ollama options into LLM and embedding configs with mixin base
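A hypothetical illustration of the split and of the class-var fix from the earlier refactor (class names, fields, and defaults are examples): shared Ollama knobs live in a mixin, the LLM and embedding option classes inherit it, and ClassVar keeps binding metadata out of the generated dataclass fields so it never turns into a CLI flag or env var.

```python
from dataclasses import dataclass, fields
from typing import ClassVar


@dataclass
class _OllamaOptionsMixin:
    # Knobs shared by the LLM and embedding bindings.
    host: str = "http://localhost:11434"
    num_ctx: int = 32768


@dataclass
class OllamaLLMOptions(_OllamaOptionsMixin):
    # ClassVar annotation keeps this out of the dataclass fields.
    _binding_name: ClassVar[str] = "ollama_llm"
    temperature: float = 0.8


@dataclass
class OllamaEmbeddingOptions(_OllamaOptionsMixin):
    _binding_name: ClassVar[str] = "ollama_embedding"


# Only the real options survive as fields:
print([f.name for f in fields(OllamaLLMOptions)])  # ['host', 'num_ctx', 'temperature']
```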
Add Ollama option configuration to LLM and embeddings in lightrag_server
Update sample .env generation and environment handling
Conditionally add env vars and cmdline options only when ollama bindings
are used. Add example env file for Ollama binding options.
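A sketch of the conditional registration, assuming argparse-based CLI handling; the flag and env var names (--ollama-num-ctx, OLLAMA_NUM_CTX) are illustrative.

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--llm-binding", default="ollama")
parser.add_argument("--embedding-binding", default="ollama")
known, _ = parser.parse_known_args()

# Only expose Ollama-specific flags (and their env defaults) when an Ollama
# binding is actually selected, so --help and the sample .env stay uncluttered.
if "ollama" in (known.llm_binding, known.embedding_binding):
    group = parser.add_argument_group("ollama binding options")
    group.add_argument(
        "--ollama-num-ctx",
        type=int,
        default=int(os.environ.get("OLLAMA_NUM_CTX", 32768)),
    )

args = parser.parse_args()
```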
- Add ollama_server_infos attribute to LightRAG class with default initialization
- Move default values to constants.py for centralized configuration
- Refactor OllamaServerInfos class with property accessors and CLI support (see the sketch after this list)
- Update OllamaAPI to get configuration through rag object instead of direct import
- Add command line arguments for simulated model name and tag
- Fix type imports to avoid circular dependencies
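A simplified, hypothetical sketch of the OllamaServerInfos shape described above; the default names and attribute spelling are stand-ins for the values centralized in constants.py.

```python
from dataclasses import dataclass

# Illustrative defaults, standing in for values centralized in constants.py.
DEFAULT_SIMULATED_MODEL_NAME = "lightrag"
DEFAULT_SIMULATED_MODEL_TAG = "latest"


@dataclass
class OllamaServerInfos:
    _name: str = DEFAULT_SIMULATED_MODEL_NAME
    _tag: str = DEFAULT_SIMULATED_MODEL_TAG

    @property
    def simulated_model_name(self) -> str:
        return self._name

    @simulated_model_name.setter
    def simulated_model_name(self, value: str) -> None:
        self._name = value

    @property
    def simulated_model_tag(self) -> str:
        return self._tag


# The Ollama-compatible API would then read configuration off the rag object,
# e.g. rag.ollama_server_infos.simulated_model_name, instead of importing
# module-level globals.
```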
This commit renames the parameter 'llm_model_max_token_size' to 'summary_max_tokens' for better clarity, as it specifically controls the token limit for entity relation summaries.
- Add 9 environment variables to /health endpoint configuration section
- Centralize default constants in lightrag/constants.py for consistency
- Update config.py to use centralized defaults for better maintainability
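An illustrative constants.py-style layout (names and values are examples, not the authoritative list): defaults are defined once and config.py, the /health endpoint, and env-var parsing all read from the same place.

```python
import os

# Shared defaults, defined once.
DEFAULT_TOP_K = 60
DEFAULT_SUMMARY_MAX_TOKENS = 32000
DEFAULT_FORCE_LLM_SUMMARY_ON_MERGE = 4

# config.py-style lookup: environment variable wins, shared default otherwise.
top_k = int(os.environ.get("TOP_K", DEFAULT_TOP_K))
```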
This commit refactors query parameter management by consolidating settings like `top_k`, token limits, and thresholds into the `LightRAG` class, and consistently sourcing parameters from a single location.
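A minimal sketch of the consolidation, with simplified stand-ins for the real classes: query-time settings live on the LightRAG instance, and per-query parameters fall back to them instead of scattered module-level defaults.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LightRAG:
    # Simplified stand-in for the real class: query-time settings live here.
    top_k: int = 60
    cosine_threshold: float = 0.2


@dataclass
class QueryParam:
    # None means "fall back to the value configured on the LightRAG instance".
    top_k: Optional[int] = None


def resolve_top_k(rag: LightRAG, param: QueryParam) -> int:
    return param.top_k if param.top_k is not None else rag.top_k
```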
- Remove MAX_TOKEN_SUMMARY parameter and related configurations
- Eliminate forced token-based truncation in entity/relationship descriptions
- Switch to fragment-count based summarization logic using FORCE_LLM_SUMMARY_ON_MERGE
- Update FORCE_LLM_SUMMARY_ON_MERGE default from 6 to 4 for better summarization
- Clean up documentation, environment examples, and API display code
- Preserve backward compatibility through graceful parameter removal
This change resolves issues where entity and relationship descriptions were
forcibly truncated mid-sentence, leading to incomplete and potentially
inaccurate knowledge graph content. The new approach lets LLMs generate
complete descriptions while still summarizing when multiple fragments need
to be merged, as sketched below.
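A rough sketch of the fragment-count rule, assuming a "<SEP>"-joined description format; the helper name and separator constant are illustrative, only the FORCE_LLM_SUMMARY_ON_MERGE threshold comes from the change above.

```python
FORCE_LLM_SUMMARY_ON_MERGE = 4  # new default, per the change above
GRAPH_FIELD_SEP = "<SEP>"       # assumed fragment separator


def merge_descriptions(fragments: list[str]) -> tuple[str, bool]:
    """Join description fragments; flag whether an LLM summary pass is needed."""
    merged = GRAPH_FIELD_SEP.join(dict.fromkeys(fragments))  # dedupe, keep order
    needs_llm_summary = len(fragments) >= FORCE_LLM_SUMMARY_ON_MERGE
    return merged, needs_llm_summary
```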
Breaking Change: None - parameter removal is backward compatible
Fixes: Entity relationship description truncation issues