90 Commits

Author SHA1 Message Date
yangdx
4b2ef71c25 feat: Add extra_body parameter support for OpenRouter/vLLM compatibility
- Enhanced add_args function to handle dict types with JSON parsing
- Added reasoning and extra_body parameters for OpenRouter/vLLM compatibility
- Updated env.example with OpenRouter/vLLM parameter examples
2025-08-21 13:06:28 +08:00
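The dict-handling change in add_args amounts to JSON-parsing an env var into a dict that is then forwarded as extra_body. A minimal sketch, with hypothetical helper and variable names:

```python
# Minimal sketch (hypothetical names): parse a dict-typed argument from a
# JSON env var so OpenRouter/vLLM extra_body settings can be passed through.
import json
import os

def parse_dict_env(name: str) -> dict | None:
    raw = os.environ.get(name)
    if raw is None:
        return None
    value = json.loads(raw)  # e.g. '{"provider": {"order": ["openai"]}}'
    if not isinstance(value, dict):
        raise ValueError(f"{name} must be a JSON object")
    return value

# The parsed dict is forwarded to the OpenAI-compatible client, which accepts
# an extra_body kwarg: client.chat.completions.create(..., extra_body=extra_body)
extra_body = parse_dict_env("LLM_EXTRA_BODY")
```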
yangdx
aa22772721 Refactor LLM temperature handling to be provider-specific
• Remove global temperature parameter
• Add provider-specific temp configs
• Update env example with new settings
• Fix Bedrock temperature handling
• Clean up splash screen display
2025-08-20 23:52:33 +08:00
yangdx
df7bcb1e3d Add LLM_TIMEOUT configuration for all LLM providers
- Add LLM_TIMEOUT env variable
- Apply timeout to all LLM bindings
2025-08-20 23:50:57 +08:00
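A sketch of how a single LLM_TIMEOUT setting can be applied to a binding's client; the default value here is an assumption, not the repo's actual choice:

```python
# Sketch: one LLM_TIMEOUT setting shared by every binding's client.
# The 180-second default is illustrative.
import os
from openai import AsyncOpenAI

LLM_TIMEOUT = int(os.environ.get("LLM_TIMEOUT", "180"))  # seconds

client = AsyncOpenAI(timeout=LLM_TIMEOUT)  # other bindings take the same value
```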
SJ
f7ca9ae16a Ruff formatted 2025-08-15 22:21:34 +00:00
SJ
99643f01de Enhancement: support AWS Bedrock as an LLM binding #1733 2025-08-13 02:08:13 -05:00
Enhancement: support AWS Bedrock as an LLM binding #1733 2025-08-13 02:08:13 -05:00
yangdx
ffb642a5ce Fix linting 2025-08-09 08:41:41 +08:00
yangdx
ecd7777e61 Update OpenAI embedding handling for both list and base64 embeddings
- Fix OpenAI embedding array parsing
- Improve embedding data type safety
2025-08-09 08:40:33 +08:00
yangdx
6ff25210ea feat: improve Jina API error handling to show clean messages instead of HTML 2025-08-05 11:46:02 +08:00
yangdx
c5babf61d7 Feat: Change embedding formats from float to base64 for efficiency
- Add base64 support for Jina embeddings
- Add base64 support for OpenAI embeddings
- Update env.example with new embedding options
2025-08-05 11:38:40 +08:00
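Together with the list/base64 parsing fix above, these two commits reduce to decoding both payload shapes. A sketch, assuming base64 responses pack the vector as little-endian float32 (as OpenAI documents):

```python
# Sketch: accept either embedding payload shape. The base64 format is far
# smaller on the wire than a JSON list of floats.
import base64
import numpy as np

def decode_embedding(data) -> np.ndarray:
    if isinstance(data, str):  # encoding_format="base64" response
        return np.frombuffer(base64.b64decode(data), dtype=np.float32)
    return np.asarray(data, dtype=np.float32)  # plain list of floats
```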
yangdx
adf7ec8e35 feat: Add OpenAI LLM Options support with BindingOptions framework
- Add OpenAILLMOptions dataclass with full OpenAI API parameter support
- Integrate OpenAI options in config.py for automatic binding detection
- Update server functions to inject OpenAI options for openai/azure_openai bindings
2025-08-05 03:47:26 +08:00
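The shape of such an options class might look like the following sketch; the fields mirror common OpenAI chat-completion parameters, and everything here is illustrative rather than the repo's actual OpenAILLMOptions definition:

```python
# Illustrative sketch of an OpenAI options dataclass; not the actual
# OpenAILLMOptions definition.
from dataclasses import asdict, dataclass

@dataclass
class OpenAIOptionsSketch:
    temperature: float | None = None
    top_p: float | None = None
    max_tokens: int | None = None
    frequency_penalty: float | None = None
    presence_penalty: float | None = None

    def as_kwargs(self) -> dict:
        # Only explicitly set fields are sent to the client.
        return {k: v for k, v in asdict(self).items() if v is not None}
```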
yangdx
3099748668 Add temperature fallback for Ollama LLM binding
- Implement OLLAMA_LLM_TEMPERATURE env var
- Fallback to global TEMPERATURE if unset
- Remove redundant OllamaLLMOptions logic
- Update env.example with new setting
2025-08-05 01:50:09 +08:00
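The fallback chain described here is a two-step env lookup; a minimal sketch (the 1.0 default matches the later default-temperature commit):

```python
# Sketch of the fallback: binding-specific env var first, then the global
# TEMPERATURE, then a 1.0 default.
import os

temperature = float(
    os.environ.get("OLLAMA_LLM_TEMPERATURE", os.environ.get("TEMPERATURE", "1.0"))
)
```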
yangdx
e5e3f0f878 Fix(Ollama option): change stop option from string to list and add fallback global temperature setting 2025-08-04 19:43:14 +08:00
yangdx
f8a880ac66 Improved binding options testing and documentation 2025-08-04 18:21:55 +08:00
yangdx
32af45ff46 refactor: improve JSON parsing reliability with json-repair library
Replace regex-based JSON extraction with json-repair for better handling of malformed LLM responses. Remove deprecated JSON parsing utilities and clean up keyword_extraction parameter across LLM providers.

- Remove locate_json_string_body_from_string() and convert_response_to_json()
- Use json-repair.loads() in extract_keywords_only() for robust parsing
- Clean up LLM interfaces and remove unused parameters
- Add json-repair dependency
2025-08-01 19:36:20 +08:00
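A sketch of the json-repair approach this commit names: parse a possibly malformed LLM reply directly instead of regex-extracting a JSON body. The sample string is illustrative:

```python
# json-repair fixes trailing commas and unclosed brackets before parsing.
from json_repair import loads

raw = '{"high_level_keywords": ["graph rag",], "low_level_keywords": ["entity"'
keywords = loads(raw)
print(keywords["high_level_keywords"])
```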
yangdx
9d5603d35e Set the default LLM temperature to 1.0 and centralize constant management 2025-07-31 17:15:10 +08:00
administrator
9c3e1505b5 fix timeout issue 2025-07-29 13:38:46 +07:00
yangdx
9923821d75 refactor: Remove deprecated max_token_size from embedding configuration
This parameter is no longer used. Its removal simplifies the API and clarifies that token length management is handled by upstream text chunking logic rather than the embedding wrapper.
2025-07-29 10:49:35 +08:00
yangdx
75d1b1e9f8 Update Ollama context length configuration
- Rename OLLAMA_NUM_CTX to OLLAMA_LLM_NUM_CTX
- Increase default context window size
- Add requirement for minimum context size
- Update documentation examples
2025-07-29 09:53:37 +08:00
Michele Comitini
bd94714b15 Options need to be passed to the Ollama client embed() method
- Fix line length
- Create binding_options.py
- Remove test property
- Add dynamic binding options to CLI and environment config: automatically generate command-line arguments and environment variable support for all LLM provider bindings using BindingOptions; add sample .env generation and an extensible framework for new providers (see the sketch after this list)
- Add example option definitions and fix test arg check in OllamaOptions
- Add options_dict method to BindingOptions for argument parsing
- Add comprehensive Ollama binding configuration options
- Apply ruff formatting to binding_options.py
- Add separate Ollama options for embedding and LLM
- Refactor Ollama binding options and fix class-var handling: improve how class variables are handled and organize the Ollama-specific options into LLM and embedding subclasses
- Fix typo in arg test
- Rename cls parameter to klass to avoid keyword shadowing
- Fix Ollama embedding binding name typo
- Fix Ollama embedder context param name
- Split Ollama options into LLM and embedding configs with a mixin base
- Add Ollama option configuration to LLM and embeddings in lightrag_server
- Update sample .env generation and environment handling: conditionally add env vars and command-line options only when Ollama bindings are used; add an example env file for Ollama binding options
2025-07-28 12:05:40 +02:00
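A compressed sketch of the BindingOptions idea described above: each binding declares its options once, and CLI flags plus env-var defaults are derived from that declaration. All names and defaults are assumptions, not the repo's implementation:

```python
# Illustrative sketch: derive CLI flags and env-var defaults from a binding's
# declared options. Names and defaults are assumptions.
import argparse
import os
from dataclasses import dataclass, fields

@dataclass
class OllamaLLMOptionsSketch:
    num_ctx: int = 32768
    temperature: float = 1.0

def add_binding_args(parser: argparse.ArgumentParser, prefix: str, options_cls) -> None:
    for f in fields(options_cls):
        env_name = f"{prefix.upper()}_{f.name.upper()}"  # e.g. OLLAMA_LLM_NUM_CTX
        flag = f"--{prefix.replace('_', '-')}-{f.name.replace('_', '-')}"
        default = f.type(os.environ[env_name]) if env_name in os.environ else f.default
        parser.add_argument(flag, type=f.type, default=default)

parser = argparse.ArgumentParser()
add_binding_args(parser, "ollama_llm", OllamaLLMOptionsSketch)
args = parser.parse_args([])  # env vars supply defaults unless a flag overrides them
```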
yangdx
2767212ba0 Fix linting 2025-07-24 12:25:50 +08:00
yangdx
d979e9078f feat: Integrate Jina embeddings API support
- Implemented Jina embedding function
- Add new EMBEDDING_BINDING type of jina for LightRAG Server
- Add env var sample
2025-07-24 12:15:00 +08:00
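A sketch of such a Jina embedding call; the endpoint follows Jina's public embeddings API, but the model name and response field layout should be treated as assumptions:

```python
# Sketch of a Jina embeddings request; model name and response fields are
# assumptions based on Jina's public API, not the repo's exact code.
import os
import aiohttp

async def jina_embed(texts: list[str]) -> list[list[float]]:
    headers = {"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"}
    payload = {"model": "jina-embeddings-v3", "input": texts}
    async with aiohttp.ClientSession(headers=headers) as session:
        async with session.post("https://api.jina.ai/v1/embeddings", json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
    return [item["embedding"] for item in data["data"]]
```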
Dario Chini
5b28233903 fix Azure deployment 2025-07-17 23:11:07 +02:00
zrguo
e254c3dd81 Update openai.py 2025-07-15 17:30:30 +08:00
yangdx
2a0cff3ed6 Fix linting 2025-07-08 18:17:21 +08:00
Molion Surya
8cbba6e9db Fix #1746: [openai.py logic for streaming complete] 2025-07-08 13:25:52 +08:00
Daniel.y
c740401b7f Merge pull request #1654 from a-bruhn/azure-env-vars
Clean up azure env vars
2025-06-26 19:11:20 +08:00
zrguo
96b9bd8cc5 fix lint 2025-06-19 14:16:24 +08:00
Alexander Bruhn
5e3970e18b Resolve confusion between Azure embedding and completion environment variables 2025-06-04 14:45:11 +02:00
eddiemaru-101
77399e051f Fix: Increase Ollama timeout values to prevent ReadTimeout errors 2025-05-30 22:43:52 +09:00
yangdx
3b9c28fae9 Fix linting 2025-05-22 10:46:03 +08:00
Martin Perez-Guevara
3d418d95c5 feat: Integrate Opik for Enhanced Observability in LlamaIndex LLM Interactions
This pull request demonstrates how to create a new Opik project when using LiteLLM for LlamaIndex-based LLM calls. The primary goal is to enable detailed tracing, monitoring, and logging of LLM interactions under a new Opik project_name, particularly when using LiteLLM as an API proxy. This enhancement allows for better debugging, performance analysis, and observability when using LightRAG with LiteLLM and Opik.

**Motivation:**

As our application's reliance on Large Language Models (LLMs) grows, robust observability becomes crucial for maintaining system health, optimizing performance, and understanding usage patterns. Integrating Opik provides the following key benefits:

1.  **Improved Debugging:** Enables end-to-end tracing of requests through the LlamaIndex and LiteLLM layers, making it easier to identify and resolve issues or performance bottlenecks.
2.  **Comprehensive Performance Monitoring:** Allows for the collection of vital metrics such as LLM call latency, token usage, and error rates. This data can be filtered and analyzed within Opik using project names and tags.
3.  **Effective Cost Management:** Facilitates tracking of token consumption associated with specific requests or projects, leading to better cost control and optimization.
4.  **Deeper Usage Insights:** Provides a clearer understanding of how different components of the application or various projects are utilizing LLM capabilities.

These changes empower developers to seamlessly add observability to their LlamaIndex-based LLM workflows, especially when leveraging LiteLLM, by passing necessary Opik metadata.

**Changes Made:**

1.  **`lightrag/llm/llama_index_impl.py`:**
    *   Modified the `llama_index_complete_if_cache` function:
        *   The `**kwargs` parameter, which previously handled additional arguments, has been refined. A dedicated `chat_kwargs={}` parameter is now used to pass keyword arguments directly to the `model.achat()` method. This change ensures that vendor-specific parameters, such as LiteLLM's `litellm_params` for Opik metadata, are correctly propagated.
        *   The logic for retrieving `llm_instance` from `kwargs` was removed as `model` is now a direct parameter, simplifying the function.
    *   Updated the `llama_index_complete` function:
        *   Ensured that `**kwargs` (which may include `chat_kwargs` or other parameters intended for `llama_index_complete_if_cache`) are correctly passed down.

2.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_demo.py`:**
    *   This existing demo file was updated to align with the changes in `llama_index_impl.py`.
    *   The `llm_model_func` now passes an empty `chat_kwargs={}` by default to `llama_index_complete_if_cache` if no specific chat arguments are needed, maintaining compatibility with the updated function signature. This file serves as a baseline example without Opik integration.

3.  **`examples/unofficial-sample/lightrag_llamaindex_litellm_opik_demo.py` (New File):**
    *   A new example script has been added to specifically demonstrate the integration of LightRAG with LlamaIndex, LiteLLM, and Opik for observability.
    *   The `llm_model_func` in this demo showcases how to construct the `chat_kwargs` dictionary.
    *   It includes `litellm_params` with a `metadata` field for Opik, containing `project_name` and `tags`. This provides a clear example of how to send observability data to Opik.
    *   The call to `llama_index_complete_if_cache` within `llm_model_func` passes these `chat_kwargs`, ensuring Opik metadata is included in the LiteLLM request.

These modifications provide a more robust and extensible way to pass parameters to the underlying LLM calls, specifically enabling the integration of observability tools like Opik.

Co-authored-by: Martin Perez-Guevara <8766915+MartinPerez@users.noreply.github.com>
Co-authored-by: Young Jin Kim <157011356+jidodata-ykim@users.noreply.github.com>
2025-05-20 17:47:05 +02:00
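The chat_kwargs payload this PR describes reduces to a nested dict that rides through LiteLLM's litellm_params; the exact nesting and the values below are illustrative:

```python
# Sketch of the chat_kwargs shape described in the PR: Opik metadata is
# tunneled via LiteLLM's litellm_params. Values are examples.
chat_kwargs = {
    "litellm_params": {
        "metadata": {
            "project_name": "lightrag-opik-demo",
            "tags": ["lightrag", "litellm"],
        },
    },
}
# llm_model_func forwards this dict:
#   await llama_index_complete_if_cache(model, prompt, ..., chat_kwargs=chat_kwargs)
```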
yangdx
29be2aac71 Remove tenacity from dynamic import 2025-05-14 11:30:48 +08:00
yangdx
ac2b6af97e Eliminate tenacity from dynamic import 2025-05-14 10:57:05 +08:00
yangdx
0e26cbebd0 Fix linting 2025-05-14 01:14:45 +08:00
yangdx
b836d02cac Optimize Ollama LLM driver 2025-05-14 01:13:03 +08:00
yangdx
56f82bdcd5 Ensure OpenAI connection is closed after streaming response finished 2025-05-12 17:37:28 +08:00
yangdx
c2938a71a4 Fix streaming problem for OpenAI 2025-05-09 15:54:54 +08:00
Daniel.y
3597239768 Merge pull request #1548 from maharjun/use_openai_context_manager
Use Openai Client Context Manager
2025-05-09 14:33:48 +08:00
Arjun Rao
b7eae4d7c0 Use the context manager for the OpenAI client
This avoids resource-cleanup issues (too many open files) when dealing with massively parallel calls to the OpenAI API, since RAII in Python is highly unreliable in such contexts.
2025-05-08 11:42:53 +10:00
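A sketch of the pattern this commit adopts: the async context manager closes the client's connection pool deterministically instead of waiting for garbage collection. The model name is illustrative:

```python
# The async context manager releases the client's connections on exit.
from openai import AsyncOpenAI

async def complete(prompt: str) -> str:
    async with AsyncOpenAI() as client:
        resp = await client.chat.completions.create(
            model="gpt-4o-mini",  # model name is illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```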
Balaji Munusamy
c3bc0eb53b made bedrock complete generic 2025-05-02 18:25:48 +02:00
yangdx
34cc8b6a51 Fix linting 2025-04-29 17:52:07 +08:00
yangdx
f58c8276bc fix: correct retry_if_exception_type usage and improve async iterator resource management
- Corrects the syntax of retry_if_exception_type decorators to ensure proper exception handling and retry behavior
- Implements proper resource cleanup for async iterators to prevent memory leaks and potential SIGSEGV errors
2025-04-29 17:43:27 +08:00
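A sketch of the corrected tenacity usage this fix describes: retry_if_exception_type must be called with a tuple of exception classes, not passed bare. The retry policy values are illustrative:

```python
# Correct tenacity decorator syntax: the predicate is instantiated with a
# tuple of exception types. Attempt/wait values are illustrative.
from openai import APIConnectionError, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, max=10),
    retry=retry_if_exception_type((RateLimitError, APIConnectionError)),
)
async def call_llm(client, **kwargs):
    return await client.chat.completions.create(**kwargs)
```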
yangdx
99522a088d Fix Ollama embedding func return data type bugs 2025-04-21 00:01:25 +08:00
yangdx
39540f3f8b Fix linting 2025-04-20 14:33:33 +08:00
yangdx
5f2cd871a8 Update sample code and README 2025-04-20 14:33:16 +08:00
yangdx
a418b18ed1 Fix linting 2025-04-20 11:17:51 +08:00
Enoughappens
704ef16ce3 fix streaming "list index out of range" 2025-04-19 12:57:08 +08:00
yangdx
14b4bc96ce Fix OPENAI_API_BASE not working in .env 2025-04-17 05:20:22 +08:00
Qodi
8f3068f1c0 Update openai.py
Add an environment variable to support OPENAI_API_BASE, enabling relay/proxy endpoints
2025-04-10 12:10:35 +08:00
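A sketch of honoring OPENAI_API_BASE so traffic can be routed through a relay or proxy endpoint:

```python
# Sketch: route requests through OPENAI_API_BASE when set.
import os
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url=os.environ.get("OPENAI_API_BASE") or None)
# base_url=None falls back to the client's default https://api.openai.com/v1
```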
zrguo
e17e61f58e fix lint 2025-04-03 14:44:56 +08:00