---
title: "OpenRouterChatGenerator"
id: openrouterchatgenerator
slug: "/openrouterchatgenerator"
description: "This component enables chat completion with any model hosted on [OpenRouter](https://openrouter.ai/)."
---

# OpenRouterChatGenerator
This component enables chat completion with any model hosted on [OpenRouter](https://openrouter.ai/).

| | |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| **Most common position in a pipeline** | After a [ChatPromptBuilder](../builders/chatpromptbuilder.mdx) |
| **Mandatory init variables** | "api_key": An OpenRouter API key. Can be set with the `OPENROUTER_API_KEY` env variable or passed to `init()`. |
| **Mandatory run variables** | "messages": A list of [ChatMessage](doc:chatmessage) objects |
| **Output variables** | "replies": A list of [ChatMessage](doc:chatmessage) objects |
| **API reference** | [OpenRouter](/reference/integrations-openrouter) |
| **GitHub link** | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/openrouter |

## Overview
The `OpenRouterChatGenerator` enables you to use models from multiple providers (such as `openai/gpt-4o`, `anthropic/claude-3.5-sonnet`, and others) by making chat completion calls to the [OpenRouter API](https://openrouter.ai/docs/quickstart).
This generator also supports OpenRouter-specific features such as:
- Provider routing and model fallback, configurable through the `generation_kwargs` parameter at initialization or at runtime (see the sketch after this list).
- Custom HTTP headers, supplied through the `extra_headers` parameter.
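
For example, a minimal sketch of both options. The `models` fallback list and the `provider` routing object follow the OpenRouter request schema; the header values here are illustrative placeholders:

```python
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

client = OpenRouterChatGenerator(
    model="openai/gpt-4o-mini",
    # Routing options are forwarded to OpenRouter via generation_kwargs:
    # "models" lists fallbacks tried in order if the primary model is unavailable,
    # "provider" carries provider routing preferences.
    generation_kwargs={
        "models": ["anthropic/claude-3.5-sonnet", "mistralai/mistral-large"],
        "provider": {"sort": "throughput"},
    },
    # Optional attribution headers defined by OpenRouter (placeholder values).
    extra_headers={"HTTP-Referer": "https://example.com", "X-Title": "My Haystack App"},
)
```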
This component uses the same `ChatMessage` format as other Haystack Chat Generators for structured input and output. For more information, see the [ChatMessage documentation](doc:chatmessage).
It is also fully compatible with Haystack [Tools](../../tools/tool.mdx) and [Toolsets](../../tools/toolset.mdx), which enable function calling with supported models, as sketched below.
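
A short, hedged sketch of tool calling (the weather function and its JSON schema are invented for illustration; `Tool` and the `tools` parameter are standard Haystack interfaces):

```python
from haystack.dataclasses import ChatMessage
from haystack.tools import Tool
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

# A toy function the model may call as a tool.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

weather_tool = Tool(
    name="get_weather",
    description="Return the current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    function=get_weather,
)

client = OpenRouterChatGenerator(model="openai/gpt-4o-mini", tools=[weather_tool])
response = client.run([ChatMessage.from_user("What is the weather in Berlin?")])

# With a tool-capable model, the reply may contain tool calls instead of text.
print(response["replies"][0].tool_calls)
```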
### Initialization
To use this integration, you must have an active OpenRouter subscription with sufficient credits and an API key. You can provide it with the `OPENROUTER_API_KEY` environment variable or by using a [Secret](/docs/secret-management).
Then, install the `openrouter-haystack` integration:
```shell
pip install openrouter-haystack
```
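
With the package installed, you can also pass the key explicitly instead of relying on the implicit environment variable lookup; a minimal sketch using Haystack's `Secret`:

```python
from haystack.utils import Secret
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

# Resolves OPENROUTER_API_KEY from the environment when the component needs it.
client = OpenRouterChatGenerator(api_key=Secret.from_env_var("OPENROUTER_API_KEY"))
```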
### Streaming
`OpenRouterChatGenerator` supports [streaming](/docs/choosing-the-right-generator#streaming-support) responses from the LLM, allowing tokens to be emitted as they are generated. To enable streaming, pass a callable to the `streaming_callback` parameter during initialization.
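
For instance, Haystack's built-in `print_streaming_chunk` utility can serve as the callback, printing tokens to stdout as they arrive:

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

# Each StreamingChunk is printed as soon as the model emits it.
client = OpenRouterChatGenerator(streaming_callback=print_streaming_chunk)
```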
## Usage
### On its own
```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

client = OpenRouterChatGenerator()
response = client.run(
    [ChatMessage.from_user("What are Agentic Pipelines? Be brief.")]
)
print(response["replies"][0].text)
```
With streaming and model routing:
```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

client = OpenRouterChatGenerator(
    model="openrouter/auto",
    streaming_callback=lambda chunk: print(chunk.content, end="", flush=True),
)

response = client.run(
    [ChatMessage.from_user("What are Agentic Pipelines? Be brief.")]
)

# Check which model actually served the response
print("\n\nModel used:", response["replies"][0].meta["model"])
```
### In a pipeline
```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.openrouter import OpenRouterChatGenerator

prompt_builder = ChatPromptBuilder()
llm = OpenRouterChatGenerator(model="openai/gpt-4o-mini")

pipe = Pipeline()
pipe.add_component("builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("builder.prompt", "llm.messages")

messages = [
    ChatMessage.from_system("Give brief answers."),
    ChatMessage.from_user("Tell me about {{city}}")
]

response = pipe.run(
    data={"builder": {"template": messages,
                      "template_variables": {"city": "Berlin"}}}
)
print(response)
```