---
title: "MistralChatGenerator"
id: mistralchatgenerator
slug: "/mistralchatgenerator"
description: "This component enables chat completion using Mistrals text generation models."
---
# MistralChatGenerator
This component enables chat completion using Mistral's text generation models.
| | |
| --- | --- |
| **Most common position in a pipeline** | After a [ChatPromptBuilder](../builders/chatpromptbuilder.mdx) |
| **Mandatory init variables** | "api_key": The Mistral API key. Can be set with `MISTRAL_API_KEY` env var. |
| **Mandatory run variables** | "messages": A list of [`ChatMessage`](/docs/data-classes#chatmessage) objects |
| **Output variables** | "replies": A list of [`ChatMessage`](/docs/data-classes#chatmessage) objects <br /> <br />"meta": A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on |
| **API reference** | [Mistral](/reference/integrations-mistral) |
| **GitHub link** | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/mistral |
## Overview
This integration supports Mistral's models provided through the generative endpoint. For a full list of available models, check out the [Mistral documentation](https://docs.mistral.ai/platform/endpoints/#generative-endpoints).
`MistralChatGenerator` needs a Mistral API key to work. You can provide this key in:
- The `api_key` init parameter using [Secret API](/docs/secret-management)
- The `MISTRAL_API_KEY` environment variable (recommended)
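For example, here is a minimal sketch of both options (the token value below is a placeholder):
```python
from haystack.utils import Secret
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Recommended: read the key from the MISTRAL_API_KEY environment variable at runtime
generator = MistralChatGenerator(api_key=Secret.from_env_var("MISTRAL_API_KEY"))

# Alternative: pass the key directly; avoid this outside quick local experiments
# generator = MistralChatGenerator(api_key=Secret.from_token("<your-mistral-api-key>"))
```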
Currently, the available models are:
- `mistral-tiny` (default)
- `mistral-small`
- `mistral-medium` (soon to be deprecated)
- `mistral-large-latest`
- `codestral-latest`
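You can select any of these models with the `model` init parameter. A minimal sketch:
```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Defaults to mistral-tiny if no model is specified
generator = MistralChatGenerator(model="mistral-large-latest")
```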
This component needs a list of [`ChatMessage`](/docs/data-classes#chatmessage) objects to operate. `ChatMessage` is a data class that contains a message, a role (who generated the message, such as `user`, `assistant`, `system`, `function`), and optional metadata.
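For instance, a short sketch of building a chat history with different roles:
```python
from haystack.dataclasses import ChatMessage

messages = [
    ChatMessage.from_system("You are a concise technical assistant."),
    ChatMessage.from_user("What is retrieval-augmented generation?"),
]
```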
Refer to the [Mistral API documentation](https://docs.mistral.ai/api/#operation/createChatCompletion) for more details on the parameters supported by the Mistral API, which you can provide with `generation_kwargs` when running the component.
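For example, a minimal sketch passing a couple of common parameters at run time (assuming `MISTRAL_API_KEY` is set; the exact parameter set is defined by the Mistral API):
```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.mistral import MistralChatGenerator

generator = MistralChatGenerator()
result = generator.run(
    [ChatMessage.from_user("Summarize what an LLM is in one sentence.")],
    generation_kwargs={"temperature": 0.2, "max_tokens": 64},
)
print(result["replies"][0].text)
```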
### Streaming
This Generator supports [streaming](/docs/choosing-the-right-generator#streaming-support) the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` init parameter.
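The callback receives a `StreamingChunk` for each piece of the response. Here is a minimal sketch of a custom callback, in place of the ready-made `print_streaming_chunk` utility:
```python
from haystack.dataclasses import StreamingChunk
from haystack_integrations.components.generators.mistral import MistralChatGenerator

def my_streaming_callback(chunk: StreamingChunk) -> None:
    # Print each chunk's text as it arrives, without waiting for the full reply
    print(chunk.content, end="", flush=True)

generator = MistralChatGenerator(streaming_callback=my_streaming_callback)
```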
## Usage
Install the `mistral-haystack` package to use the `MistralChatGenerator`:
```shell
pip install mistral-haystack
```
### On its own
```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
generator = MistralChatGenerator(api_key=Secret.from_env_var("MISTRAL_API_KEY"), streaming_callback=print_streaming_chunk)
message = ChatMessage.from_user("What's Natural Language Processing? Be brief.")
print(generator.run([message]))
```
### In a Pipeline
Below is an example RAG pipeline that answers questions based on the contents of a URL. We add the fetched contents to the `messages` in the `ChatPromptBuilder` and generate an answer with the `MistralChatGenerator`.
```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.utils import print_streaming_chunk
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.mistral import MistralChatGenerator
fetcher = LinkContentFetcher()
converter = HTMLToDocument()
prompt_builder = ChatPromptBuilder(variables=["documents"])
llm = MistralChatGenerator(streaming_callback=print_streaming_chunk, model="mistral-small")
message_template = """Answer the following question based on the contents of the article: {{query}}\n
Article: {{documents[0].content}} \n
"""
messages = [ChatMessage.from_user(message_template)]
rag_pipeline = Pipeline()
rag_pipeline.add_component(name="fetcher", instance=fetcher)
rag_pipeline.add_component(name="converter", instance=converter)
rag_pipeline.add_component("prompt_builder", prompt_builder)
rag_pipeline.add_component("llm", llm)
rag_pipeline.connect("fetcher.streams", "converter.sources")
rag_pipeline.connect("converter.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.messages")
question = "What are the capabilities of Mixtral?"
result = rag_pipeline.run(
    {
        "fetcher": {"urls": ["https://mistral.ai/news/mixtral-of-experts"]},
        "prompt_builder": {"template_variables": {"query": question}, "template": messages},
        "llm": {"generation_kwargs": {"max_tokens": 165}},
    },
)
```
## Additional References
🧑‍🍳 Cookbook: [Web QA with Mixtral-8x7B-Instruct-v0.1](https://haystack.deepset.ai/cookbook/mixtral-8x7b-for-web-qa)