---
title: "SentenceTransformersDocumentEmbedder"
id: sentencetransformersdocumentembedder
slug: "/sentencetransformersdocumentembedder"
description: "SentenceTransformersDocumentEmbedder computes the embeddings of a list of documents and stores the obtained vectors in the embedding field of each document. It uses embedding models compatible with the Sentence Transformers library."
---
# SentenceTransformersDocumentEmbedder
SentenceTransformersDocumentEmbedder computes the embeddings of a list of documents and stores the obtained vectors in the embedding field of each document. It uses embedding models compatible with the Sentence Transformers library.
The vectors computed by this component are necessary to perform embedding retrieval on a collection of documents. At retrieval time, the vector that represents the query is compared with those of the documents to find the most similar or relevant documents.
| | |
| :------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------- |
| **Most common position in a pipeline** | Before a [`DocumentWriter`](../writers/documentwriter.mdx) in an indexing pipeline |
| **Mandatory run variables** | "documents": A list of documents |
| **Output variables** | "documents": A list of documents (enriched with embeddings) |
| **API reference** | [Embedders](/reference/embedders-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/embedders/sentence_transformers_document_embedder.py |
## Overview
`SentenceTransformersDocumentEmbedder` should be used to embed a list of documents. To embed a string, use the [SentenceTransformersTextEmbedder](/docs/sentencetransformerstextembedder).
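To make the distinction concrete, here is a minimal sketch of the two components side by side; the texts below are purely illustrative:
```python
from haystack import Document
from haystack.components.embedders import (
    SentenceTransformersDocumentEmbedder,
    SentenceTransformersTextEmbedder,
)

# Documents go through the document embedder...
doc_embedder = SentenceTransformersDocumentEmbedder()
doc_embedder.warm_up()
docs = doc_embedder.run(documents=[Document(content="Berlin is the capital of Germany.")])["documents"]

# ...while a plain query string goes through the text embedder.
text_embedder = SentenceTransformersTextEmbedder()
text_embedder.warm_up()
query_embedding = text_embedder.run(text="What is the capital of Germany?")["embedding"]
```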
### Authentication
Authentication with a Hugging Face API token is only required to access private or gated models through the Serverless Inference API or Inference Endpoints.
The component reads the token from the `HF_API_TOKEN` or `HF_TOKEN` environment variable, or you can pass a Hugging Face API token at initialization. See our [Secret Management](doc:secret-management) page for more information.
```python
from haystack.utils import Secret
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

document_embedder = SentenceTransformersDocumentEmbedder(token=Secret.from_token("<your-api-key>"))
```
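Alternatively, if you keep the token in one of the environment variables mentioned above, you can spell out the equivalent behavior explicitly with `Secret.from_env_var`. This is a sketch of the default behavior, not something you need to configure:
```python
from haystack.utils import Secret
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

# Equivalent to the default: the token is read from HF_API_TOKEN or HF_TOKEN if set
document_embedder = SentenceTransformersDocumentEmbedder(
    token=Secret.from_env_var(["HF_API_TOKEN", "HF_TOKEN"], strict=False)
)
```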
### Compatible Models
The default embedding model is [`sentence-transformers/all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). You can specify another model with the `model` parameter when initializing this component.
See the original models in the Sentence Transformers [documentation](https://www.sbert.net/docs/pretrained_models.html).
Most of the models on the [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) are compatible with Sentence Transformers.
You can look for compatibility in the model card: [an example related to BGE models](https://huggingface.co/BAAI/bge-large-en-v1.5#using-sentence-transformers).
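For example, to switch to a different Sentence Transformers-compatible model, pass its Hugging Face model ID to the `model` parameter; the model name below is just one possible choice:
```python
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

# Any Sentence Transformers-compatible model ID from the Hugging Face Hub can be used here
embedder = SentenceTransformersDocumentEmbedder(model="BAAI/bge-large-en-v1.5")
embedder.warm_up()
```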
### Instructions
Some recent models on the MTEB Leaderboard require prepending an instruction to the text to perform better at retrieval.
For example, if you use [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2), you should prefix your documents with the instruction “passage:”.
This is how it works with `SentenceTransformersDocumentEmbedder`:
```python
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

embedder = SentenceTransformersDocumentEmbedder(model="intfloat/e5-large-v2",
                                                prefix="passage:")
```
### Embedding Metadata
Text documents often come with a set of metadata. If the metadata fields are distinctive and semantically meaningful, you can embed them along with the document's text to improve retrieval.
You can do this with the Document Embedder's `meta_fields_to_embed` parameter:
```python
from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

doc = Document(content="some text",
               meta={"title": "relevant title",
                     "page number": 18})

embedder = SentenceTransformersDocumentEmbedder(meta_fields_to_embed=["title"])
embedder.warm_up()

docs_w_embeddings = embedder.run(documents=[doc])["documents"]
```
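Under the hood, the values of the selected metadata fields are joined with the document's content before being sent to the model; the stored `content` itself is not changed. The component also exposes an `embedding_separator` parameter to control the join string; the sketch below assumes that parameter is available in your Haystack version (check the API reference to confirm):
```python
from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

doc = Document(content="some text", meta={"title": "relevant title"})

# embedding_separator controls how metadata values and content are joined for embedding
embedder = SentenceTransformersDocumentEmbedder(
    meta_fields_to_embed=["title"],
    embedding_separator="\n",
)
embedder.warm_up()
docs_w_embeddings = embedder.run(documents=[doc])["documents"]
```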
## Usage
### On its own
```python
from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
doc = Document(content="I love pizza!")
doc_embedder = SentenceTransformersDocumentEmbedder()
doc_embedder.warm_up()
result = doc_embedder.run([doc])
print(result['documents'][0].embedding)
## [-0.07804739475250244, 0.1498992145061493, ...]
```
### In a pipeline
```python
from haystack import Document
from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder
from haystack.components.writers import DocumentWriter
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [Document(content="My name is Wolfgang and I live in Berlin"),
             Document(content="I saw a black horse running"),
             Document(content="Germany has many big cities")]

indexing_pipeline = Pipeline()
indexing_pipeline.add_component("embedder", SentenceTransformersDocumentEmbedder())
indexing_pipeline.add_component("writer", DocumentWriter(document_store=document_store))
indexing_pipeline.connect("embedder", "writer")

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", SentenceTransformersTextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

query = "Who lives in Berlin?"

indexing_pipeline.run({"documents": documents})
result = query_pipeline.run({"text_embedder": {"text": query}})

print(result['retriever']['documents'][0])
## Document(id=..., mimetype: 'text/plain',
## text: 'My name is Wolfgang and I live in Berlin')
```