---
title: "OpenAITextEmbedder"
id: openaitextembedder
slug: "/openaitextembedder"
description: "OpenAITextEmbedder transforms a string into a vector that captures its semantics using an OpenAI embedding model."
---

# OpenAITextEmbedder

OpenAITextEmbedder transforms a string into a vector that captures its semantics using an OpenAI embedding model.

When you perform embedding retrieval, you use this component to transform your query into a vector. Then, the embedding Retriever looks for similar or relevant documents.

| | |
| --- | --- |
| **Most common position in a pipeline** | Before an embedding [Retriever](../retrievers.mdx) in a query/RAG pipeline |
| **Mandatory init variables** | "api_key": An OpenAI API key. Can be set with the `OPENAI_API_KEY` env var. |
| **Mandatory run variables** | "text": A string |
| **Output variables** | "embedding": A list of float numbers <br /> <br />"meta": A dictionary of metadata |
| **API reference** | [Embedders](/reference/embedders-api) |
| **GitHub link** | https://github.com/deepset-ai/haystack/blob/main/haystack/components/embedders/openai_text_embedder.py |

## Overview

To see the list of compatible OpenAI embedding models, head over to the OpenAI [documentation](https://platform.openai.com/docs/guides/embeddings/embedding-models). The default model for `OpenAITextEmbedder` is `text-embedding-ada-002`. You can specify another model with the `model` parameter when initializing this component.
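
For example, here is a minimal sketch of selecting a different model at init time (the model name `text-embedding-3-small` is only an illustration; use any embedding model available to your OpenAI account):

```python
from haystack.components.embedders import OpenAITextEmbedder

# Override the default embedding model via the `model` init parameter
embedder = OpenAITextEmbedder(model="text-embedding-3-small")
```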

Use `OpenAITextEmbedder` to embed a simple string (such as a query) into a vector. To embed a list of documents, use [OpenAIDocumentEmbedder](/docs/openaidocumentembedder), which enriches each document with its computed embedding (also known as a vector).

The component uses the `OPENAI_API_KEY` environment variable by default. Otherwise, you can pass an API key at initialization with `api_key`:

```python
from haystack.utils import Secret
from haystack.components.embedders import OpenAITextEmbedder

embedder = OpenAITextEmbedder(api_key=Secret.from_token("<your-api-key>"))
```

## Usage

### On its own

Here is how you can use the component on its own:

```python
from haystack.utils import Secret
from haystack.components.embedders import OpenAITextEmbedder

text_to_embed = "I love pizza!"

text_embedder = OpenAITextEmbedder(api_key=Secret.from_token("<your-api-key>"))

print(text_embedder.run(text_to_embed))

## {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
## 'meta': {'model': 'text-embedding-ada-002-v2',
##          'usage': {'prompt_tokens': 4, 'total_tokens': 4}}}
```

:::note
We recommend setting `OPENAI_API_KEY` as an environment variable instead of passing it as an init parameter.
:::
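
If you follow this recommendation, you can create the component with no key argument at all. A minimal sketch, assuming `OPENAI_API_KEY` is set (the value below is only a placeholder for illustration):

```python
import os

from haystack.components.embedders import OpenAITextEmbedder

# In practice, export OPENAI_API_KEY in your shell or deployment environment;
# setting it in code here only keeps the example self-contained.
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

embedder = OpenAITextEmbedder()  # reads the key from the environment variable
```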

### In a pipeline

```python
from haystack import Document
from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import OpenAITextEmbedder, OpenAIDocumentEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [Document(content="My name is Wolfgang and I live in Berlin"),
             Document(content="I saw a black horse running"),
             Document(content="Germany has many big cities")]

document_embedder = OpenAIDocumentEmbedder()
documents_with_embeddings = document_embedder.run(documents)['documents']
document_store.write_documents(documents_with_embeddings)

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", OpenAITextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

query = "Who lives in Berlin?"

result = query_pipeline.run({"text_embedder": {"text": query}})

print(result['retriever']['documents'][0])

## Document(id=..., mimetype: 'text/plain',
##          text: 'My name is Wolfgang and I live in Berlin')
```