
| | |
| ------- | ------- |
| CI/CD | [Tests](https://github.com/deepset-ai/haystack/actions/workflows/tests.yml) [Types: mypy](https://github.com/python/mypy) [Coverage](https://coveralls.io/github/deepset-ai/haystack?branch=main) [Code style: Ruff](https://github.com/astral-sh/ruff) |
| Docs | [Documentation](https://docs.haystack.deepset.ai) |
| Package | [PyPI](https://pypi.org/project/haystack-ai/) [conda-forge](https://anaconda.org/conda-forge/haystack-ai) [License](LICENSE) [License Compliance](https://github.com/deepset-ai/haystack/actions/workflows/license_compliance.yml) |
| Meta | [Discord](https://discord.com/invite/xYvH6drSmA) [Twitter](https://twitter.com/haystack_ai) |
[Haystack](https://haystack.deepset.ai/) is an end-to-end LLM framework that allows you to build applications powered by
LLMs, Transformer models, vector search and more. Whether you want to perform retrieval-augmented generation (RAG),
document search, question answering or answer generation, Haystack can orchestrate state-of-the-art embedding models
and LLMs into pipelines to build end-to-end NLP applications and solve your use case.
## Installation
The simplest way to get Haystack is via pip:
```sh
pip install haystack-ai
```
Install from the `main` branch to try the newest features:
```sh
pip install git+https://github.com/deepset-ai/haystack.git@main
```
Haystack supports multiple installation methods including Docker images. For a comprehensive guide please refer
to the [documentation](https://docs.haystack.deepset.ai/docs/installation).
## Documentation
If you're new to the project, check out ["What is Haystack?"](https://haystack.deepset.ai/overview/intro) then go
through the ["Get Started Guide"](https://haystack.deepset.ai/overview/quick-start) and build your first LLM application
in a matter of minutes. Keep learning with the [tutorials](https://haystack.deepset.ai/tutorials). For more advanced
use cases, or just to get some inspiration, you can browse our Haystack recipes in the
[Cookbook](https://haystack.deepset.ai/cookbook).
At any point, hit the [documentation](https://docs.haystack.deepset.ai/docs/intro) to learn more about Haystack, what it can do for you, and the technology behind it.
## Features
> [!IMPORTANT]
> **You are currently looking at the readme of Haystack 2.0**. We are still maintaining Haystack 1.x to give everyone
> enough time to migrate to 2.0. [Switch to Haystack 1.x here](https://github.com/deepset-ai/haystack/tree/v1.x).
- **Technology agnostic:** Allow users the flexibility to decide what vendor or technology they want and make it easy to switch out any component for another. Haystack allows you to use and compare models available from OpenAI, Cohere and Hugging Face, as well as your own local models or models hosted on Azure, Bedrock and SageMaker.
- **Explicit:** Make it transparent how different moving parts can "talk" to each other so it's easier to fit your tech stack and use case.
- **Flexible:** Haystack provides all tooling in one place: database access, file conversion, cleaning, splitting, training, evaluation, inference, and more. And whenever custom behavior is needed, it's easy to create custom components.
- **Extensible:** Provide a uniform and easy way for the community and third parties to build their own components and foster an open ecosystem around Haystack.
Some examples of what you can do with Haystack:
- Build **retrieval augmented generation (RAG)** using any of the available vector databases and customizing your LLM interaction; the sky is the limit.
- Perform Question Answering **in natural language** to find granular answers in your documents.
- Perform **semantic search** and retrieve documents according to meaning.
- Build applications that make complex decisions to answer complex queries, such as systems that resolve customer issues or search knowledge spread across many disconnected resources.
- Scale to millions of docs using retrievers and production-scale components.
- Use **off-the-shelf models** or **fine-tune** them to your data.
- Use **user feedback** to evaluate, benchmark, and continuously improve your models.