Mirror of https://github.com/deepset-ai/haystack.git, synced 2025-09-03 05:13:34 +00:00
Update README.md (#5554)
Some minor wording updates to reflect latest use cases and functionality
parent 714b944dc2 · commit c38943721f
README.md (10 changed lines)
@@ -9,13 +9,13 @@
| Meta |   |
</div>
[Haystack](https://haystack.deepset.ai/) is an end-to-end NLP framework that enables you to build NLP applications powered by LLMs, Transformer models, vector search and more. Whether you want to perform question answering, answer generation, semantic document search, or build tools that are capable of complex decision making and query resolution, you can use the state-of-the-art NLP models with Haystack to build end-to-end NLP applications solving your use case.
[Haystack](https://haystack.deepset.ai/) is an end-to-end NLP framework that enables you to build applications powered by LLMs, Transformer models, vector search and more. Whether you want to perform question answering, answer generation, or semantic document search, or to build tools that are capable of complex decision making and query resolution, you can use state-of-the-art NLP models with Haystack to build end-to-end NLP applications that solve your use case.
## Core Concepts
🏃♀️ **[Pipelines](https://docs.haystack.deepset.ai/docs/pipelines):** This is the standard Haystack structure that can connect to your data and perform on it NLP tasks that you define. The data in a Pipeline flows from one Node to the next. You define how Nodes interact with each other, and how one Node pushes data to the next.
🏃♀️ **[Pipelines](https://docs.haystack.deepset.ai/docs/pipelines):** This is the standard Haystack structure that builds on top of your data to perform various NLP tasks such as retrieval augmented generation, question answering and more. The data in a Pipeline flows from one Node to the next. You define how Nodes interact with each other, and how one Node pushes data to the next.
An example pipeline would consist of one `Retriever` Node and one `Reader` Node. When the pipeline runs with a query, the Retriever first retrieves the documents relevant to the query and then the Reader extracts the final answer.
An example pipeline would consist of one `Retriever` Node and one `PromptNode`. When the pipeline runs with a query, the Retriever first retrieves the context relevant to the query from your data, and then the PromptNode uses an LLM to generate the final answer.
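To make that flow concrete, here is a minimal sketch in Python, assuming the Haystack 1.x API; the toy documents, the `google/flan-t5-base` model and the prompt wording are illustrative placeholders rather than recommendations.

```python
# Minimal Retriever + PromptNode pipeline (sketch, assuming the Haystack 1.x API).
from haystack import Document, Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import AnswerParser, BM25Retriever, PromptNode, PromptTemplate

# A couple of toy documents; any of the supported DocumentStores could be used instead.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([
    Document(content="A Retriever fetches the documents most relevant to a query."),
    Document(content="A PromptNode sends a prompt plus retrieved context to an LLM."),
])

# The Retriever narrows your data down to relevant context...
retriever = BM25Retriever(document_store=document_store, top_k=2)

# ...and the PromptNode asks an LLM to generate the final answer from that context.
prompt_node = PromptNode(
    model_name_or_path="google/flan-t5-base",  # example model; hosted LLMs work too
    default_prompt_template=PromptTemplate(
        prompt="Answer the question using the documents.\n"
               "Documents: {join(documents)}\nQuestion: {query}\nAnswer:",
        output_parser=AnswerParser(),
    ),
)

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["Retriever"])

result = pipeline.run(query="What does a Retriever do?")
print(result["answers"][0].answer)
```

The same layout scales to longer pipelines: each `add_node` call names a Node and declares which Nodes feed into it.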
⚛️ **[Nodes](https://docs.haystack.deepset.ai/docs/nodes_overview):** Each Node achieves one thing, such as preprocessing documents, retrieving documents, or using language models to answer questions.
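Because each Node stands on its own, you can also run one outside of any pipeline. A small sketch with the `PreProcessor` node, where the repeated sentence merely stands in for a longer document:

```python
# Using a single Node on its own (sketch): a PreProcessor only cleans and splits documents.
from haystack import Document
from haystack.nodes import PreProcessor

preprocessor = PreProcessor(split_by="word", split_length=100, split_respect_sentence_boundary=False)
long_doc = Document(content="Haystack is an open source NLP framework. " * 60)
chunks = preprocessor.process([long_doc])
print(f"Split one document into {len(chunks)} smaller documents")
```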
@@ -27,8 +27,8 @@ An example pipeline would consist of one `Retriever` Node and one `Reader` Node.
## What to Build with Haystack
- Build **retrieval augmented generation (RAG)** by making use of one of the available vector databases and customizing your LLM interaction; the sky is the limit 🚀
- Perform Question Answering **in natural language** to find granular answers in your documents.
- **Generate answers or content**, such as articles, tweets, product descriptions and more, with the use of LLMs; the sky is the limit 🚀
- Perform **semantic search** and retrieve documents according to meaning (see the sketch after this list).
- Build applications that can do complex decision making to answer complex queries, such as systems that can resolve complex customer queries, do knowledge search on many disconnected resources and so on.
- Use **off-the-shelf models** or **fine-tune** them to your data.
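For the semantic search bullet above, here is a sketch using an `EmbeddingRetriever` inside the ready-made `DocumentSearchPipeline`, again assuming the Haystack 1.x API; the `sentence-transformers/all-MiniLM-L6-v2` model and the example texts are stand-ins.

```python
# Semantic document search (sketch): retrieve by meaning rather than by keywords.
from haystack import Document
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import EmbeddingRetriever
from haystack.pipelines import DocumentSearchPipeline

document_store = InMemoryDocumentStore(embedding_dim=384, similarity="cosine")  # matches the model below
document_store.write_documents([
    Document(content="Our refund policy allows returns within 30 days."),
    Document(content="Standard shipping takes three to five business days."),
])

retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",  # example embedding model
)
document_store.update_embeddings(retriever)  # pre-compute document embeddings

search = DocumentSearchPipeline(retriever)
result = search.run(query="Can I send an item back?", params={"Retriever": {"top_k": 1}})
print(result["documents"][0].content)  # expected to surface the refund policy document
```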
@@ -36,7 +36,7 @@ An example pipeline would consist of one `Retriever` Node and one `Reader` Node.
## Features
- **Latest models**: Haystack allows you to use and compare models available from OpenAI, Cohere and Hugging Face, as well as your own local models. Use the latest LLMs or Transformer-based models (for example: BERT, RoBERTa, MiniLM).
- **Latest models**: Haystack allows you to use and compare models available from OpenAI, Cohere and Hugging Face, as well as your own local models or models hosted on SageMaker. Use the latest LLMs or Transformer-based models (for example: BERT, RoBERTa, MiniLM); a short model-selection sketch follows this list.
- **Modular**: Multiple choices to fit your tech stack and use case. A wide choice of DocumentStores to store your data, file conversion tools and more.
- **Open**: Integrated with Hugging Face's model hub, OpenAI, Cohere and various Azure services.
- **Scalable**: Scale to millions of docs using retrievers and production-scale components like Elasticsearch and a FastAPI REST API.
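As an illustration of the model flexibility mentioned in the Latest models item above, the same `PromptNode` interface can point at a local Hugging Face model or a hosted one such as an OpenAI model; this is a sketch, the model names are examples, and the OpenAI variant assumes an `OPENAI_API_KEY` environment variable.

```python
# Swapping the model behind a PromptNode (sketch): only the model name and credentials change.
import os
from haystack.nodes import PromptNode

# Local Hugging Face model: downloaded and run on your own machine.
local_node = PromptNode(model_name_or_path="google/flan-t5-base")

# Hosted model, e.g. via OpenAI: same interface, different backend.
openai_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key=os.environ["OPENAI_API_KEY"],  # assumes your key is exported in the environment
)

# PromptNode instances are callable, so either one can complete a prompt directly.
print(local_node("Summarize in one sentence: Haystack connects Nodes into NLP pipelines."))
```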