mirror of
https://github.com/deepset-ai/haystack.git
synced 2025-07-22 16:31:16 +00:00

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Generative QA with \"Retrieval-Augmented Generation\"\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial7_RAG_Generator.ipynb)\n",
"\n",
"While extractive QA highlights the span of text that answers a query,\n",
"generative QA can return a novel text answer that it has composed.\n",
"In this tutorial, you will learn how to set up a generative system using the\n",
"[RAG model](https://arxiv.org/abs/2005.11401), which conditions the\n",
"answer generator on a set of retrieved documents."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"### Prepare environment\n",
"\n",
"#### Colab: Enable the GPU runtime\n",
"Make sure you enable the GPU runtime to experience decent speed in this tutorial.\n",
"**Runtime -> Change Runtime type -> Hardware accelerator -> GPU**\n",
"\n",
"<img src=\"https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg\">"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Make sure you have a GPU running\n",
"!nvidia-smi"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Here are the packages and imports that we'll need:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Install the latest release of Haystack in your own environment\n",
"#! pip install farm-haystack\n",
"\n",
"# Install the latest master of Haystack\n",
"!pip install --upgrade pip\n",
"!pip install git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab,faiss]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from typing import List\n",
"import requests\n",
"import pandas as pd\n",
"from haystack import Document\n",
"from haystack.document_stores import FAISSDocumentStore\n",
"from haystack.nodes import RAGenerator, DensePassageRetriever"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Let's download a CSV file containing some sample text and preprocess the data.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Download sample\n",
"temp = requests.get(\n",
"    \"https://raw.githubusercontent.com/deepset-ai/haystack/master/tutorials/small_generator_dataset.csv\"\n",
")\n",
"with open(\"small_generator_dataset.csv\", \"wb\") as f:\n",
"    f.write(temp.content)\n",
"\n",
"# Create dataframe with columns \"title\" and \"text\"\n",
"df = pd.read_csv(\"small_generator_dataset.csv\", sep=\",\")\n",
"# Minimal cleaning\n",
"df.fillna(value=\"\", inplace=True)\n",
"\n",
"print(df.head())"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"We can cast our data into Haystack Document objects.\n",
"Alternatively, we can also just use dictionaries with \"content\" and \"meta\" fields."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Use data to initialize Document objects\n",
"titles = list(df[\"title\"].values)\n",
"texts = list(df[\"text\"].values)\n",
"documents: List[Document] = []\n",
"for title, text in zip(titles, texts):\n",
"    documents.append(Document(content=text, meta={\"name\": title or \"\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Here we initialize the FAISSDocumentStore, DensePassageRetriever and RAGenerator.\n",
"FAISS is chosen here because it is an optimized vector store for fast similarity search."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Initialize FAISS document store.\n",
"# Set `return_embedding` to `True`, so the generator doesn't have to perform re-embedding\n",
"document_store = FAISSDocumentStore(faiss_index_factory_str=\"Flat\", return_embedding=True)\n",
"\n",
"# Initialize DPR Retriever to encode documents and questions, and to query documents\n",
"retriever = DensePassageRetriever(\n",
"    document_store=document_store,\n",
"    query_embedding_model=\"facebook/dpr-question_encoder-single-nq-base\",\n",
"    passage_embedding_model=\"facebook/dpr-ctx_encoder-single-nq-base\",\n",
"    use_gpu=True,\n",
"    embed_title=True,\n",
")\n",
"\n",
"# Initialize RAG Generator\n",
"generator = RAGenerator(\n",
"    model_name_or_path=\"facebook/rag-token-nq\",\n",
"    use_gpu=True,\n",
"    top_k=1,\n",
"    max_length=200,\n",
"    min_length=2,\n",
"    embed_title=True,\n",
"    num_beams=2,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"We write documents to the DocumentStore, first deleting any existing documents and then calling `write_documents()`.\n",
"The `update_embeddings()` method uses the retriever to create an embedding for each document.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Delete existing documents in document store\n",
"document_store.delete_documents()\n",
"\n",
"# Write documents to document store\n",
"document_store.write_documents(documents)\n",
"\n",
"# Add document embeddings to index\n",
"document_store.update_embeddings(retriever=retriever)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Here are our questions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"QUESTIONS = [\n",
"    \"who got the first nobel prize in physics\",\n",
"    \"when is the next deadpool movie being released\",\n",
"    \"which mode is used for short wave broadcast service\",\n",
"    \"who is the owner of reading football club\",\n",
"    \"when is the next scandal episode coming out\",\n",
"    \"when is the last time the philadelphia won the superbowl\",\n",
"    \"what is the most current adobe flash player version\",\n",
"    \"how many episodes are there in dragon ball z\",\n",
"    \"what is the first step in the evolution of the eye\",\n",
"    \"where is gall bladder situated in human body\",\n",
"    \"what is the main mineral in lithium batteries\",\n",
"    \"who is the president of usa right now\",\n",
"    \"where do the greasers live in the outsiders\",\n",
"    \"panda is a national animal of which country\",\n",
"    \"what is the name of manchester united stadium\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Now let's run our system!\n",
"The retriever will pick out a small subset of documents that it finds relevant.\n",
"These are used to condition the generator as it generates the answer.\n",
"What it should return then are novel text spans that form an answer to your question!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Use the GenerativeQAPipeline class to combine the retriever and generator\n",
"from haystack.pipelines import GenerativeQAPipeline\n",
"from haystack.utils import print_answers\n",
"\n",
"pipe = GenerativeQAPipeline(generator=generator, retriever=retriever)\n",
"for question in QUESTIONS:\n",
"    res = pipe.run(query=question, params={\"Generator\": {\"top_k\": 1}, \"Retriever\": {\"top_k\": 5}})\n",
"    print_answers(res, details=\"minimum\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## About us\n",
"\n",
"This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany\n",
"\n",
"We bring NLP to the industry via open source!  \n",
"Our focus: Industry specific language models & large scale QA systems.  \n",
"  \n",
"Some of our other work:  \n",
"- [German BERT](https://deepset.ai/german-bert)\n",
"- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)\n",
"- [FARM](https://github.com/deepset-ai/FARM)\n",
"\n",
"Get in touch:\n",
"[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)\n",
"\n",
"By the way: [we're hiring!](https://www.deepset.ai/jobs)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}