haystack/tutorials/Tutorial8_Preprocessing.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"# Preprocessing\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial8_Preprocessing.ipynb)\n",
"\n",
"Haystack includes a suite of tools to extract text from different file types, normalize white space\n",
"and split text into smaller pieces to optimize retrieval.\n",
"These data preprocessing steps can have a big impact on the systems performance and effective handling of data is key to getting the most out of Haystack."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Ultimately, Haystack expects data to be provided as a list documents in the following dictionary format:\n",
"``` python\n",
"docs = [\n",
" {\n",
" 'content': DOCUMENT_TEXT_HERE,\n",
" 'meta': {'name': DOCUMENT_NAME, ...}\n",
" }, ...\n",
"]\n",
"```"
]
},
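{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"If your text already lives in Python, you can build this structure directly without any file converters.\n",
"The next cell is a tiny hand-made sketch of the format (the content and name are made up for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# A tiny hand-made example of the expected document format\n",
"docs_manual = [\n",
"    {\n",
"        \"content\": \"A very short example document about preprocessing in Haystack.\",\n",
"        \"meta\": {\"name\": \"example_doc\"}\n",
"    }\n",
"]"
]
},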
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"This tutorial will show you all the tools that Haystack provides to help you cast your data into this format."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Install the latest release of Haystack in your own environment \n",
"#! pip install farm-haystack\n",
"\n",
"# Install the latest master of Haystack\n",
"!pip install --upgrade pip\n",
"!pip install git+https://github.com/deepset-ai/haystack.git#egg=farm-haystack[colab,ocr]\n",
"\n",
"!wget --no-check-certificate https://dl.xpdfreader.com/xpdf-tools-linux-4.03.tar.gz\n",
"!tar -xvf xpdf-tools-linux-4.03.tar.gz && sudo cp xpdf-tools-linux-4.03/bin64/pdftotext /usr/local/bin"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"01/06/2021 14:49:14 - INFO - faiss - Loading faiss with AVX2 support.\n",
"01/06/2021 14:49:14 - INFO - faiss - Loading faiss.\n"
]
}
],
"source": [
"# Here are the imports we need\n",
"from haystack.nodes import TextConverter, PDFToTextConverter, DocxToTextConverter, PreProcessor\n",
"from haystack.utils import convert_files_to_dicts, fetch_archive_from_http"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"01/05/2021 12:02:30 - INFO - haystack.preprocessor.utils - Fetching from https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/preprocessing_tutorial.zip to `data/preprocessing_tutorial`\n",
"100%|██████████| 595119/595119 [00:00<00:00, 5299765.39B/s]\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 29,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This fetches some sample files to work with\n",
"\n",
"doc_dir = \"data/preprocessing_tutorial\"\n",
"s3_url = \"https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/preprocessing_tutorial.zip\"\n",
"fetch_archive_from_http(url=s3_url, output_dir=doc_dir)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Converters\n",
"\n",
"Haystack's converter classes are designed to help you turn files on your computer into the documents\n",
"that can be processed by the Haystack pipeline.\n",
"There are file converters for txt, pdf, docx files as well as a converter that is powered by Apache Tika.\n",
"The parameter `valid_langugages` does not convert files to the target language, but checks if the conversion worked as expected.\n",
"For converting PDFs, try changing the encoding to UTF-8 if the conversion isn't great."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Here are some examples of how you would use file converters\n",
"\n",
"converter = TextConverter(remove_numeric_tables=True, valid_languages=[\"en\"])\n",
"doc_txt = converter.convert(file_path=\"data/preprocessing_tutorial/classics.txt\", meta=None)[0]\n",
"\n",
"converter = PDFToTextConverter(remove_numeric_tables=True, valid_languages=[\"en\"])\n",
"doc_pdf = converter.convert(file_path=\"data/preprocessing_tutorial/bert.pdf\", meta=None)[0]\n",
"\n",
"converter = DocxToTextConverter(remove_numeric_tables=False, valid_languages=[\"en\"])\n",
"doc_docx = converter.convert(file_path=\"data/preprocessing_tutorial/heavy_metal.docx\", meta=None)[0]\n"
]
},
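{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"For file formats beyond txt, pdf and docx there is also the Tika-based converter mentioned above.\n",
"The next cell is only a sketch and is left commented out: it assumes a running Apache Tika server at the default address (for example started via `docker run -p 9998:9998 apache/tika`), which this tutorial does not set up."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Optional sketch: the Tika-based converter handles many more file formats.\n",
"# It requires a running Apache Tika server (by default at http://localhost:9998/tika),\n",
"# so the lines below are commented out.\n",
"\n",
"# from haystack.nodes import TikaConverter\n",
"# converter = TikaConverter(remove_numeric_tables=True, valid_languages=[\"en\"])\n",
"# doc_tika = converter.convert(file_path=\"data/preprocessing_tutorial/bert.pdf\", meta=None)[0]"
]
},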
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"01/06/2021 14:51:06 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/heavy_metal.docx\n",
"01/06/2021 14:51:06 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/bert.pdf\n",
"01/06/2021 14:51:07 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/classics.txt\n"
]
}
],
"source": [
"# Haystack also has a convenience function that will automatically apply the right converter to each file in a directory.\n",
"\n",
"all_docs = convert_files_to_dicts(dir_path=\"data/preprocessing_tutorial\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## PreProcessor\n",
"\n",
"The PreProcessor class is designed to help you clean text and split text into sensible units.\n",
"File splitting can have a very significant impact on the system's performance and is absolutely mandatory for Dense Passage Retrieval models.\n",
"In general, we recommend you split the text from your files into small documents of around 100 words for dense retrieval methods\n",
"and no more than 10,000 words for sparse methods.\n",
"Have a look at the [Preprocessing](https://haystack.deepset.ai/docs/latest/preprocessingmd)\n",
"and [Optimization](https://haystack.deepset.ai/docs/latest/optimizationmd) pages on our website for more details."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"n_docs_input: 1\n",
"n_docs_output: 51\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package punkt to /home/branden/nltk_data...\n",
"[nltk_data] Package punkt is already up-to-date!\n"
]
}
],
"source": [
"# This is a default usage of the PreProcessor.\n",
"# Here, it performs cleaning of consecutive whitespaces\n",
"# and splits a single large document into smaller documents.\n",
"# Each document is up to 1000 words long and document breaks cannot fall in the middle of sentences\n",
"# Note how the single document passed into the document gets split into 5 smaller documents\n",
"\n",
"preprocessor = PreProcessor(\n",
" clean_empty_lines=True,\n",
" clean_whitespace=True,\n",
" clean_header_footer=False,\n",
" split_by=\"word\",\n",
" split_length=100,\n",
" split_respect_sentence_boundary=True\n",
")\n",
"docs_default = preprocessor.process(doc_txt)\n",
"print(f\"n_docs_input: 1\\nn_docs_output: {len(docs_default)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Cleaning\n",
"\n",
"- `clean_empty_lines` will normalize 3 or more consecutive empty lines to be just a two empty lines\n",
"- `clean_whitespace` will remove any whitespace at the beginning or end of each line in the text\n",
"- `clean_header_footer` will remove any long header or footer texts that are repeated on each page"
]
},
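{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"As a small illustration of these flags, the sketch below reuses the PDF document converted earlier and switches on `clean_header_footer`, which is most useful for paginated formats like PDF where the same header or footer repeats on every page."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# A sketch of the cleaning flags applied to the PDF document from above.\n",
"# clean_header_footer removes long text sequences that repeat on every page.\n",
"\n",
"preprocessor_cleaning = PreProcessor(\n",
"    clean_empty_lines=True,\n",
"    clean_whitespace=True,\n",
"    clean_header_footer=True,\n",
"    split_by=\"word\",\n",
"    split_length=100,\n",
"    split_respect_sentence_boundary=True\n",
")\n",
"docs_cleaned = preprocessor_cleaning.process(doc_pdf)\n",
"print(f\"n_docs_output: {len(docs_cleaned)}\")"
]
},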
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Splitting\n",
"By default, the PreProcessor will respect sentence boundaries, meaning that documents will not start or end\n",
"midway through a sentence.\n",
"This will help reduce the possibility of answer phrases being split between two documents.\n",
"This feature can be turned off by setting `split_respect_sentence_boundary=False`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"RESPECTING SENTENCE BOUNDARY\n",
"End of document: \"...cornerstone of a typical elite European education.\"\n",
"\n",
"NOT RESPECTING SENTENCE BOUNDARY\n",
"End of document: \"...on. In England, for instance, Oxford and Cambridge\"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package punkt to /home/branden/nltk_data...\n",
"[nltk_data] Package punkt is already up-to-date!\n"
]
}
],
"source": [
"# Not respecting sentence boundary vs respecting sentence boundary\n",
"\n",
"preprocessor_nrsb = PreProcessor(split_respect_sentence_boundary=False)\n",
"docs_nrsb = preprocessor_nrsb.process(doc_txt)\n",
"\n",
"print(\"RESPECTING SENTENCE BOUNDARY\")\n",
"end_text = docs_default[0][\"content\"][-50:]\n",
"print(\"End of document: \\\"...\" + end_text + \"\\\"\")\n",
"print()\n",
"print(\"NOT RESPECTING SENTENCE BOUNDARY\")\n",
"end_text_nrsb = docs_nrsb[0][\"content\"][-50:]\n",
"print(\"End of document: \\\"...\" + end_text_nrsb + \"\\\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"A commonly used strategy to split long documents, especially in the field of Question Answering,\n",
"is the sliding window approach. If `split_length=10` and `split_overlap=3`, your documents will look like this:\n",
"\n",
"- doc1 = words[0:10]\n",
"- doc2 = words[7:17]\n",
"- doc3 = words[14:24]\n",
"- ...\n",
"\n",
"You can use this strategy by following the code below."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1: \"Classics or classical studies is the study of classical antiquity,...\"\n",
"Document 2: \"of classical antiquity, and in the Western world traditionally refers...\"\n",
"Document 3: \"world traditionally refers to the study of Classical Greek and...\"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[nltk_data] Downloading package punkt to /home/branden/nltk_data...\n",
"[nltk_data] Package punkt is already up-to-date!\n"
]
}
],
"source": [
"# Sliding window approach\n",
"\n",
"preprocessor_sliding_window = PreProcessor(\n",
" split_overlap=3,\n",
" split_length=10,\n",
" split_respect_sentence_boundary=False\n",
")\n",
"docs_sliding_window = preprocessor_sliding_window.process(doc_txt)\n",
"\n",
"doc1 = docs_sliding_window[0][\"content\"][:200]\n",
"doc2 = docs_sliding_window[1][\"content\"][:100]\n",
"doc3 = docs_sliding_window[2][\"content\"][:100]\n",
"\n",
"print(\"Document 1: \\\"\" + doc1 + \"...\\\"\")\n",
"print(\"Document 2: \\\"\" + doc2 + \"...\\\"\")\n",
"print(\"Document 3: \\\"\" + doc3 + \"...\\\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Bringing it all together"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"01/06/2021 14:56:12 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/heavy_metal.docx\n",
"01/06/2021 14:56:12 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/bert.pdf\n",
"01/06/2021 14:56:12 - INFO - haystack.preprocessor.utils - Converting data/preprocessing_tutorial/classics.txt\n",
"[nltk_data] Downloading package punkt to /home/branden/nltk_data...\n",
"[nltk_data] Package punkt is already up-to-date!\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"n_files_input: 3\n",
"n_docs_output: 150\n"
]
}
],
"source": [
"all_docs = convert_files_to_dicts(dir_path=\"data/preprocessing_tutorial\")\n",
"preprocessor = PreProcessor(\n",
" clean_empty_lines=True,\n",
" clean_whitespace=True,\n",
" clean_header_footer=False,\n",
" split_by=\"word\",\n",
" split_length=100,\n",
" split_respect_sentence_boundary=True\n",
")\n",
"docs = preprocessor.process(all_docs)\n",
"\n",
"print(f\"n_files_input: {len(all_docs)}\\nn_docs_output: {len(docs)}\")"
]
},
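{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"The preprocessed documents are now in the dictionary format shown at the start of this tutorial and are ready to be indexed.\n",
"As a minimal sketch (assuming the simple in-memory store is sufficient for experimentation), they could be written into a DocumentStore like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Minimal sketch: index the preprocessed documents so that retrievers and readers can use them\n",
"from haystack.document_stores import InMemoryDocumentStore\n",
"\n",
"document_store = InMemoryDocumentStore()\n",
"document_store.write_documents(docs)"
]
},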
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## About us\n",
"\n",
"This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany\n",
"\n",
"We bring NLP to the industry via open source! \n",
"Our focus: Industry specific language models & large scale QA systems. \n",
" \n",
"Some of our other work: \n",
"- [German BERT](https://deepset.ai/german-bert)\n",
"- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)\n",
"- [FARM](https://github.com/deepset-ai/FARM)\n",
"\n",
"Get in touch:\n",
"[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)\n",
"\n",
"By the way: [we're hiring!](https://www.deepset.ai/jobs)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}