Mirror of https://github.com/Unstructured-IO/unstructured.git (synced 2026-01-08 05:10:11 +00:00)
This PR aims to improve the organization and readability of our example documents used in unit tests, specifically focusing on PDF and image files.

### Summary

- Created two new subdirectories in the `example-docs` folder:
  - `pdf/`: for all PDF example files
  - `img/`: for all image example files
- Moved relevant PDF files from `example-docs/` to `example-docs/pdf/`
- Moved relevant image files from `example-docs/` to `example-docs/img/`
- Updated file paths in affected unit & ingest tests to reflect the new directory structure

### Testing

All unit & ingest tests should be updated and verified to work with the new file structure.

## Notes

Other file types (e.g., office documents, HTML files) remain in the root of `example-docs/` for now.

## Next Steps

Consider a similar reorganization for other file types if this structure proves beneficial.

---------

Co-authored-by: ryannikolaidis <1208590+ryannikolaidis@users.noreply.github.com>
Co-authored-by: christinestraub <christinestraub@users.noreply.github.com>
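To make the path update concrete, here is a minimal sketch of what a test referencing a moved PDF fixture might look like after this change. The test body and assertion are hypothetical placeholders rather than code from this PR; only the `example-docs/pdf/multi-column-2p.pdf` location (taken from the fixture below) and the public `partition_pdf` entry point are taken as given.

```python
# Minimal sketch of a hypothetical test, assuming the standard partition_pdf API
# and the new example-docs/pdf/ location introduced by this PR.
from unstructured.partition.pdf import partition_pdf

# Before this PR the fixture lived directly under example-docs/;
# after the reorganization, PDF fixtures live under example-docs/pdf/.
filename = "example-docs/pdf/multi-column-2p.pdf"

elements = partition_pdf(filename=filename)

# Placeholder check: partitioning the moved fixture still yields elements.
assert len(elements) > 0
```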
156 lines · 13 KiB · JSON
[
{
"type": "CompositeElement",
"element_id": "06c85506db46c8d0e4f014e75146bcfc",
"text": "0 2 0 2\n\np e S 0 3\n\n] L C . s c [\n\n3 v 6 0 9 4 0 . 4 0 0 2 : v i X r a\n\nDense Passage Retrieval for Open-Domain Question Answering\n\nVladimir Karpukhin\u2217, Barlas O\u02d8guz\u2217, Sewon Min\u2020, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen\u2021, Wen-tau Yih\n\nFacebook AI\n\n\u2020University of Washington\n\n\u2021Princeton University\n\n{vladk, barlaso, plewis, ledell, edunov, scottyih}@fb.com sewon@cs.washington.edu danqic@cs.princeton.edu\n\nAbstract\n\nOpen-domain question answering relies on ef- \ufb01cient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented us- ing dense representations alone, where em- beddings are learned from a small number of questions and passages by a simple dual- encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene- BM25 system greatly by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.1\n\n1",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 1
}
},
{
"type": "CompositeElement",
"element_id": "3ef998ac1d905d8ff1016f96a243295c",
"text": "Introduction\n\nOpen-domain question answering (QA) (Voorhees, 1999) is a task that answers factoid questions us- ing a large collection of documents. While early QA systems are often complicated and consist of multiple components (Ferrucci (2012); Moldovan et al. (2003), inter alia), the advances of reading comprehension models suggest a much simpli\ufb01ed two-stage framework: (1) a context retriever \ufb01rst selects a small subset of passages where some of them contain the answer to the question, and then (2) a machine reader can thoroughly exam- ine the retrieved contexts and identify the correct answer (Chen et al., 2017). Although reducing open-domain QA to machine reading is a very rea- sonable strategy, a huge performance degradation is often observed in practice2, indicating the needs of improving retrieval.\n\n\u2217Equal contribution 1The code and trained models have been released at\n\nhttps://github.com/facebookresearch/DPR.",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 1
}
},
{
"type": "CompositeElement",
"element_id": "71b12f58c99f6097b17f4d5b6147201b",
"text": "2For instance, the exact match score on SQuAD v1.1 drops\n\nRetrieval in open-domain QA is usually imple- mented using TF-IDF or BM25 (Robertson and Zaragoza, 2009), which matches keywords ef\ufb01- ciently with an inverted index and can be seen as representing the question and context in high- dimensional, sparse vectors (with weighting). Con- versely, the dense, latent semantic encoding is com- plementary to sparse representations by design. For example, synonyms or paraphrases that consist of completely different tokens may still be mapped to vectors close to each other. Consider the question \u201cWho is the bad guy in lord of the rings?\u201d, which can be answered from the context \u201cSala Baker is best known for portraying the villain Sauron in the Lord of the Rings trilogy.\u201d A term-based system would have dif\ufb01culty retrieving such a context, while a dense retrieval system would be able to better match \u201cbad guy\u201d with \u201cvillain\u201d and fetch the cor- rect context. Dense encodings are also learnable by adjusting the embedding functions, which pro- vides additional \ufb02exibility to have a task-speci\ufb01c representation. With special in-memory data struc- tures and indexing schemes, retrieval can be done ef\ufb01ciently using maximum inner product search (MIPS) algorithms (e.g., Shrivastava and Li (2014); Guo et al. (2016)).\n\nHowever, it is generally believed that learn- ing a good dense vector representation needs a large number of labeled pairs of question and con- texts. Dense retrieval methods have thus never be shown to outperform TF-IDF/BM25 for open- domain QA before ORQA (Lee et al., 2019), which proposes a sophisticated inverse cloze task (ICT) objective, predicting the blocks that contain the masked sentence, for additional pretraining. The question encoder and the reader model are then \ufb01ne- tuned using pairs of questions and answers jointly. Although ORQA successfully demonstrates that dense retrieval can outperform BM25, setting new state-of-the-art results on multiple open-domain",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 1
}
},
{
"type": "CompositeElement",
"element_id": "ef458b0b4659bfd57b11fbfb571c38d1",
"text": "from above 80% to less than 40% (Yang et al., 2019a).\n\nQA datasets, it also suffers from two weaknesses. First, ICT pretraining is computationally intensive and it is not completely clear that regular sentences are good surrogates of questions in the objective function. Second, because the context encoder is not \ufb01ne-tuned using pairs of questions and answers, the corresponding representations could be subop- timal.",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 1
}
},
{
"type": "CompositeElement",
"element_id": "4204154eefaa843f79edc96dcc208054",
"text": "In this paper, we address the question: can we train a better dense embedding model using only pairs of questions and passages (or answers), with- out additional pretraining? By leveraging the now standard BERT pretrained model (Devlin et al., 2019) and a dual-encoder architecture (Bromley et al., 1994), we focus on developing the right training scheme using a relatively small number of question and passage pairs. Through a series of careful ablation studies, our \ufb01nal solution is surprisingly simple: the embedding is optimized for maximizing inner products of the question and relevant passage vectors, with an objective compar- ing all pairs of questions and passages in a batch. Our Dense Passage Retriever (DPR) is exception- ally strong. It not only outperforms BM25 by a large margin (65.2% vs. 42.9% in Top-5 accuracy), but also results in a substantial improvement on the end-to-end QA accuracy compared to ORQA (41.5% vs. 33.3%) in the open Natural Questions setting (Lee et al., 2019; Kwiatkowski et al., 2019). Our contributions are twofold. First, we demon- strate that with the proper training setup, sim- ply \ufb01ne-tuning the question and passage encoders on existing question-passage pairs is suf\ufb01cient to greatly outperform BM25. Our empirical results also suggest that additional pretraining may not be needed. Second, we verify that, in the context of open-domain question answering, a higher retrieval precision indeed translates to a higher end-to-end QA accuracy. By applying a modern reader model to the top retrieved passages, we achieve compara- ble or better results on multiple QA datasets in the open-retrieval setting, compared to several, much complicated systems.",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 2
}
},
{
"type": "CompositeElement",
"element_id": "43198ac980a699b3b17c5f229aee8656",
"text": "2 Background\n\nThe problem of open-domain QA studied in this paper can be described as follows. Given a factoid question, such as \u201cWho \ufb01rst voiced Meg on Family Guy?\u201d or \u201cWhere was the 8th Dalai Lama born?\u201d, a system is required to answer it using a large corpus of diversi\ufb01ed topics. More speci\ufb01cally, we assume\n\nthe extractive QA setting, in which the answer is restricted to a span appearing in one or more pas- sages in the corpus. Assume that our collection contains D documents, d1, d2, \u00b7 \u00b7 \u00b7 , dD. We \ufb01rst split each of the documents into text passages of equal lengths as the basic retrieval units3 and get M total passages in our corpus C = {p1, p2, . . . , pM }, where each passage pi can be viewed as a sequence 2 , \u00b7 \u00b7 \u00b7 , w(i) 1 , w(i) of tokens w(i) |pi|. Given a question q, the task is to \ufb01nd a span w(i) s+1, \u00b7 \u00b7 \u00b7 , w(i) s , w(i) from one of the passages pi that can answer the question. Notice that to cover a wide variety of domains, the corpus size can easily range from millions of docu- ments (e.g., Wikipedia) to billions (e.g., the Web). As a result, any open-domain QA system needs to include an ef\ufb01cient retriever component that can se- lect a small set of relevant texts, before applying the reader to extract the answer (Chen et al., 2017).4 Formally speaking, a retriever R : (q, C) \u2192 CF is a function that takes as input a question q and a corpus C and returns a much smaller \ufb01lter set of texts CF \u2282 C, where |CF | = k (cid:28) |C|. For a \ufb01xed k, a retriever can be evaluated in isolation on top-k retrieval accuracy, which is the fraction of ques- tions for which CF contains a span that answers the question.\n\ne",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 2
}
},
{
"type": "CompositeElement",
"element_id": "82cfad702e5779169139f705fd0af5ee",
"text": "3 Dense Passage Retriever (DPR)\n\nWe focus our research in this work on improv- ing the retrieval component in open-domain QA. Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve ef\ufb01ciently the top k passages relevant to the input question for the reader at run-time. Note that M can be very large (e.g., 21 million passages in our experiments, de- scribed in Section 4.1) and k is usually small, such as 20\u2013100.\n\n3.1 Overview\n\nOur dense passage retriever (DPR) uses a dense encoder EP (\u00b7) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval.\n\n3The ideal size and boundary of a text passage are func- tions of both the retriever and reader. We also experimented with natural paragraphs in our preliminary trials and found that using \ufb01xed-length passages performs better in both retrieval and \ufb01nal QA accuracy, as observed by Wang et al. (2019).\n\n4Exceptions include (Seo et al., 2019) and (Roberts et al., 2020), which retrieves and generates the answers, respectively.",
"metadata": {
"data_source": {
"record_locator": {
"path": "/home/runner/work/unstructured/unstructured/example-docs/pdf/multi-column-2p.pdf"
},
"permissions_data": [
{
"mode": 33188
}
]
},
"filetype": "application/pdf",
"languages": [
"eng"
],
"page_number": 2
}
}
]