{
    "chart_type": "BarChart",
    "title": "Retriever Performance",
    "subtitle": "Time and Accuracy Benchmarks",
"description": "Comparison of the speed and accuracy of different DocumentStore / Retriever combinations on 100k documents. <b>Indexing speed</b> (in docs/sec) refers to how quickly Documents can be inserted into a DocumentStore. <b>Querying speed</b> (in queries/sec) refers to the speed at which the system returns relevant Documents when presented with a query.\n\nThe dataset used is Wikipedia, split into 100 word passages (from <a href='https://github.com/facebookresearch/DPR/blob/master/dpr/data/download_data.py'>here</a>)). \n\nFor querying, we use the Natural Questions development set in combination with the wiki passages. The Document Store is populated with the 100 word passages in which the answer spans occur (i.e. gold passages) as well as a random selection of 100 word passages in which the answer spans do not occur (i.e. negative passages). We take a total of 100k gold and negative passages. Query and document embedding are generated by the <i>\"facebook/dpr-question_encoder-single-nq-base\"</i> and <i>\"facebook/dpr-ctx_encoder-single-nq-base\"</i> models. The retriever returns 10 candidates and both the recall and mAP scores are calculated on these 10.\n\nFor FAISS HNSW, we use <i>n_links=128</i>, <i>efSearch=20</i> and <i>efConstruction=80</i>. We use a cosine similarity function with BM25 retrievers, and dot product with DPR. Both index and query benchmarks are performed on an AWS P3.2xlarge instance which is accelerated by an Nvidia V100 GPU.",
    "bars": "horizontal",
    "columns": [
        "Model",
        "mAP",
        "Index Speed (docs/sec)",
        "Query Speed (queries/sec)"
    ],
    "series": {
        "s0": "map",
        "s1": "time",
        "s2": "time"
    },
    "axes": {
        "label": "map",
        "time_side": "top",
        "time_label": "seconds"
    },
    "data": [
        {
            "model": "BM25 / ElasticSearch",
            "n_docs": 100000,
            "index_speed": 485.5602670200369,
            "query_speed": 165.51512861040828,
            "map": 56.259591531012504
        },
        {
            "model": "DPR / ElasticSearch",
            "n_docs": 100000,
            "index_speed": 71.36964873196698,
            "query_speed": 5.355677072083696,
            "map": 86.54606328368973
        },
        {
            "model": "DPR / FAISS (flat)",
            "n_docs": 100000,
            "index_speed": 100.01184910084558,
            "query_speed": 6.624479268751268,
            "map": 86.54606328368973
        },
        {
            "model": "DPR / FAISS (HNSW)",
            "n_docs": 100000,
            "index_speed": 89.90389306648805,
            "query_speed": 40.68196225525062,
            "map": 84.33419639513305
        },
        {
            "model": "DPR / Milvus (flat)",
            "n_docs": 100000,
            "index_speed": 116.00982709720004,
            "query_speed": 28.30393009791128,
            "map": 86.54606328368973
        },
        {
            "model": "DPR / Milvus (HNSW)",
            "n_docs": 100000,
            "index_speed": 115.61076852516383,
            "query_speed": 28.076443272229284,
            "map": 86.54606328368973
        }
    ]
}