Benchmarks

Run the benchmarks with the following command:

python run.py [--reader] [--retriever_index] [--retriever_query] [--ci] [--update-json]

You can specify which components and processes to benchmark with the following flags; example invocations follow the list.

--reader will trigger the speed and accuracy benchmarks for the reader, using the SQuAD dev set.

--retriever_index will trigger the indexing benchmarks.

--retriever_query will trigger the querying benchmarks (embeddings are loaded from file instead of being computed on the fly).

--ci will cause the benchmarks to run on a smaller slice of each dataset and on a smaller subset of Retrievers, Readers, and DocStores.

--update-json will cause the script to update the JSON files in docs/_src/benchmarks, which populate the benchmark results shown on the website.
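
The flags can be combined as shown in the usage line above. For example, a quick run that benchmarks only retriever querying on the reduced CI subset:

python run.py --retriever_query --ci

Or a full run that also refreshes the JSON files used by the website:

python run.py --reader --retriever_index --retriever_query --update-json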