Branden Chan 77d4c2ca1c
Benchmark milvus (#850)
* Add milvus benchmarking support

* Add latest docstring and tutorial changes

* Edit config

* Disable docker interactive mode

* Add milvus index type support

* Adjust FAISS and Milvus node branching

* Remove duplicate in config

* Revert method for speedup

* Add latest docstring and tutorial changes

* Add latest benchmark run

* Add latest docstring and tutorial changes

* Add json files

* Revert "Add latest docstring and tutorial changes"

This reverts commit e2efa5f08aa4fb55bbeeed42aa76817d63fc8923.

* Add latest docstring and tutorial changes

* Revert "Add latest docstring and tutorial changes"

This reverts commit b085a679b9d5f175e91c2c59565e73c5dec1374b.

* Fix typo

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2021-04-13 14:54:15 +02:00

Benchmarks

Run the benchmarks with the following command:

python run.py [--reader] [--retriever_index] [--retriever_query] [--ci] [--update-json]

You can specify which components and processes to benchmark with the following flags; a combined example invocation is shown after the list.

--reader will trigger the speed and accuracy benchmarks for the reader. Here we simply use the SQuAD dev set.

--retriever_index will trigger the indexing benchmarks.

--retriever_query will trigger the querying benchmarks (embeddings will be loaded from file instead of being computed on the fly).

--ci will cause the benchmarks to run on a smaller slice of each dataset and a smaller subset of Retriever / Reader / DocStores.

--update-json will cause the script to update the JSON files in docs/_src/benchmarks so that the benchmarks shown on the website stay current.
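For example, to benchmark retriever indexing and querying on the reduced CI slice, or to run the full suite and refresh the website JSON files, you might combine the flags like this (a sketch assuming the flags can be passed together, as the synopsis above suggests):

python run.py --retriever_index --retriever_query --ci

python run.py --reader --retriever_index --retriever_query --update-json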