diff --git a/README.rst b/README.rst
index 31531cd55..d5f231c1e 100644
--- a/README.rst
+++ b/README.rst
@@ -133,9 +133,10 @@ ElasticsearchRetriever
 Scoring text similarity via sparse Bag-of-words representations is a strong and well-established baseline in Information Retrieval.
 The default :code:`ElasticsearchRetriever` uses Elasticsearch's native scoring (BM25), but can be extended easily with custom queries or filtering.
 
-Example::
+Example
 
+.. code-block:: python
+
     retriever = ElasticsearchRetriever(document_store=document_store, custom_query=None)
     retriever.retrieve(query="Why did the revenue increase?", filters={"years": ["2019"], "company": ["Q1", "Q2"]})
     # returns: [Document, Document]
@@ -145,9 +146,10 @@ EmbeddingRetriever
 Using dense embeddings (i.e. vector representations) of texts is a powerful alternative for scoring text similarity.
 This retriever allows you to transform your query into an embedding using a model (e.g. Sentence-BERT) and find similar texts by using cosine similarity.
 
-Example::
+Example
 
+.. code-block:: python
+
     retriever = EmbeddingRetriever(document_store=document_store,
                                    embedding_model="deepset/sentence-bert",
                                    model_format="farm")
@@ -168,9 +170,10 @@ Both readers can load either a local model or any public model from `Hugging Fa
 FARMReader
 ^^^^^^^^^^
 Implements various QA models via the `FARM <https://github.com/deepset-ai/FARM>`_ framework.
 
-Example::
+Example
 
+.. code-block:: python
+
     reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2",
                         use_gpu=False, no_ans_boost=-10, context_window_size=500,
                         top_k_per_candidate=3, top_k_per_sample=1,
@@ -194,9 +197,10 @@ TransformersReader
 ^^^^^^^^^^^^^^^^^^
 Implements various QA models via the :code:`pipeline` class of the `Transformers <https://github.com/huggingface/transformers>`_ framework.
 
-Example::
+Example
 
+.. code-block:: python
+
     reader = TransformersReader(model="distilbert-base-uncased-distilled-squad",
                                 tokenizer="distilbert-base-uncased",
                                 context_window_size=500,
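
The first hunk notes that :code:`ElasticsearchRetriever` "can be extended easily with custom queries or filtering." As a rough sketch of what that extension point looks like (it is not part of the diff above): :code:`custom_query` accepts an Elasticsearch query-DSL template as a string. The :code:`${query}` placeholder syntax, the import path, and the :code:`"text"` field name are assumptions to verify against the Haystack release this diff targets.

.. code-block:: python

    # Sketch only: custom_query as an Elasticsearch query-DSL template string.
    # The ${query} placeholder, the import path, and the "text" field name are
    # assumptions; check them against the Haystack version in use.
    from haystack.retriever.sparse import ElasticsearchRetriever

    custom_query = """{
        "query": {
            "multi_match": {
                "query": ${query},
                "type": "most_fields",
                "fields": ["text"]
            }
        }
    }"""

    # document_store is assumed to exist already, as in the snippets above.
    retriever = ElasticsearchRetriever(document_store=document_store, custom_query=custom_query)
    retriever.retrieve(query="Why did the revenue increase?")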
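
The snippets in this diff stop at constructing a retriever and a reader. For context, here is a minimal sketch of how the two were wired together in the Haystack 0.x generation this README belongs to; :code:`Finder` and the :code:`get_answers` signature reflect that API era and should be confirmed against the matching release.

.. code-block:: python

    # Minimal end-to-end sketch for the Haystack 0.x API generation; Finder and
    # get_answers() are assumptions to confirm against the matching release.
    from haystack import Finder

    # reader and retriever are instances built as in the snippets above.
    finder = Finder(reader=reader, retriever=retriever)

    # The retriever first narrows the corpus to top_k_retriever candidate
    # documents, then the reader extracts and ranks answer spans from them.
    prediction = finder.get_answers(question="Why did the revenue increase?",
                                    top_k_retriever=10, top_k_reader=5)
    print(prediction["answers"])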