diff --git a/docs/_src/tutorials/tutorials/11.md b/docs/_src/tutorials/tutorials/11.md
index b6a30ec02..4d0e434e3 100644
--- a/docs/_src/tutorials/tutorials/11.md
+++ b/docs/_src/tutorials/tutorials/11.md
@@ -224,7 +224,7 @@ with the keyword based `ElasticsearchRetriever`.
 See our [documentation](https://haystack.deepset.ai/docs/latest/retrievermd) to understand why
 we might want to combine a dense and sparse retriever.
 
-![image]()
+![image](https://github.com/deepset-ai/haystack/blob/master/docs/_src/img/tutorial11_custompipelines_pipeline_ensemble.png?raw=true)
 
 Here we use a `JoinDocuments` node so that the predictions from each retriever can be merged together.
 
@@ -285,7 +285,7 @@ Decision Nodes help you route your data so that only certain branches of your `P
 One popular use case for such query classifiers is routing keyword queries to Elasticsearch and questions to EmbeddingRetriever + Reader.
 With this approach you keep optimal speed and simplicity for keywords while going deep with transformers when it's most helpful.
 
-![image]()
+![image](https://github.com/deepset-ai/haystack/blob/master/docs/_src/img/tutorial11_decision_nodes_pipeline_classifier.png?raw=true)
 
 Though this looks very similar to the ensembled pipeline shown above,
 the key difference is that only one of the retrievers is run for each request.
diff --git a/tutorials/Tutorial11_Pipelines.ipynb b/tutorials/Tutorial11_Pipelines.ipynb
index 6b94fd263..2ecd7087b 100644
--- a/tutorials/Tutorial11_Pipelines.ipynb
+++ b/tutorials/Tutorial11_Pipelines.ipynb
@@ -438,7 +438,7 @@
     "See our [documentation](https://haystack.deepset.ai/docs/latest/retrievermd) to understand why\n",
     "we might want to combine a dense and sparse retriever.\n",
     "\n",
-    "![image]()\n",
+    "![image](https://github.com/deepset-ai/haystack/blob/master/docs/_src/img/tutorial11_custompipelines_pipeline_ensemble.png?raw=true)\n",
     "\n",
     "Here we use a `JoinDocuments` node so that the predictions from each retriever can be merged together."
    ]
@@ -537,7 +537,7 @@
     "One popular use case for such query classifiers is routing keyword queries to Elasticsearch and questions to EmbeddingRetriever + Reader.\n",
     "With this approach you keep optimal speed and simplicity for keywords while going deep with transformers when it's most helpful.\n",
     "\n",
-    "![image]()\n",
+    "![image](https://github.com/deepset-ai/haystack/blob/master/docs/_src/img/tutorial11_decision_nodes_pipeline_classifier.png?raw=true)\n",
     "\n",
     "Though this looks very similar to the ensembled pipeline shown above,\n",
     "the key difference is that only one of the retrievers is run for each request.\n",
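
For reference, the two pipelines described in the tutorial text above look roughly like this in code. This is a minimal sketch, assuming a Haystack 1.x install (`farm-haystack`) with Elasticsearch running on `localhost:9200`; the embedding model and `join_mode` are example choices rather than values taken from the tutorial, and `BM25Retriever` is the newer name of the `ElasticsearchRetriever` mentioned in the docs.

```python
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, EmbeddingRetriever, JoinDocuments, TransformersQueryClassifier
from haystack.pipelines import Pipeline

document_store = ElasticsearchDocumentStore(host="localhost", index="document")

# Sparse, keyword-based retriever (called ElasticsearchRetriever in older releases).
sparse_retriever = BM25Retriever(document_store=document_store)

# Dense retriever; the embedding model name is only an example.
dense_retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1",
)

# Ensemble pipeline: run both retrievers on every query and merge their candidate documents.
ensemble = Pipeline()
ensemble.add_node(component=sparse_retriever, name="ESRetriever", inputs=["Query"])
ensemble.add_node(component=dense_retriever, name="EmbeddingRetriever", inputs=["Query"])
ensemble.add_node(
    component=JoinDocuments(join_mode="concatenate"),
    name="JoinResults",
    inputs=["ESRetriever", "EmbeddingRetriever"],
)

# Decision-node pipeline: a query classifier routes each query to exactly one retriever.
# With TransformersQueryClassifier's default model, output_1 carries natural-language
# questions/statements and output_2 carries keyword queries.
routed = Pipeline()
routed.add_node(component=TransformersQueryClassifier(), name="QueryClassifier", inputs=["Query"])
routed.add_node(component=dense_retriever, name="EmbeddingRetriever", inputs=["QueryClassifier.output_1"])
routed.add_node(component=sparse_retriever, name="ESRetriever", inputs=["QueryClassifier.output_2"])

print(ensemble.run(query="Who is the father of Arya Stark?")["documents"][:3])
print(routed.run(query="arya stark father")["documents"][:3])
```

In the ensemble pipeline both retrievers run for every request and `JoinDocuments` merges their predictions; in the routed pipeline only the branch chosen by the classifier runs, which is the key difference the tutorial text points out.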