[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial11_Pipelines.ipynb)
In this tutorial, you will learn how the `Pipeline` class acts as a connector between all the different
building blocks that are found in Haystack. Whether you are using a Reader, Generator, Summarizer
or Retriever (or two), the `Pipeline` class will help you build a Directed Acyclic Graph (DAG) that
determines how to route the output of one component into the input of another.
## Setting Up the Environment
Let's start by making sure we have a GPU available so that this tutorial runs at a decent speed.
In Google Colab, you can switch to a GPU runtime via the menu: **Runtime -> Change Runtime type -> Hardware accelerator -> GPU**
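Next, install Haystack itself. This is a minimal setup sketch assuming a fresh Colab environment; the package is published on PyPI as `farm-haystack`.
```python
# Minimal setup sketch (assumes a fresh Colab environment).
# Haystack is published on PyPI under the name farm-haystack.
!pip install farm-haystack
```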
## Custom Nodes
Nodes are relatively simple to implement, and we encourage our users to design their own if they don't see one that fits their use case.
The only requirements are:
- Add a method `run(self, **kwargs)` to your class. `**kwargs` will contain the output from the previous node in your graph.
- Do whatever you want within `run()` (e.g. reformatting the query).
- Return a tuple that contains your output data (for the next node) and the name of the outgoing edge (by default `"output_1"` for nodes that have one output).
- Add a class attribute `outgoing_edges = 1` that defines the number of output options from your node. You only need a higher number here if you have a decision node (see below).
Here we have a template for a Node:
```python
class NodeTemplate:
    outgoing_edges = 1

    def run(self, **kwargs):
        # Insert code here to manipulate the variables in kwargs
        return (kwargs, "output_1")
```
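To see how a custom node plugs into a graph, here is a minimal sketch. The `QueryCleaner` class is a hypothetical example (not part of Haystack) that lowercases the incoming query, and `retriever` and `reader` are assumed to be already initialized components; only `Pipeline` and its `add_node()` method belong to the Haystack API.
```python
from haystack.pipeline import Pipeline  # import path as of Haystack 0.x

# Hypothetical custom node (illustrative only): lowercases the incoming query
class QueryCleaner:
    outgoing_edges = 1

    def run(self, **kwargs):
        kwargs["query"] = kwargs["query"].lower()
        return (kwargs, "output_1")

# Wiring it into a pipeline (assumes `retriever` and `reader` are already initialized)
p = Pipeline()
p.add_node(component=QueryCleaner(), name="QueryCleaner", inputs=["Query"])
p.add_node(component=retriever, name="Retriever", inputs=["QueryCleaner"])
p.add_node(component=reader, name="Reader", inputs=["Retriever"])
```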
## Decision Nodes
Decision Nodes help you route your data so that only certain branches of your `Pipeline` are run.
One popular use case for such a decision node is a query classifier that routes keyword queries to Elasticsearch and natural-language questions to DPR + Reader.
With this approach you keep optimal speed and simplicity for keyword queries while going deep with transformers when it is most helpful.
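Below is a sketch of how such a branching pipeline could be wired up. The `QueryClassifier` shown here is a toy decision node that simply checks for a question mark, and `es_retriever`, `dpr_retriever` and `reader` are assumed to be already initialized components; the resulting `p_classifier` pipeline is then queried in the cells that follow.
```python
# Toy decision node: questions go to output_1 (dense retrieval), keyword queries to output_2 (Elasticsearch)
class QueryClassifier:
    outgoing_edges = 2

    def run(self, **kwargs):
        if "?" in kwargs["query"]:
            return (kwargs, "output_1")
        else:
            return (kwargs, "output_2")

# Assumes es_retriever, dpr_retriever and reader have been initialized beforehand
p_classifier = Pipeline()
p_classifier.add_node(component=QueryClassifier(), name="QueryClassifier", inputs=["Query"])
p_classifier.add_node(component=dpr_retriever, name="DPRRetriever", inputs=["QueryClassifier.output_1"])
p_classifier.add_node(component=es_retriever, name="ESRetriever", inputs=["QueryClassifier.output_2"])
p_classifier.add_node(component=reader, name="QAReader", inputs=["ESRetriever", "DPRRetriever"])
```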
```python
# Run only the dense retriever on the full sentence query
res_1 = p_classifier.run(
    query="Who is the father of Arya Stark?",
    top_k_retriever=10
)
print("DPR Results" + "\n" + "="*15)
print_answers(res_1)

# Run only the sparse retriever on a keyword based query
res_2 = p_classifier.run(
    query="Arya Stark father",
    top_k_retriever=10
)
print("ES Results" + "\n" + "="*15)
print_answers(res_2)
```
## Evaluation Nodes
We have also designed a set of nodes that can be used to evaluate the performance of a system.
Have a look at our [tutorial](https://haystack.deepset.ai/docs/latest/tutorial5md) to get hands-on with the code and learn more about Evaluation Nodes!
## YAML Configs
A full `Pipeline` can be defined in a YAML file and simply loaded.
Having your pipeline available in a YAML is particularly useful
when you move between experimentation and production environments.
Just export the YAML from your notebook / IDE and import it into your production environment.
It also helps with version control of pipelines,
allows you to share your pipeline easily with colleagues,
and simplifies the configuration of pipeline parameters in production.
A YAML config consists of two main sections: you define all objects (e.g. a reader) in `components`
and then stick them together into a pipeline in `pipelines`.
You can also set one component to be multiple nodes of a pipeline or to be a node across multiple pipelines.
It will be loaded just once into memory and therefore doesn't use more resources than actually needed.
The contents of a YAML file should look something like this:
```yaml
version: '0.7'

components:    # define all the building-blocks for Pipeline
  - name: MyReader       # custom-name for the component; helpful for visualization & debugging
    type: FARMReader     # Haystack Class name for the component
    params:
      no_ans_boost: -10
      model_name_or_path: deepset/roberta-base-squad2
  - name: MyESRetriever
    type: ElasticsearchRetriever
    params:
      document_store: MyDocumentStore    # params can reference other components defined in the YAML
      custom_query: null
  - name: MyDocumentStore
    type: ElasticsearchDocumentStore
    params:
      index: haystack_test

pipelines:    # multiple Pipelines can be defined using the components from above
  - name: my_query_pipeline    # a simple extractive-qa Pipeline
    nodes:
      - name: MyESRetriever
        inputs: [Query]
      - name: MyReader
        inputs: [MyESRetriever]
```
To load, simply call:
```python
from pathlib import Path
from haystack.pipeline import Pipeline

pipeline = Pipeline.load_from_yaml(Path("sample.yaml"), pipeline_name="my_query_pipeline")
```
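Once loaded, the pipeline is queried just like one built in code. A hypothetical call, assuming an Elasticsearch instance is running locally and the `haystack_test` index contains documents:
```python
# Hypothetical query against the loaded pipeline (assumes a running, populated Elasticsearch index)
res = pipeline.run(query="Who is the father of Arya Stark?", top_k_retriever=10)
print_answers(res)
```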
## Conclusion
The possibilities are endless with the `Pipeline` class, and we hope that this tutorial will inspire you
to build custom pipelines that really work for your use case!
## About us
This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany.
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our other work:
- [German BERT](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)