Module base
BaseGenerator
class BaseGenerator(BaseComponent)
Abstract class for Generators
predict
@abstractmethod
def predict(query: str, documents: List[Document], top_k: Optional[int]) -> Dict
Abstract method to generate answers.
Arguments:
- query: Query
- documents: Related documents (e.g. coming from a retriever) that the answer shall be conditioned on.
- top_k: Number of returned answers
Returns:
Generated answers plus additional info in a dict
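For orientation, here is a minimal sketch of a custom generator implementing this interface. The import paths assume Haystack 1.x, and the echo-style answer logic is purely illustrative, not part of the library:
| from typing import Dict, List, Optional
|
| from haystack.nodes import BaseGenerator
| from haystack.schema import Document
|
| class EchoGenerator(BaseGenerator):
|     """Toy generator that 'answers' with the first words of each top document."""
|
|     def predict(self, query: str, documents: List[Document], top_k: Optional[int] = None) -> Dict:
|         top_k = top_k or 1
|         answers = [
|             {"query": query,
|              "answer": " ".join(doc.content.split()[:20]),
|              "meta": {"doc_ids": [doc.id]}}
|             for doc in documents[:top_k]
|         ]
|         return {"query": query, "answers": answers}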
Module transformers
RAGenerator
class RAGenerator(BaseGenerator)
Implementation of Facebook's Retrieval-Augmented Generator (https://arxiv.org/abs/2005.11401) based on HuggingFace's transformers (https://huggingface.co/transformers/model_doc/rag.html).
Instead of "finding" the answer within a document, these models generate the answer. In that sense, RAG follows a similar approach as GPT-3 but it comes with two huge advantages for real-world applications: a) it has a manageable model size b) the answer generation is conditioned on retrieved documents, i.e. the model can easily adjust to domain documents even after training has finished (in contrast: GPT-3 relies on the web data seen during training)
Example
| query = "who got the first nobel prize in physics?"
|
| # Retrieve related documents from retriever
| retrieved_docs = retriever.retrieve(query=query)
|
| # Now generate answer from query and retrieved documents
| generator.predict(
| query=query,
| documents=retrieved_docs,
| top_k=1
| )
|
| # Answer
|
| {'query': 'who got the first nobel prize in physics',
|  'answers':
|      [{'query': 'who got the first nobel prize in physics',
|        'answer': ' albert einstein',
|        'meta': { 'doc_ids': [...],
|                  'doc_scores': [80.42758, ...],
|                  'doc_probabilities': [40.71379089355469, ...],
|                  'content': ['Albert Einstein was a ...'],
|                  'titles': ['"Albert Einstein"', ...]
|        }}]}
__init__
def __init__(model_name_or_path: str = "facebook/rag-token-nq", model_version: Optional[str] = None, retriever: Optional[DensePassageRetriever] = None, generator_type: str = "token", top_k: int = 2, max_length: int = 200, min_length: int = 2, num_beams: int = 2, embed_title: bool = True, prefix: Optional[str] = None, use_gpu: bool = True)
Load a RAG model from Transformers along with a passage embedding model.
See https://huggingface.co/transformers/model_doc/rag.html for more details
Arguments:
- model_name_or_path: Directory of a saved model or the name of a public model, e.g. 'facebook/rag-token-nq' or 'facebook/rag-sequence-nq'. See https://huggingface.co/models for a full list of available models.
- model_version: The version of the model to use from the HuggingFace model hub. Can be a tag name, branch name, or commit hash.
- retriever: DensePassageRetriever used to embed passages for the docs passed to predict(). This is optional and only needed if the docs you pass don't already contain embeddings in Document.embedding.
- generator_type: Which RAG generator implementation to use ("token" or "sequence")
- top_k: Number of independently generated texts to return
- max_length: Maximum length of generated text
- min_length: Minimum length of generated text
- num_beams: Number of beams for beam search. 1 means no beam search.
- embed_title: Whether to embed the title of the passage while generating embeddings
- prefix: The prefix used by the generator's tokenizer.
- use_gpu: Whether to use GPU. Falls back on CPU if no GPU is available.
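For illustration, a hedged construction sketch (import path per Haystack 1.x; the parameter values are arbitrary choices, not recommendations):
| from haystack.nodes import RAGenerator
|
| generator = RAGenerator(
|     model_name_or_path="facebook/rag-sequence-nq",  # or "facebook/rag-token-nq"
|     generator_type="sequence",                      # must match the chosen model family
|     top_k=2,
|     max_length=200,
|     min_length=2,
|     num_beams=2,
|     embed_title=True,
|     use_gpu=True,
| )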
predict
def predict(query: str, documents: List[Document], top_k: Optional[int] = None) -> Dict
Generate the answer to the input query. The generation will be conditioned on the supplied documents.
These documents can, for example, be retrieved via the Retriever.
Arguments:
- query: Query
- documents: Related documents (e.g. coming from a retriever) that the answer shall be conditioned on.
- top_k: Number of returned answers
Returns:
Generated answers plus additional info in a dict like this:
| {'query': 'who got the first nobel prize in physics',
|  'answers':
|      [{'query': 'who got the first nobel prize in physics',
|        'answer': ' albert einstein',
|        'meta': { 'doc_ids': [...],
|                  'doc_scores': [80.42758, ...],
|                  'doc_probabilities': [40.71379089355469, ...],
|                  'content': ['Albert Einstein was a ...'],
|                  'titles': ['"Albert Einstein"', ...]
|        }}]}
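As a sketch of how predict() is typically wired up end to end, assuming a document store that has been populated and had its embeddings updated (imports per Haystack 1.x):
| from haystack.document_stores import InMemoryDocumentStore
| from haystack.nodes import DensePassageRetriever, RAGenerator
| from haystack.pipelines import GenerativeQAPipeline
|
| document_store = InMemoryDocumentStore(embedding_dim=768)
| retriever = DensePassageRetriever(document_store=document_store)
| # ... write documents, then document_store.update_embeddings(retriever=retriever) ...
|
| generator = RAGenerator(model_name_or_path="facebook/rag-token-nq", top_k=1)
| pipeline = GenerativeQAPipeline(generator=generator, retriever=retriever)
|
| result = pipeline.run(query="who got the first nobel prize in physics",
|                       params={"Retriever": {"top_k": 5}})
| print(result["answers"])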
Seq2SeqGenerator
class Seq2SeqGenerator(BaseGenerator)
A generic sequence-to-sequence generator based on HuggingFace's transformers.
Text generation is supported by so-called auto-regressive language models like GPT2, XLNet, XLM, Bart, T5, and others. In fact, any HuggingFace language model that extends GenerationMixin can be used by Seq2SeqGenerator.
Moreover, as language models prepare model input in their specific encoding, each model specified with the model_name_or_path parameter in this Seq2SeqGenerator should have an accompanying model input converter that takes care of prefixes, separator tokens, etc. By default, we provide model input converters for a few well-known seq2seq language models (e.g. ELI5). It is the responsibility of the Seq2SeqGenerator user to ensure an appropriate model input converter is either already registered or specified on a per-model basis in the Seq2SeqGenerator constructor.
For more details on custom model input converters, refer to _BartEli5Converter.
See https://huggingface.co/transformers/main_classes/model.html?transformers.generation_utils.GenerationMixin#transformers.generation_utils.GenerationMixin as well as https://huggingface.co/blog/how-to-generate
For a list of all text-generation models see https://huggingface.co/models?pipeline_tag=text-generation
Example
| query = "Why is Dothraki language important?"
|
| # Retrieve related documents from retriever
| retrieved_docs = retriever.retrieve(query=query)
|
| # Now generate answer from query and retrieved documents
| generator.predict(
| query=query,
| documents=retrieved_docs,
| top_k=1
| )
|
| # Answer
|
| {'query': 'Why is Dothraki language important?',
|  'answers':
|      [{'query': 'Why is Dothraki language important?',
|        'answer': ' ...',
|        'meta': { 'doc_ids': [...],
|                  'doc_scores': [...],
|                  'doc_probabilities': [...],
|                  'content': [...],
|                  'titles': [...]
|        }}]}
__init__
def __init__(model_name_or_path: str, input_converter: Optional[Callable] = None, top_k: int = 1, max_length: int = 200, min_length: int = 2, num_beams: int = 8, use_gpu: bool = True)
Arguments:
- model_name_or_path: A HuggingFace model name for an auto-regressive language model like GPT2, XLNet, XLM, Bart, T5, etc.
- input_converter: An optional Callable to prepare model input for the underlying language model specified in model_name_or_path. The required __call__ method signature for the Callable is: __call__(tokenizer: PreTrainedTokenizer, query: str, documents: List[Document], top_k: Optional[int] = None) -> BatchEncoding. See the sketch below for an example.
- top_k: Number of independently generated texts to return
- max_length: Maximum length of generated text
- min_length: Minimum length of generated text
- num_beams: Number of beams for beam search. 1 means no beam search.
- use_gpu: Whether to use GPU or the CPU. Falls back on CPU if no GPU is available.
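To make the required signature concrete, here is a hedged sketch of a custom input converter. The "question: ... context: ..." prompt format and the QuestionContextConverter name are assumptions for illustration, not a requirement of any particular model:
| from typing import List, Optional
|
| from transformers import BatchEncoding, PreTrainedTokenizer
| from haystack.schema import Document
|
| class QuestionContextConverter:
|     def __call__(self, tokenizer: PreTrainedTokenizer, query: str,
|                  documents: List[Document], top_k: Optional[int] = None) -> BatchEncoding:
|         # Join the document texts into one context and prepend the query
|         context = " ".join(doc.content for doc in documents)
|         prompt = f"question: {query} context: {context}"
|         return tokenizer([prompt], truncation=True, max_length=1024, return_tensors="pt")
The converter is then passed at construction time, e.g. Seq2SeqGenerator(model_name_or_path="...", input_converter=QuestionContextConverter()).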
predict
def predict(query: str, documents: List[Document], top_k: Optional[int] = None) -> Dict
Generate the answer to the input query. The generation will be conditioned on the supplied documents.
These documents can be retrieved via the Retriever or supplied directly to the predict method.
Arguments:
- query: Query
- documents: Related documents (e.g. coming from a retriever) that the answer shall be conditioned on.
- top_k: Number of returned answers
Returns:
Generated answers plus additional info in a dict
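A short usage sketch, assuming the ELI5 model for which a default input converter ships with the library; the document content is a stand-in:
| from haystack.nodes import Seq2SeqGenerator
| from haystack.schema import Document
|
| generator = Seq2SeqGenerator(model_name_or_path="yjernite/bart_eli5")
| docs = [Document(content="Dothraki is a constructed language created for a TV series ...")]
|
| result = generator.predict(query="Why is Dothraki language important?", documents=docs, top_k=1)
| print(result["answers"])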