haystack/test/test_modeling_processor.py
Sara Zan 11cf94a965
Pipeline's YAML: syntax validation (#2226)
* Add BasePipeline.validate_config, BasePipeline.validate_yaml, and some new custom exception classes

* Make error composition work properly

* Clarify typing

* Help mypy a bit more

* Update Documentation & Code Style

* Enable autogenerated docs for Milvus1 and 2 separately

* Revert "Enable autogenerated docs for Milvus1 and 2 separately"

This reverts commit 282be4a78a6e95862a9b4c924fc3dea5ca71e28d.

* Update Documentation & Code Style

* Re-enable 'additionalProperties: False'

* Add pipeline.type to JSON Schema, was somehow forgotten

* Disable additionalProperties on the pipeline properties too

* Fix json-schemas for 1.1.0 and 1.2.0 (should not do it again in the future)

* Call super in PipelineValidationError

* Improve _read_pipeline_config_from_yaml's error handling

* Fix generate_json_schema.py to include document stores

* Fix json schemas (retro-fix 1.1.0 again)

* Improve custom errors printing, add link to docs

* Add function in BaseComponent to list its subclasses in a module

* Make some document stores base classes abstract

* Add marker 'integration' in pytest flags

* Slightly improve validation of pipelines at load

* Adding tests for YAML loading and validation

* Make custom_query Optional for validation issues

* Fix bug in _read_pipeline_config_from_yaml

* Improve error handling in BasePipeline and Pipeline and add DAG check

* Move json schema generation into haystack/nodes/_json_schema.py (useful for tests)

* Simplify errors slightly

* Add some YAML validation tests

* Remove load_from_config from BasePipeline, it was never used anyway

* Improve tests

* Include json-schemas in package

* Fix conftest imports

* Make BasePipeline abstract

* Improve mocking by making the test independent from the YAML version

* Add exportable_to_yaml decorator to forget about set_config on mock nodes

* Fix mypy errors

* Comment out one monkeypatch

* Fix typing again

* Improve error message for validation

* Add required properties to pipelines

* Fix YAML version for REST API YAMLs to 1.2.0

* Fix load_from_yaml call in load_from_deepset_cloud

* fix HaystackError.__getattr__

* Add super().__init__() in most nodes and docstore, comment set_config

* Remove type from REST API pipelines

* Remove useless init from doc2answers

* Call super in Seq2SeqGenerator

* Typo in deepsetcloud.py

* Fix rest api indexing error mismatch and mock version of JSON schema in all tests

* Working on pipeline tests

* Improve errors printing slightly

* Add back test_pipeline.yaml

* _json_schema.py supports different versions with identical schemas

* Add type to 0.7 schema for backwards compatibility

* Fix small bug in _json_schema.py

* Try alternative to generate json schemas on the CI

* Update Documentation & Code Style

* Make linux CI match autoformat CI

* Fix super-init-not-called

* Remove accidentally committed file

* Update Documentation & Code Style

* fix test_summarizer_translation.py's import

* Mock YAML in a few suites, split and simplify test_pipeline_debug_and_validation.py::test_invalid_run_args

* Fix json schema for ray tests too

* Update Documentation & Code Style

* Reintroduce validation

* Use unstable version in tests and REST API

* Make unstable support the latest versions

* Update Documentation & Code Style

* Remove needless fixture

* Make type in pipeline optional in the strings validation

* Fix schemas

* Fix string validation for pipeline type

* Improve validate_config_strings

* Remove type from test pipelines

* Update Documentation & Code Style

* Fix test_pipeline

* Removing more type from pipelines

* Temporary CI patch

* Fix issue with exportable_to_yaml never invoking the wrapped init

* rm stray file

* pipeline tests are green again

* Linux CI now needs .[all] to generate the schema

* Bugfixes, pipeline tests seems to be green

* Typo in version after merge

* Implement missing methods in Weaviate

* Try to prevent FAISS tests from running in the Milvus1 test suite

* Fix some stray test paths and faiss index dumping

* Fix pytest markers list

* Temporarily disable cache to be able to see tests failures

* Fix pyproject.toml syntax

* Use only tmp_path

* Fix preprocessor signature after merge

* Fix faiss bug

* Fix Ray test

* Fix documentation issue by removing quotes from faiss type

* Update Documentation & Code Style

* use document properly in preprocessor tests

* Update Documentation & Code Style

* make preprocessor capable of handling documents

* import document

* Revert support for documents in preprocessor, do later

* Fix bug in _json_schema.py that was breaking validation

* re-enable cache

* Update Documentation & Code Style

* Simplify calling _json_schema.py from the CI

* Remove redundant ABC inheritance

* Ensure exportable_to_yaml works only on implementations

* Rename subclass to class_ in Meta

* Make run() and get_config() abstract in BasePipeline

* Revert unintended change in preprocessor

* Move outgoing_edges_input_node check inside try block

* Rename VALID_CODE_GEN_INPUT_REGEX into VALID_INPUT_REGEX

* Add check for a RecursionError on validate_config_strings

* Address usages of _pipeline_config in data silo and elasticsearch

* Rename _pipeline_config into _init_parameters

* Fix pytest marker and remove unused imports

* Remove most redundant ABCs

* Rename _init_parameters into _component_configuration

* Remove set_config and type from _component_configuration's dict

* Remove last instances of set_config and replace with super().__init__()

* Implement __init_subclass__ approach

* Simplify checks on the existence of _component_configuration

* Fix faiss issue

* Dynamic generation of node schemas & weed out old schemas

* Add debatable test

* Add docstring to debatable test

* Positive diff between schemas implemented

* Improve diff printing

* Rename REST API YAML files to trigger IDE validation

* Fix typing issues

* Fix more typing

* Typo in YAML filename

* Remove needless type:ignore

* Add tests

* Fix tests & validation feedback for accessory classes in custom nodes

* Refactor RAGeneratorType out

* Fix broken import in conftest

* Improve source error handling

* Remove unused import in test_eval.py breaking tests

* Fix changed error message in tests matches too

* Normalize generate_openapi_specs.py and generate_json_schema.py in the actions

* Fix path to generate_openapi_specs.py in autoformat.yml

* Update Documentation & Code Style

* Add test for FAISSDocumentStore-like situations (superclass with init params)

* Update Documentation & Code Style

* Fix indentation

* Remove commented set_config

* Store model_name_or_path in FARMReader to use in DistillationDataSilo

* Rename _component_configuration into _component_config

* Update Documentation & Code Style

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2022-03-15 11:17:26 +01:00


import logging
from pathlib import Path

import pytest
from transformers import AutoTokenizer

from haystack.modeling.data_handler.processor import SquadProcessor
from haystack.modeling.model.tokenization import Tokenizer

from .conftest import SAMPLES_PATH


# during inference (parameter return_baskets = True) we do not convert labels
def test_dataset_from_dicts_qa_inference(caplog=None):
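    """
    Regression check for SQuAD preprocessing in inference mode (return_baskets=True):
    for every model, the tensor names, basket ids, token counts, and leading input ids
    must stay exactly as pinned in the assertions below.
    """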
    if caplog:
        caplog.set_level(logging.CRITICAL)

    models = [
        "deepset/roberta-base-squad2",
        "deepset/bert-base-cased-squad2",
        "deepset/xlm-roberta-large-squad2",
        "deepset/minilm-uncased-squad2",
        "deepset/electra-base-squad2",
    ]
    sample_types = ["answer-wrong", "answer-offset-wrong", "noanswer", "vanilla"]

    for model in models:
        tokenizer = Tokenizer.load(pretrained_model_name_or_path=model, use_fast=True)
        processor = SquadProcessor(tokenizer, max_seq_len=256, data_dir=None)

        for sample_type in sample_types:
            dicts = processor.file_to_dicts(SAMPLES_PATH / "qa" / f"{sample_type}.json")
            dataset, tensor_names, problematic_sample_ids, baskets = processor.dataset_from_dicts(
                dicts, indices=[1], return_baskets=True
            )
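            # the expected tensor names double as a fingerprint of the preprocessing output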
            assert tensor_names == [
                "input_ids",
                "padding_mask",
                "segment_ids",
                "passage_start_t",
                "start_of_word",
                "labels",
                "id",
                "seq_2_start_t",
                "span_mask",
            ], f"Processing for {model} has changed."
            assert len(problematic_sample_ids) == 0, f"Processing for {model} has changed."
            assert baskets[0].id_external == "5ad3d560604f3c001a3ff2c8", f"Processing for {model} has changed."
            assert baskets[0].id_internal == "1-0", f"Processing for {model} has changed."
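
            # each model-specific block pins tokenizer-dependent token counts and the leading input ids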
            # roberta
            if model == "deepset/roberta-base-squad2":
                assert (
                    len(baskets[0].samples[0].tokenized["passage_tokens"]) == 6
                ), f"Processing for {model} has changed."
                assert (
                    len(baskets[0].samples[0].tokenized["question_tokens"]) == 7
                ), f"Processing for {model} has changed."
                if sample_type == "noanswer":
                    assert baskets[0].samples[0].features[0]["input_ids"][:13] == [
                        0, 6179, 171, 82, 697, 11, 2201, 116, 2, 2, 26795, 2614, 34
                    ], f"Processing for {model} and {sample_type}-testsample has changed."
                else:
                    assert baskets[0].samples[0].features[0]["input_ids"][:13] == [
                        0, 6179, 171, 82, 697, 11, 5459, 116, 2, 2, 26795, 2614, 34
                    ], f"Processing for {model} and {sample_type}-testsample has changed."

            # bert
            if model == "deepset/bert-base-cased-squad2":
                assert (
                    len(baskets[0].samples[0].tokenized["passage_tokens"]) == 5
                ), f"Processing for {model} has changed."
                assert (
                    len(baskets[0].samples[0].tokenized["question_tokens"]) == 7
                ), f"Processing for {model} has changed."
                if sample_type == "noanswer":
                    assert baskets[0].samples[0].features[0]["input_ids"][:10] == [
                        101, 1731, 1242, 1234, 1686, 1107, 2123, 136, 102, 3206
                    ], f"Processing for {model} and {sample_type}-testsample has changed."
                else:
                    assert baskets[0].samples[0].features[0]["input_ids"][:10] == [
                        101, 1731, 1242, 1234, 1686, 1107, 3206, 136, 102, 3206
                    ], f"Processing for {model} and {sample_type}-testsample has changed."

            # xlm-roberta
            if model == "deepset/xlm-roberta-large-squad2":
                assert (
                    len(baskets[0].samples[0].tokenized["passage_tokens"]) == 7
                ), f"Processing for {model} has changed."
                assert (
                    len(baskets[0].samples[0].tokenized["question_tokens"]) == 7
                ), f"Processing for {model} has changed."
                if sample_type == "noanswer":
                    assert baskets[0].samples[0].features[0]["input_ids"][:12] == [
                        0, 11249, 5941, 3395, 6867, 23, 7270, 32, 2, 2, 10271, 1556
                    ], f"Processing for {model} and {sample_type}-testsample has changed."
                else:
                    assert baskets[0].samples[0].features[0]["input_ids"][:12] == [
                        0, 11249, 5941, 3395, 6867, 23, 10271, 32, 2, 2, 10271, 1556
                    ], f"Processing for {model} and {sample_type}-testsample has changed."

            # minilm and electra have same vocab + tokenizer
            if model == "deepset/minilm-uncased-squad2" or model == "deepset/electra-base-squad2":
                assert (
                    len(baskets[0].samples[0].tokenized["passage_tokens"]) == 5
                ), f"Processing for {model} has changed."
                assert (
                    len(baskets[0].samples[0].tokenized["question_tokens"]) == 7
                ), f"Processing for {model} has changed."
                if sample_type == "noanswer":
                    assert baskets[0].samples[0].features[0]["input_ids"][:10] == [
                        101, 2129, 2116, 2111, 2444, 1999, 3000, 1029, 102, 4068
                    ], f"Processing for {model} and {sample_type}-testsample has changed."
                else:
                    assert baskets[0].samples[0].features[0]["input_ids"][:10] == [
                        101, 2129, 2116, 2111, 2444, 1999, 4068, 1029, 102, 4068
                    ], f"Processing for {model} and {sample_type}-testsample has changed."


def test_batch_encoding_flatten_rename():
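    """
    flatten_rename() turns a HuggingFace BatchEncoding into a list of per-sample feature
    dicts, optionally renaming keys. Covers the happy path plus empty/missing-argument
    edge cases.
    """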
    from haystack.modeling.data_handler.dataset import flatten_rename

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch_sentences = ["Hello I'm a single sentence", "And another sentence", "And the very very last one"]
    encoded_inputs = tokenizer(batch_sentences, padding=True, truncation=True)

    keys = ["input_ids", "token_type_ids", "attention_mask"]
    rename_keys = ["input_ids", "segment_ids", "padding_mask"]
    features_flat = flatten_rename(encoded_inputs, keys, rename_keys)
    assert len(features_flat) == 3, "should have three elements in the feature dict list"
    for e in features_flat:
        for k in rename_keys:
            assert k in e, f"feature dict list item {e} should have a key {k}"

    # no keys/rename_keys given: the original keys are kept
    features_flat = flatten_rename(encoded_inputs)
    assert len(features_flat) == 3, "should have three elements in the feature dict list"
    for e in features_flat:
        for k in keys:
            assert k in e, f"feature dict list item {e} should have a key {k}"
    # empty input keys
    flatten_rename(encoded_inputs, [])
    # empty keys and rename keys
    flatten_rename(encoded_inputs, [], [])
    # no encoding_batch provided
    flatten_rename(None, [], [])
    # keys and rename_keys of different sizes must raise
    with pytest.raises(AssertionError):
        flatten_rename(encoded_inputs, [], ["blah"])


def test_dataset_from_dicts_qa_labelconversion(caplog=None):
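    """
    Regression check for SQuAD label conversion (return_baskets=False): malformed answers
    must be flagged as problematic samples, and the start/end label indices must stay
    exactly as pinned in the assertions below.
    """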
    if caplog:
        caplog.set_level(logging.CRITICAL)

    models = [
        "deepset/roberta-base-squad2",
        "deepset/bert-base-cased-squad2",
        "deepset/xlm-roberta-large-squad2",
        "deepset/minilm-uncased-squad2",
        "deepset/electra-base-squad2",
    ]
    sample_types = ["answer-wrong", "answer-offset-wrong", "noanswer", "vanilla"]

    for model in models:
        tokenizer = Tokenizer.load(pretrained_model_name_or_path=model, use_fast=True)
        processor = SquadProcessor(tokenizer, max_seq_len=256, data_dir=None)

        for sample_type in sample_types:
            dicts = processor.file_to_dicts(SAMPLES_PATH / "qa" / f"{sample_type}.json")
            dataset, tensor_names, problematic_sample_ids = processor.dataset_from_dicts(
                dicts, indices=[1], return_baskets=False
            )
            if sample_type == "answer-wrong" or sample_type == "answer-offset-wrong":
                assert len(problematic_sample_ids) == 1, f"Processing labels for {model} has changed."

            if sample_type == "noanswer":
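                # no-answer labels point both start and end at index 0 (the CLS token); -1 pads unused answer slots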
                assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 0, :]) == [
                    0, 0
                ], f"Processing labels for {model} has changed."
                assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 1, :]) == [
                    -1, -1
                ], f"Processing labels for {model} has changed."

            if sample_type == "vanilla":
                # roberta
                if model == "deepset/roberta-base-squad2":
                    assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 0, :]) == [
                        13, 13
                    ], f"Processing labels for {model} has changed."
                    assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 1, :]) == [
                        13, 14
                    ], f"Processing labels for {model} has changed."
                # bert, minilm, electra
                if (
                    model == "deepset/bert-base-cased-squad2"
                    or model == "deepset/minilm-uncased-squad2"
                    or model == "deepset/electra-base-squad2"
                ):
                    assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 0, :]) == [
                        11, 11
                    ], f"Processing labels for {model} has changed."
                # xlm-roberta
                if model == "deepset/xlm-roberta-large-squad2":
                    assert list(dataset.tensors[tensor_names.index("labels")].numpy()[0, 0, :]) == [
                        12, 12
                    ], f"Processing labels for {model} has changed."


if __name__ == "__main__":
    test_dataset_from_dicts_qa_labelconversion()
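    # NOTE: running the file directly only exercises the label-conversion test; use pytest to run the whole module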