* #4320 implemented dynamic max_answers for SquadProcessor and fixed an IndexError raised when max_answers is smaller than the number of answers in the dataset (see the sketch after this commit's notes)
* #4320 added two unit tests for dataset_from_dicts testing default and manual max_answers
* apply suggestions from code review
Co-authored-by: bogdankostic <bogdankostic@web.de>
* simplify comment, fix mypy & pylint errors, fix old test
* adjust max_answers to each dataset individually
---------
Co-authored-by: bogdankostic <bogdankostic@web.de>
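A minimal sketch of how the new parameter might be used, assuming the Haystack 1.x SquadProcessor signature (tokenizer, max_seq_len, data_dir, label_list, metric, max_answers) and SQuAD-style input dicts for dataset_from_dicts; names, defaults, and the return tuple below are assumptions, not the PR's exact code.

from transformers import AutoTokenizer
from haystack.modeling.data_handler.processor import SquadProcessor

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
processor = SquadProcessor(
    tokenizer=tokenizer,
    max_seq_len=384,
    data_dir=None,                            # assumed unnecessary when feeding dicts directly
    label_list=["start_token", "end_token"],
    metric="squad",
    max_answers=6,                            # manual cap; omit to let it adapt to each dataset
)

squad_dicts = [
    {
        "context": "Berlin is the capital of Germany.",
        "qas": [
            {
                "id": "1",
                "question": "What is the capital of Germany?",
                "answers": [{"text": "Berlin", "answer_start": 0}],
                "is_impossible": False,
            }
        ],
    }
]
# Return values assumed from the 1.x processor API; adjust to your version.
dataset, tensor_names, problematic_ids = processor.dataset_from_dicts(dicts=squad_dicts)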
* fix json serialization
* add missing markers
* pylint
* fix decoder bug
* pylint
* add some more tests
* linting & windows
* windows
* windows
* windows paths again
* Add step to look up tokenizers by prefix in openai_utils (hedged sketch after this commit's notes)
* Updated tiktoken min version + openai_utils test
* Added test case for GPT-4 and Azure model naming
* Broken down tests
* Added default case
---------
Co-authored-by: ZanSara <sara.zanzottera@deepset.ai>
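A hedged illustration of the prefix lookup idea, not the actual openai_utils code: when tiktoken does not know a model name exactly (for instance an Azure-style deployment name), fall back to a prefix match and finally to a default encoding. The prefix table and function name below are assumptions for illustration.

import tiktoken

# Illustrative prefix -> encoding mapping; the real table lives in openai_utils.
MODEL_PREFIX_TO_ENCODING = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
    "text-davinci": "p50k_base",
}

def tokenizer_for_model(model_name: str) -> tiktoken.Encoding:
    """Return a tiktoken encoding, falling back to a prefix lookup for unrecognized names."""
    try:
        # Exact match works for canonical OpenAI model names.
        return tiktoken.encoding_for_model(model_name)
    except KeyError:
        # Prefix fallback, e.g. GPT-4 variants or Azure model/deployment names.
        for prefix, encoding_name in MODEL_PREFIX_TO_ENCODING.items():
            if model_name.startswith(prefix):
                return tiktoken.get_encoding(encoding_name)
    # Default case, as mentioned in the commit above.
    return tiktoken.get_encoding("cl100k_base")

print(len(tokenizer_for_model("gpt-4-32k").encode("Hello world")))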
* add conversion script
* run job in CI
* typo
* invoke python
* install toml
* fix pylint error
* more exclusions
* add toml to dev dependencies
* fix exclusions list
* fix mypy and remove test clause
* try
* add exclusions
* fix vanilla distribution
* use different requirements files
* fix comments and file name
* try with a recent version of pip
* use cpu version of torch
* try
* again
* exclude nvidia libraries
* revert old change
* send report to FOSSA
* add gpu section
* display job names
* remove FOSSA check
* send complete report to FOSSA
* removed FIXME
* Updated text_label tests to match table_label tests. Also added answer text as part of the Answer.__eq__ comparison.
* Updated text document unit tests to match ones from table docs
* Converting text answer unit tests to match table answer
* Update some document tests
* Minor update
* Separating unit tests
* preserve root_node and add tests
* Added if statement to fix failing tests
---------
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
Co-authored-by: Sebastian Husch Lee <sjrl423@gmail.com>
* Deprecate name parameter of PromptTemplate (example after this commit's notes)
* Adapt existing tests and uses of PromptTemplate
* Move parameter `name` to end
* Adapt existing tests
* lg update
---------
Co-authored-by: Darja Fokina <daria.f93@gmail.com>
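A hedged before/after sketch of the deprecation, assuming prompt_text remains the text argument at this point and that name simply moves to the end of the signature; check the PromptTemplate signature in your Haystack version.

from haystack.nodes import PromptNode, PromptTemplate

# Before: `name` was the leading parameter (now deprecated).
# template = PromptTemplate(name="my-qa", prompt_text="Answer the question: {query}")

# After: pass only the template text; `name` sits at the end and can be omitted.
template = PromptTemplate(prompt_text="Answer the question: {query}")

prompt_node = PromptNode(model_name_or_path="google/flan-t5-base")
print(prompt_node.prompt(prompt_template=template, query="What is the capital of Germany?"))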
* Add support for dicts to Weaviate
* Add support for _split_overlap to Pinecone
* Add tests
* Fix Pylint
* Fix Pylint
* Fix test
* Implement PR feedback
* Extract ToolsManager and add it to Agent by composition (sketch after this commit's notes)
* PR feedback Massi
---------
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
Co-authored-by: Darja Fokina <daria.f93@gmail.com>
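A minimal sketch of that composition, assuming ToolsManager lives in haystack.agents.base, takes its tools via the tools argument, and that a PromptNode can back a Tool directly; model names and the API key are placeholders.

from haystack.agents import Agent, Tool
from haystack.agents.base import ToolsManager
from haystack.nodes import PromptNode

# The agent's own model and the tool's model are placeholders for illustration.
agent_prompt_node = PromptNode(model_name_or_path="gpt-3.5-turbo", api_key="YOUR_OPENAI_KEY", stop_words=["Observation:"])
summarizer_node = PromptNode(model_name_or_path="google/flan-t5-base", default_prompt_template="summarization")

tools_manager = ToolsManager(
    tools=[
        Tool(
            name="Summarizer",
            pipeline_or_node=summarizer_node,
            description="Useful for summarizing a piece of text",
        )
    ]
)

# The Agent now receives the ToolsManager by composition instead of keeping its own tool registry.
agent = Agent(prompt_node=agent_prompt_node, tools_manager=tools_manager)
result = agent.run(query="Summarize: Berlin is the capital of Germany and its largest city.")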
* Adding support for table Documents when serializing Labels in Haystack
* Fix table label equality test
* Add serialization support and __eq__ support for table answers
* Made convenience functions for converting DataFrames. Added some TODOs. Expanded schema tests for table labels. Updated MultiLabel to not convert DataFrames into strings.
* get Answer and Label to_json working with DataFrame (sketch after this commit's notes)
* Fix from_dict method of Label
* Use Dict and remove unnecessary if check
* Using pydantic instead of builtins for type detection
* Update haystack/schema.py
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
* Update haystack/schema.py
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
* Update haystack/schema.py
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
* Separated table label equivalency tests and added pytest.mark.unit
* Added unit test for _dict_factory
* Using more descriptive variable names
* Adding json files to test to_json and from_json functions
* Added sample files for tests
---------
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
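A hedged round-trip sketch of what this serialization support enables, assuming the Haystack 1.x schema API (Document and Answer with to_json/from_json) and a pandas DataFrame as the table content; the data is illustrative.

import pandas as pd
from haystack.schema import Answer, Document

# A table Document carries a DataFrame as its content.
table = pd.DataFrame({"country": ["Germany", "France"], "capital": ["Berlin", "Paris"]})
table_doc = Document(content=table, content_type="table")

# Round-trip through JSON: the DataFrame is converted to a serializable form and restored.
restored_doc = Document.from_json(table_doc.to_json())
assert restored_doc.content.equals(table_doc.content)

# Answers over tables serialize the same way, and __eq__ now also compares the answer text.
answer = Answer(answer="Berlin", type="extractive", context=table)
restored_answer = Answer.from_json(answer.to_json())
assert restored_answer == answer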
When using a local model in pipeline YAML, the PromptModel cannot select the HFLocalInvocationLayer, because get_task does not support offline models.
* Local model usage:
Add the task_name parameter to model_kwargs for the local model, for example text-generation or text2text-generation (a hedged Python equivalent follows this commit message).
- name: PModel
  type: PromptModel
  params:
    model_name_or_path: /local_model_path
    model_kwargs:
      task_name: text-generation
- name: Prompter
  params:
    model_name_or_path: PModel
    default_prompt_template: question-answering
  type: PromptNode
Signed-off-by: yuanwu <yuan.wu@intel.com>
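For reference, a hedged Python equivalent of the YAML above, assuming the haystack.nodes API; the local model path is a placeholder.

from haystack.nodes import PromptModel, PromptNode

# Passing task_name explicitly lets the HFLocalInvocationLayer be selected for an
# offline/local model, since get_task cannot probe it.
prompt_model = PromptModel(
    model_name_or_path="/local_model_path",            # placeholder local path
    model_kwargs={"task_name": "text-generation"},
)
prompter = PromptNode(model_name_or_path=prompt_model, default_prompt_template="question-answering")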
* fixed test base for hub 0.13.3
* check if tests succeed from branch
* 2nd check if tests succeed from branch
* removed dependency changes
---------
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>