* Add unit test mark for appropriate tests
* Remove deepset Cloud specific tests
* Create pytest fixtures
* Reduce number of checks run for test_match_context_multi_process and test_match_context_single_process
* Increase speed of test_match_contexts_multi_process
* Revert "Remove deepset Cloud specific tests"
This reverts commit b65173665f3e873f17f3613c5fd4fa3174a6d71b.
* Continue reverting the previous commit
* Remove unnecessary comment
* Break down bigger test into smaller tests
* Use `urlparse` to get the file extension for URLs that contain text after the file extension, such as query parameters (sketched below)
* Run pre-commit to fix formatting
* Reformat import_utils
* Document get_filename_extension_from_url
* Formatting
* Update haystack/utils/import_utils.py
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
---------
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
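The `urlparse`-based file-extension fix above can be sketched roughly as follows; the real implementation lives in `haystack/utils/import_utils.py`, so treat the names and the exact return shape here as illustrative:

```python
from posixpath import splitext
from typing import Tuple
from urllib.parse import urlparse


def get_filename_extension_from_url(url: str) -> Tuple[str, str]:
    """Extract the file extension and file name from a URL, ignoring anything
    after the path, e.g. query parameters such as "?download=true"."""
    parsed = urlparse(url)  # separates the path from "?query" and "#fragment"
    root, extension = splitext(parsed.path)
    file_name = root.split("/")[-1] + extension
    return extension.lstrip("."), file_name


# get_filename_extension_from_url("https://example.com/data/archive.tar.gz?raw=1")
# -> ("gz", "archive.tar.gz")
```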
* fix: Fix `print_answers` for the output of `run_batch` (#4255); usage sketch below
* fix: print "Answers" label even with no query list
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
* test: add unit tests for `print_answers` on `run`, `run_batch` output (#4255)
---------
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
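To illustrate the `print_answers` fix above: the util now prints the "Answers" label for both `Pipeline.run()` output (one answer list) and `Pipeline.run_batch()` output (a list of answer lists). A minimal usage sketch, assuming an already-built QA `pipeline`:

```python
from haystack.utils import print_answers

# Pipeline.run() returns {"query": ..., "answers": [Answer, ...]}
result = pipeline.run(query="Who created the Python language?")
print_answers(result, details="minimum")

# Pipeline.run_batch() returns {"queries": [...], "answers": [[Answer, ...], ...]}
batch_result = pipeline.run_batch(queries=["Who created Python?", "When was it released?"])
print_answers(batch_result, details="minimum")
```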
* Refactor the to_squad data class
* Fix the validation label
* Add a test for the `to_label_objs` function (see the sketch below)
* Fix the tests for `to_label_objs`
* Move all tests related to SQuAD data into one file
* Remove unused imports
* Revert tiny_augmented.json
Co-authored-by: ZanSara <sarazanzo94@gmail.com>
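The SQuAD commits above revolve around the `SquadData` helper and its `to_label_objs` conversion. A minimal sketch of what the new tests exercise, assuming the v1 `SquadData` API (the file path is a placeholder):

```python
from haystack.utils.squad_data import SquadData

# Load a SQuAD-format dataset and turn its QA pairs into Haystack Label objects
squad = SquadData.from_file("data/squad_small.json")  # placeholder path
labels = squad.to_label_objs()
print(f"Converted {len(labels)} labels; first query: {labels[0].query}")
```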
* feat: fetch results for DeepsetCloudExperiments (usage sketch below)
* chore: test DC fetching of predictions for an eval run
* chore: switch to dict iteration with .items()
* chore: update the DC URL to fetch predictions from
* chore: update doc strings for fetching eval run results
* chore: update DeepsetCloudExperiments description, change function names for fetching predictions of an eval run
* chore: test for DeepsetCloudExperiments.get_run_results
* chore: adjust request mock for test_get_eval_run_results
* chore: push first row of dataframe into variable for test checks
* chore: adjust mock data to correct data types
* chore: make documentation more readable with line breaks
* chore: update documentation for eval run result fetching
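The eval-run commits above add fetching the predictions of a finished eval run via `DeepsetCloudExperiments.get_run_results`, which returns a pandas DataFrame. A hedged usage sketch; the argument names and credentials below are assumptions, so check the updated docstrings for the exact ones:

```python
from haystack.utils import DeepsetCloudExperiments

# Fetch predictions of a finished eval run as a pandas DataFrame.
# Argument names are illustrative, not confirmed by the changelog.
results_df = DeepsetCloudExperiments.get_run_results(
    eval_run_name="my-eval-run",
    workspace="default",
    api_key="<DEEPSET_CLOUD_API_KEY>",
)
first_row = results_df.iloc[0]  # the tests above assert on the first DataFrame row
print(first_row)
```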
* Do not show success message on failed evalset upload
* Update Documentation & Code Style
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Add possibility to upload evaluation sets to DC (usage sketch below)
* Fix test_eval SAS comparisons
* Apply quick-win docstring feedback changes
* Add a hint about the annotation tool and mark optional and required columns
* minor changes to docstrings
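The evaluation-set commits above add uploading evaluation sets (e.g. annotated with the Haystack annotation tool, with required and optional columns marked in the docs) to deepset Cloud. A minimal sketch, assuming the v1 `DeepsetCloud` client factory; the file path is a placeholder:

```python
from haystack.utils import DeepsetCloud

# Upload a CSV evaluation set to deepset Cloud.
# A success message is shown only if the upload actually succeeded (see the fix above).
evaluation_set_client = DeepsetCloud.get_evaluation_set_client()
evaluation_set_client.upload_evaluation_set(filepath="my_evaluation_set.csv")
```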