* feat: fetch results for DeepsetCloudExperiments
* chore: test fetching DC predictions for an eval run
* chore: switch to dict iteration with .items()
* chore: update DC URL for fetching predictions
* chore: update docstrings for fetching eval run results
* chore: update DeepsetCloudExperiments description, rename functions for fetching eval run predictions
* chore: add test for DeepsetCloudExperiments.get_run_results
* chore: adjust request mock for test_get_eval_run_results
* chore: store first row of dataframe in a variable for test checks
* chore: adjust mock data to use correct data types
* chore: make documentation more readable with line breaks
* chore: update documentation for eval run result fetching
* Do not show success message on failed evalset upload
* Update Documentation & Code Style
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Add possibility to upload evaluation sets to DC
* fix test_eval sas comparisons
* apply quick-win docstring changes from review feedback
* Add hint about annotation tool and mark optional and required columns
* minor changes to docstrings
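
Taken together, these changes add fetching of eval run results and uploading of evaluation sets to deepset Cloud. The sketch below shows how the new functionality might be used; only the `DeepsetCloudExperiments.get_run_results` method name and its pandas `DataFrame` return value are taken from the commits above, while the import path, the upload method name, and all parameter names are assumptions for illustration.

```python
# Minimal usage sketch under assumptions: the import path, the
# upload_evaluation_set method name, and every parameter name below are
# illustrative guesses; only get_run_results and its DataFrame return value
# are confirmed by the changes listed above.
from haystack.utils import DeepsetCloudExperiments  # assumed import path

# Upload an annotated evaluation set to deepset Cloud (method name assumed).
DeepsetCloudExperiments.upload_evaluation_set(
    file_path="my_evaluation_set.csv",  # hypothetical local file
    workspace="default",                # assumed parameter
    api_key="MY_DC_API_KEY",            # assumed parameter
)

# Fetch the predictions of a finished eval run as a pandas DataFrame.
results = DeepsetCloudExperiments.get_run_results(
    eval_run_name="my-eval-run",        # assumed parameter name
    workspace="default",
    api_key="MY_DC_API_KEY",
)
print(results.iloc[0])  # inspect the first prediction row, as the tests do
```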