4 Commits

Author / SHA1 / Message / Date
David S. Batista
55513f7521
feat: EvaluationRunResult add parameter to specify columns to keep in the comparative DataFrame (#7879)
* adding param to explicitly state which cols to keep

* updating tests

* adding release notes

* Update haystack/evaluation/eval_run_result.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update releasenotes/notes/add-keep-columns-to-EvalRunResult-comparative-be3e15ce45de3e0b.yaml

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* updating docstring

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-06-17 18:08:52 +02:00
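
This change adds a way to control which input columns survive into the comparative DataFrame. A minimal sketch of the intended usage, assuming the parameter is named keep_columns (inferred from the release-note filename above) and the Haystack 2.x constructor signature (run_name, inputs, results):

    from haystack.evaluation.eval_run_result import EvaluationRunResult

    inputs = {
        "question": ["What is the capital of France?"],
        "context": ["Paris is the capital of France."],
    }
    baseline = EvaluationRunResult(
        run_name="baseline",
        inputs=inputs,
        results={"exact_match": {"score": 1.0, "individual_scores": [1.0]}},
    )
    candidate = EvaluationRunResult(
        run_name="candidate",
        inputs=inputs,
        results={"exact_match": {"score": 0.0, "individual_scores": [0.0]}},
    )

    # Keep only the "context" input column alongside the per-question scores;
    # without the parameter, all input columns would be carried over.
    df = baseline.comparative_individual_scores_report(candidate, keep_columns=["context"])
    print(df.columns)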
David S. Batista
ce9b0ecb19
fix: EvaluationRunResult.score_report() is missing the metrics column (#7817)
* fixing the DataFrame with the aggregated scores

* fixing tests
2024-06-06 14:33:45 +02:00
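
score_report() aggregates one score per metric; the bug was that the resulting DataFrame listed the scores without naming the metrics they belong to. A short sketch of the call, reusing the baseline run constructed in the snippet above:

    report = baseline.score_report()
    # After the fix, the DataFrame should pair each metric name with its
    # aggregate score (e.g. a "metrics" column next to a "score" column).
    print(report)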
Massimiliano Pippi
10c675d534
chore: add license header to all modules (#7675)
* add license header to modules
* check license header at linting time
2024-05-09 13:40:36 +00:00
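
The headers in question are SPDX-style comments at the top of every module. A representative sketch; the exact copyright-holder line is an assumption:

    # SPDX-FileCopyrightText: 2022-present deepset GmbH <info@deepset.ai>
    #
    # SPDX-License-Identifier: Apache-2.0

Checking the header at linting time presumably means a lint step that fails when a module lacks this comment.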
Madeesh Kannan
a881451d3a
refactor: Refactor EvaluationResult into BaseEvaluationRunResult and EvaluationRunResult (#7594)
The new `EvaluationRunResult` has slightly different semantics: it splits the previous `data` parameter into `inputs` and `results`, and expects aggregate scores to be provided in the latter.
2024-04-25 12:16:48 +02:00
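
In practice the split looks as follows; a minimal sketch assuming the post-refactor Haystack 2.x API, where inputs carries per-question data and results carries per-metric aggregate and individual scores:

    from haystack.evaluation.eval_run_result import EvaluationRunResult

    result = EvaluationRunResult(
        run_name="my_eval_run",
        # per-question inputs: one list entry per evaluated example
        inputs={"question": ["Q1", "Q2"], "answer": ["A1", "A2"]},
        # per-metric results: the aggregate score lives here, not in `inputs`
        results={"exact_match": {"score": 0.5, "individual_scores": [1.0, 0.0]}},
    )
    print(result.score_report())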