11 Commits

Author SHA1 Message Date
Madeesh Kannan
672bcf7e03
fix: Add constraints to set_input_type(s) based on run method (#8358)
* fix: Prevent the usage of `set_input_type(s)` when the `run` method doesn't accept kwargs; raise if `set_input_type(s)` overrides `run` method parameters

* fix: update components and tests

* reno
2024-09-12 15:58:16 +02:00
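A minimal sketch of the constraint described in the commit above, assuming the Haystack 2.x `@component` decorator and its `component.set_input_type` helper; the component and parameter names below are hypothetical.

```python
# Sketch of the constraint (assumed Haystack 2.x API): set_input_type(s) is only
# allowed when run() accepts **kwargs, and must not override a parameter that
# run() already declares explicitly.
from haystack import component


@component
class DynamicInputs:  # hypothetical component
    def __init__(self):
        # OK: run() accepts **kwargs and "extra" does not clash with a declared parameter.
        component.set_input_type(self, "extra", str, default="")

    @component.output_types(result=str)
    def run(self, query: str, **kwargs):
        return {"result": f"{query} {kwargs.get('extra', '')}".strip()}


@component
class NoKwargs:  # hypothetical component
    def __init__(self):
        # After this fix, this call raises: run() has no **kwargs to receive the dynamic input.
        component.set_input_type(self, "extra", str, default="")

    @component.output_types(result=str)
    def run(self, query: str):
        return {"result": query}
```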
David S. Batista
0c9dc008f0
fix: improve context relevancy metric (#7964)
* fixing tests

* fixing tests

* updating tests

* updating tests

* updating docstring

* adding release notes

* making the handling of insufficient information more robust

* updating docstring and release notes

* empty list instead of informative string

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* fixing tests

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>

* reverting commit

* reverting commit again

* fixing docstrings

* removing deprecation warning

* removing warning import

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
2024-07-22 15:13:46 +02:00
Ulises M
6f8834d036
feat: add and expose api_params for OpenAIGenerator in LLMEvaluator-based classes (#7987)
* initial support for api_params

* add tests and reno

* resolve suggestions and add integration test

* fix mypy

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-07-11 13:14:03 +02:00
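A hedged sketch of the new parameter in use: `api_params` is assumed to be forwarded to the underlying OpenAIGenerator, for example to set `generation_kwargs`; the concrete values and the placeholder API key are made up.

```python
# Hedged sketch: pass OpenAIGenerator constructor arguments (assumed to include
# generation_kwargs) through the api_params dict of an LLMEvaluator-based class.
import os

from haystack.components.evaluators import ContextRelevanceEvaluator

os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")  # placeholder for the sketch

evaluator = ContextRelevanceEvaluator(
    api_params={"generation_kwargs": {"temperature": 0.0, "seed": 42}},
)
```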
David S. Batista
186512459d
feat: LLM-based evaluators return meta info from OpenAI (#7947)
* LLM-Evaluator returns metadata from OpenAI

* adding tests

* adding release notes

* updating test

* updating release notes

* fixing live tests

* attending PR comments

* fixing tests

* Update releasenotes/notes/adding-metadata-info-from-OpenAI-f5309af5f59bb6a7.yaml

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>

* Update llm_evaluator.py

---------

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
2024-07-02 11:31:51 +02:00
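A hedged sketch of what the change above adds to the output, assuming the metadata returned by OpenAI (model name, token usage) is exposed under a `meta` key of the evaluator's `run()` result; an `OPENAI_API_KEY` is needed to actually run it.

```python
# Hedged sketch: read the OpenAI response metadata assumed to be returned
# alongside the scores by LLM-based evaluators after this change.
from haystack.components.evaluators import ContextRelevanceEvaluator

evaluator = ContextRelevanceEvaluator()
result = evaluator.run(
    questions=["Who created the Python language?"],
    contexts=[["Python was created by Guido van Rossum in the late 1980s."]],
)
print(result.get("meta"))  # assumed key holding model name and token usage
```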
David S. Batista
8b9eddcd94
fix: explicitly tell ContextRelevanceEvaluator that each statement should be scored (#7904)
* initial import

* adding release notes

* adding pytest decorator for live test

* make examples more readable

* updating tests

* reverting progress_bar = False
2024-06-25 16:59:37 +02:00
Madeesh Kannan
fe60eedee9
fix: Fix deserialization of pipelines that contain LLMEvaluator subclasses (#7891) 2024-06-19 13:47:38 +02:00
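A hedged sketch of the scenario this fix targets: serializing a pipeline that contains an LLMEvaluator subclass and loading it back; the placeholder API key is made up.

```python
# Hedged sketch: round-trip a pipeline containing an LLMEvaluator subclass,
# the case whose deserialization this commit fixes.
import os

from haystack import Pipeline
from haystack.components.evaluators import ContextRelevanceEvaluator

os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")  # placeholder for the sketch

pipeline = Pipeline()
pipeline.add_component("context_relevance", ContextRelevanceEvaluator())

yaml_str = pipeline.dumps()          # serialize to YAML
restored = Pipeline.loads(yaml_str)  # previously failed for LLMEvaluator subclasses
```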
David S. Batista
38747ff7a3
fix: failsafe for invalid JSON and failed LLM calls (#7723)
* wip

* initial import

* adding tests

* adding params

* adding safeguards for nan in evaluators

* adding docstrings

* fixing tests

* removing unused imports

* adding tests to context and faithfulness evaluators

* fixing docstrings

* nit

* removing unused imports

* adding release notes

* attending PR comments

* fixing tests

* fixing tests

* adding types

* removing unused imports

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update haystack/components/evaluators/faithfulness.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* attending PR comments

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-05-23 15:41:29 +00:00
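A hedged sketch of the failsafe idea described in the commit above, not Haystack's actual implementation: if the LLM call fails or the reply is not valid JSON, record NaN for that input instead of aborting the evaluation.

```python
# Generic failsafe pattern: NaN instead of a crash on failed calls or invalid JSON.
import json
import math
from typing import Callable, List


def evaluate_with_failsafe(prompts: List[str], call_llm: Callable[[str], str]) -> List[float]:
    scores: List[float] = []
    for prompt in prompts:
        try:
            reply = call_llm(prompt)        # may raise if the LLM call fails
            payload = json.loads(reply)     # may raise on invalid JSON
            scores.append(float(payload["score"]))
        except Exception:
            scores.append(math.nan)         # failsafe: keep going, mark the result as NaN
    return scores
```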
David S. Batista
798dc4a4a5
fix: avoid FaithfulnessEvaluator and ContextRelevanceEvaluator returning NaN (#7685)
* initial import

* fixing tests

* relaxing condition

* adding safeguard for ContextRelevanceEvaluator as well

* adding release notes
2024-05-14 17:08:51 +02:00
Massimiliano Pippi
10c675d534
chore: add license header to all modules (#7675)
* add license header to modules
* check license header at linting time
2024-05-09 13:40:36 +00:00
Julian Risch
9c56dbe288
test: Make ContextRelevanceEvaluator integration test more robust (#7584) 2024-04-23 16:01:25 +00:00
Julian Risch
b12e0db134
feat: Add ContextRelevanceEvaluator component (#7519)
* feat: Add ContextRelevanceEvaluator component

* reno

* fix expected inputs and example docstring

* remove responses parameter from tests

* specify inputs explicitly

* add new evaluator to api reference docs
2024-04-22 14:10:00 +02:00
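A hedged usage sketch of the component introduced here; the question and context strings are made up, an `OPENAI_API_KEY` is required by default, and the output keys shown are assumed from the evaluator family's usual `score`/`individual_scores`/`results` layout.

```python
# Hedged usage sketch of ContextRelevanceEvaluator.
from haystack.components.evaluators import ContextRelevanceEvaluator

questions = ["Who created the Python language?"]
contexts = [["Python was created by Guido van Rossum in the late 1980s."]]

evaluator = ContextRelevanceEvaluator()
result = evaluator.run(questions=questions, contexts=contexts)
print(result["score"])               # assumed aggregate score over all pairs
print(result["individual_scores"])   # assumed per-pair scores
```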