17 Commits

Author SHA1 Message Date
David S. Batista
da60156174
chore: removing unused imports from tests (#9446) 2025-05-26 16:22:51 +00:00
Stefano Fiorucci
656fe6dc6e
chore: LLM Evaluators - remove deprecated parameters (#9219) 2025-04-15 09:26:31 +02:00
Stefano Fiorucci
c6df8d2c7a
test: monkeypatch OpenAI API key in some unit tests (#9173) 2025-04-04 13:33:22 +02:00
Stefano Fiorucci
adc3dfc5d2
refactor: LLM evaluators - introduce chat_generator init param; deprecate api, api_key and api_params (#9122)
* start

* progress

* tests for deserialize_chatgenerator_inplace

* progress on llmevaluator + tests

* update context relevance evaluator

* update faithfulness evaluator + tests

* release note

* rm unused import

* rm indentation
2025-03-31 15:35:03 +02:00
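For the #9122 change above, a minimal sketch of the new construction path: an LLM-based evaluator receives an explicit chat_generator instead of the deprecated api / api_key / api_params arguments. The model name, the example inputs, and the output key are assumptions for illustration; OPENAI_API_KEY is expected in the environment.

```python
# Hedged sketch for #9122: pass a Chat Generator to an LLM-based evaluator
# instead of the deprecated api / api_key / api_params init params.
# Assumes OPENAI_API_KEY is set; model name and inputs are illustrative only.
from haystack.components.evaluators import FaithfulnessEvaluator
from haystack.components.generators.chat import OpenAIChatGenerator

chat_generator = OpenAIChatGenerator(model="gpt-4o-mini")
evaluator = FaithfulnessEvaluator(chat_generator=chat_generator)

result = evaluator.run(
    questions=["Who created Python?"],
    contexts=[["Python was created by Guido van Rossum in the early 1990s."]],
    predicted_answers=["Guido van Rossum created Python."],
)
print(result["score"])  # aggregate faithfulness score (key name as commonly documented)
```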
Stefano Fiorucci
e4cf460bf6
refactor!: use Chat Generator in LLM evaluators (#9116)
* use chatgenerator instead of generator

* rename generator to _chat_generator

* rm print

* Update releasenotes/notes/llm-evaluators-chat-generator-bf930fa6db019714.yaml

Co-authored-by: David S. Batista <dsbatista@gmail.com>

---------

Co-authored-by: David S. Batista <dsbatista@gmail.com>
2025-03-26 15:38:56 +01:00
Sebastian Husch Lee
8cafcddb00
chore: Remove print statements from tests and mention of old name (#8883)
* Remove print statements from tests

* Remove mention of Canals

* Remove another mention
2025-02-20 10:24:26 +01:00
Madeesh Kannan
672bcf7e03
fix: Add constraints to set_input_type(s) based on run method (#8358)
* fix: Prevent the usage of `set_input_type(s)` when the `run` method doesn't have kwargs,
raise if `set_input_type(s)` overrides `run` method parameters

* fix: update components and tests

* reno
2024-09-12 15:58:16 +02:00
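The #8358 fix above constrains the dynamic-input API: set_input_type(s) is only meant for components whose run() accepts **kwargs, and it may not override an existing run() parameter. Below is a sketch of the pattern that remains legal; the EchoText component is invented for illustration.

```python
# Hedged sketch around #8358: declaring a dynamic input is only valid because run()
# below takes **kwargs; per the fix, doing this on a fixed-signature run() (or to
# override one of its parameters) should raise. "EchoText" is an invented example.
from haystack import component

@component
class EchoText:
    def __init__(self):
        # Declare the dynamic "text" input of type str.
        component.set_input_type(self, "text", str)

    @component.output_types(text=str)
    def run(self, **kwargs):
        return {"text": kwargs["text"]}

echo = EchoText()
print(echo.run(text="hello"))  # {'text': 'hello'}
```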
David S. Batista
0c9dc008f0
fix: improve context relevancy metric (#7964)
* fixing tests

* fixing tests

* updating tests

* updating tests

* updating docstring

* adding release notes

* making the handling of insufficient information more robust

* updating docstring and release notes

* empty list instead of informative string

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* fixing tests

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>

* reverting commit

* reverting again commit

* fixing docstrings

* removing deprecation warning

* removing warning import

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
2024-07-22 15:13:46 +02:00

Ulises M
6f8834d036
feat: add and expose api_params for OpenAIGenerator in LLMEvaluator based classes (#7987)
* initial support for api_params

* add tests and reno

* resolve suggestions and add integration test

* fix mypy

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-07-11 13:14:03 +02:00
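For the #7987 entry above, a hedged sketch of forwarding extra OpenAIGenerator settings through api_params. The specific keys shown are assumptions, not verified against the release, and a later commit in this log (#9122) deprecates api_params in favour of chat_generator.

```python
# Hedged sketch for #7987: api_params is forwarded to the underlying OpenAIGenerator.
# The keys ("model", "generation_kwargs") are assumptions; requires OPENAI_API_KEY.
from haystack.components.evaluators import ContextRelevanceEvaluator

evaluator = ContextRelevanceEvaluator(
    api_params={"model": "gpt-4o-mini", "generation_kwargs": {"temperature": 0.0}},
)
```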
David S. Batista
186512459d
feat: LLM-based evaluators return meta info from OpenAI (#7947)
* LLM-Evaluator returns metadata from OpenAI

* adding tests

* adding release notes

* updating test

* updating release notes

* fixing live tests

* attending PR comments

* fixing tests

* Update releasenotes/notes/adding-metadata-info-from-OpenAI-f5309af5f59bb6a7.yaml

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>

* Update llm_evaluator.py

---------

Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
2024-07-02 11:31:51 +02:00
David S. Batista
8b9eddcd94
fix: explicitly tell ContextRelevanceEvaluator that each statement should be scored (#7904)
* initial import

* adding release notes

* adding pytest decorator for live test

* make examples more readable

* updating tests

* reverting progress_bar = False
2024-06-25 16:59:37 +02:00
Madeesh Kannan
fe60eedee9
fix: Fix deserialization of pipelines that contain LLMEvaluator subclasses (#7891) 2024-06-19 13:47:38 +02:00
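The #7891 fix above concerns round-tripping pipelines that hold LLMEvaluator subclasses. A minimal round-trip sketch, assuming OPENAI_API_KEY is available so the evaluator can be constructed; the component name "evaluator" is arbitrary.

```python
# Hedged sketch for #7891: serialize and re-load a pipeline containing an LLM evaluator.
from haystack import Pipeline
from haystack.components.evaluators import ContextRelevanceEvaluator

pipeline = Pipeline()
pipeline.add_component("evaluator", ContextRelevanceEvaluator())

yaml_repr = pipeline.dumps()          # serialize the pipeline to a YAML string
restored = Pipeline.loads(yaml_repr)  # the deserialization path addressed by #7891
print(type(restored.get_component("evaluator")).__name__)  # ContextRelevanceEvaluator
```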
David S. Batista
38747ff7a3
fix: failsafe for non-valid json and failed LLM calls (#7723)
* wip

* initial import

* adding tests

* adding params

* adding safeguards for nan in evaluators

* adding docstrings

* fixing tests

* removing unused imports

* adding tests to context and faithfulness evaluators

* fixing docstrings

* nit

* removing unused imports

* adding release notes

* attending PR comments

* fixing tests

* fixing tests

* adding types

* removing unused imports

* Update haystack/components/evaluators/context_relevance.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update haystack/components/evaluators/faithfulness.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* attending PR comments

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-05-23 15:41:29 +00:00
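A generic illustration of the failsafe idea in the #7723 entry above: if the LLM reply is not valid JSON, or it omits the expected fields, return a NaN-filled fallback instead of raising. This is a sketch of the technique only, not the evaluator's actual code, and the helper name is invented.

```python
# Hedged sketch of the #7723 failsafe idea: tolerate non-valid JSON and incomplete
# replies by falling back to NaN values instead of crashing the evaluation run.
import json
import math

def parse_llm_json(reply: str, expected_keys: list) -> dict:
    """Return the parsed dict, or a NaN-filled fallback if the reply is unusable."""
    fallback = {key: math.nan for key in expected_keys}
    try:
        parsed = json.loads(reply)
    except json.JSONDecodeError:
        return fallback
    # Guard against replies that are valid JSON but miss required fields.
    if not all(key in parsed for key in expected_keys):
        return fallback
    return parsed

print(parse_llm_json('{"score": 1}', ["score"]))     # {'score': 1}
print(parse_llm_json("not json at all", ["score"]))  # {'score': nan}
```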
David S. Batista
798dc4a4a5
fix: avoid FaithfulnessEvaluator and ContextRelevanceEvaluator returning NaN (#7685)
* initial import

* fixing tests

* relaxing condition

* adding safeguard for ContextRelevanceEvaluator as well

* adding release notes
2024-05-14 17:08:51 +02:00
Massimiliano Pippi
10c675d534
chore: add license header to all modules (#7675)
* add license header to modules
* check license header at linting time
2024-05-09 13:40:36 +00:00
Julian Risch
9c56dbe288
test: Make ContextRelevanceEvaluator integration test more robust (#7584) 2024-04-23 16:01:25 +00:00
Julian Risch
b12e0db134
feat: Add ContextRelevanceEvaluator component (#7519)
* feat: Add ContextRelevanceEvaluator component

* reno

* fix expected inputs and example docstring

* remove responses parameter from tests

* specify inputs explicitly

* add new evaluator to api reference docs
2024-04-22 14:10:00 +02:00
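To close the log, a minimal usage sketch for the component introduced in #7519. The output key names are the ones commonly documented for this evaluator and should be treated as assumptions; OPENAI_API_KEY must be set.

```python
# Hedged usage sketch for ContextRelevanceEvaluator (#7519). Requires OPENAI_API_KEY.
from haystack.components.evaluators import ContextRelevanceEvaluator

evaluator = ContextRelevanceEvaluator()
result = evaluator.run(
    questions=["When was the Eiffel Tower completed?"],
    contexts=[["The Eiffel Tower was completed in 1889."]],
)
print(result["score"])              # aggregate score across all questions
print(result["individual_scores"])  # one score per question
```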