Sebastian Husch Lee
85258f0654
fix: Fix types and formatting in pipeline test_run.py (#9575)
* Fix types in test_run.py
* Get test_run.py to pass fmt-check
* Add test_run to mypy checks
* Update test folder to pass ruff linting
* Fix merge
* Fix HF tests
* Fix hf test
* Try to fix tests
* Another attempt
* minor fix
* fix SentenceTransformersDiversityRanker
* skip integration tests because the model is unavailable on HF Inference
---------
Co-authored-by: anakin87 <stefanofiorucci@gmail.com>
2025-07-03 09:49:09 +02:00
David S. Batista
da60156174
chore: removing unused imports from tests (#9446)
2025-05-26 16:22:51 +00:00
Stefano Fiorucci
656fe6dc6e
chore: LLM Evaluators - remove deprecated parameters (#9219)
2025-04-15 09:26:31 +02:00
Stefano Fiorucci
adc3dfc5d2
refactor: LLM evaluators - introduce chat_generator init param; deprecate api, api_key and api_params (#9122)
* start
* progress
* tests for deserialize_chatgenerator_inplace
* progress on llmevaluator + tests
* update context relevance evaluator
* update faithfulness evaluator + tests
* release note
* rm unused import
* rm indentation
2025-03-31 15:35:03 +02:00
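Since this refactor, the way to choose the underlying model is to pass a ChatGenerator instance at init time instead of the deprecated api/api_key/api_params trio. A minimal sketch, assuming the evaluators' documented requirement that the LLM be configured to return a JSON object (model name and kwargs are illustrative):

    from haystack.components.evaluators import ContextRelevanceEvaluator
    from haystack.components.generators.chat import OpenAIChatGenerator

    # The evaluators expect the LLM to reply with a JSON object, so the
    # chat generator is configured accordingly (assumption based on the
    # component docs; model name is illustrative).
    chat_generator = OpenAIChatGenerator(
        model="gpt-4o-mini",
        generation_kwargs={"response_format": {"type": "json_object"}},
    )
    evaluator = ContextRelevanceEvaluator(chat_generator=chat_generator)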
Stefano Fiorucci
e4cf460bf6
refactor!: use Chat Generator in LLM evaluators (#9116)
* use chatgenerator instead of generator
* rename generator to _chat_generator
* rm print
* Update releasenotes/notes/llm-evaluators-chat-generator-bf930fa6db019714.yaml
Co-authored-by: David S. Batista <dsbatista@gmail.com>
---------
Co-authored-by: David S. Batista <dsbatista@gmail.com>
2025-03-26 15:38:56 +01:00
Sriniketh J
066e2e3ec5
Make api_key param optional in LLMEvaluator (#8340)
2024-09-20 10:47:13 +02:00
Ulises M
6f8834d036
feat: add and expose api_params for OpenAIGenerator in LLMEvaluator based classes (#7987)
* initial support for api_params
* add tests and reno
* resolve suggestions and add integration test
* fix mypy
---------
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-07-11 13:14:03 +02:00
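Judging by the PR title, api_params is forwarded to the wrapped OpenAIGenerator's init. A hedged sketch; the exact accepted keys are an assumption here, though generation_kwargs is a real OpenAIGenerator init parameter:

    from typing import List
    from haystack.components.evaluators import LLMEvaluator

    evaluator = LLMEvaluator(
        instructions="Is this answer concise?",
        inputs=[("predicted_answers", List[str])],
        outputs=["score"],
        examples=[{"inputs": {"predicted_answers": "Yes."}, "outputs": {"score": 1}}],
        # Forwarded to the wrapped OpenAIGenerator; the keys shown are illustrative.
        api_params={"generation_kwargs": {"seed": 42, "temperature": 0.0}},
    )

Note that the entries above show this parameter later being deprecated (#9122) and removed (#9219).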
David S. Batista
186512459d
feat: LLM-based evaluators return meta info from OpenAI (#7947)
* LLM-Evaluator returns metadata from OpenAI
* adding tests
* adding release notes
* updating test
* updating release notes
* fixing live tests
* attending PR comments
* fixing tests
* Update releasenotes/notes/adding-metadata-info-from-OpenAI-f5309af5f59bb6a7.yaml
Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
* Update llm_evaluator.py
---------
Co-authored-by: Stefano Fiorucci <stefanofiorucci@gmail.com>
2024-07-02 11:31:51 +02:00
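After this change, the run() output of the LLM-based evaluators also surfaces the metadata OpenAI returns. A sketch of reading it, assuming the key is named meta as the release-note title suggests:

    from typing import List
    from haystack.components.evaluators import LLMEvaluator

    evaluator = LLMEvaluator(
        instructions="Is this answer concise?",
        inputs=[("predicted_answers", List[str])],
        outputs=["score"],
        examples=[{"inputs": {"predicted_answers": "Yes."}, "outputs": {"score": 1}}],
    )
    result = evaluator.run(predicted_answers=["Football is the most popular sport."])
    print(result["results"])  # the per-input scores
    print(result["meta"])     # OpenAI metadata, e.g. model and token usage (key name assumed)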
Madeesh Kannan
63226dad34
fix: Fix LLMEvaluator serialization (#7818)
* fix: Fix `LLMEvaluator` serialization
* `reno`
2024-06-07 12:49:23 +02:00
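The affected code path is the standard component serialization round trip; a minimal sketch using the regular to_dict/from_dict hooks that every Haystack component exposes:

    from typing import List
    from haystack.components.evaluators import LLMEvaluator

    evaluator = LLMEvaluator(
        instructions="Is this answer concise?",
        inputs=[("predicted_answers", List[str])],
        outputs=["score"],
        examples=[{"inputs": {"predicted_answers": "Yes."}, "outputs": {"score": 1}}],
    )
    data = evaluator.to_dict()               # serialize init parameters to a dict
    restored = LLMEvaluator.from_dict(data)  # rebuild an equivalent component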
David S. Batista
38747ff7a3
fix: failsafe for non-valid JSON and failed LLM calls (#7723)
* wip
* initial import
* adding tests
* adding params
* adding safeguards for nan in evaluators
* adding docstrings
* fixing tests
* removing unused imports
* adding tests to context and faithfulness evaluators
* fixing docstrings
* nit
* removing unused imports
* adding release notes
* attending PR comments
* fixing tests
* fixing tests
* adding types
* removing unused imports
* Update haystack/components/evaluators/context_relevance.py
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Update haystack/components/evaluators/faithfulness.py
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* attending PR comments
---------
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
2024-05-23 15:41:29 +00:00
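This is the change that lets an evaluation run degrade gracefully instead of aborting on a single bad LLM reply. A sketch, assuming the safeguard is exposed as a raise_on_failure init flag and that failures surface as NaN scores, as the commit bullets about the failsafe and NaN handling suggest:

    from haystack.components.evaluators import FaithfulnessEvaluator

    # With raise_on_failure=False, a failed LLM call or a non-valid JSON
    # reply is recorded as a NaN score for that example instead of raising
    # (behavior inferred from the commit messages above).
    evaluator = FaithfulnessEvaluator(raise_on_failure=False)
    result = evaluator.run(
        questions=["Who created the Python language?"],
        contexts=[["Python was created by Guido van Rossum in the late 1980s."]],
        predicted_answers=["Python was created by Guido van Rossum."],
    )
    print(result["individual_scores"])  # NaN entries mark failed evaluations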
David S. Batista
a4fc2b66e6
style: adding progress bar to LLM-based evaluators (#7726)
* adding progress bar
* fixing typo
* fixing tests
* Update test_llm_evaluator.py
* fixing missing colon
* passing directly to parent
* adding docstrings
2024-05-23 09:22:14 +02:00
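The "passing directly to parent" bullet suggests this is a plain init flag on LLMEvaluator that the subclasses forward. A sketch assuming it is named progress_bar and defaults to on:

    from haystack.components.evaluators import ContextRelevanceEvaluator

    # Disable the per-example progress bar, e.g. to keep CI logs clean
    # (flag name per this commit; default assumed to be True).
    evaluator = ContextRelevanceEvaluator(progress_bar=False)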
Massimiliano Pippi
10c675d534
chore: add license header to all modules (#7675)
* add license header to modules
* check license header at linting time
2024-05-09 13:40:36 +00:00
Julian Risch
2509eeea7e
refactor: Rename FaithfulnessEvaluator input responses to predicted_answers (#7621)
2024-04-30 16:30:57 +02:00
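After the rename, FaithfulnessEvaluator.run() takes predicted_answers where it previously took responses. A minimal call with the new keyword (question and context strings are illustrative):

    from haystack.components.evaluators import FaithfulnessEvaluator

    evaluator = FaithfulnessEvaluator()
    result = evaluator.run(
        questions=["Who created the Python language?"],
        contexts=[["Python was created by Guido van Rossum in the late 1980s."]],
        predicted_answers=["Python was created by Guido van Rossum."],  # formerly `responses`
    )
    print(result["score"], result["individual_scores"])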
Julian Risch
8ef6062748
refactor: Remove name 'llm' from LLMEvaluator output (#7479)
2024-04-04 15:19:30 +00:00
Julian Risch
bfd0d3eacd
feat: Add new LLMEvaluator component (#7401)
* draft llm evaluator
* docstrings
* flexible inputs; validate inputs and outputs
* add tests
* add release note
* remove example
* docstrings
* make outputs parameter optional. default:
* validate init parameters
* linting
* remove mention of binary scores from template
* make examples and outputs params non-optional
* removed leftover from optional outputs param
* simplify building examples section for template
* validate inputs and outputs in examples are dict with str as key
* fix pylint too-many-boolean-expressions
* increase test coverage
2024-03-25 07:05:27 +01:00
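For reference, the component introduced here is the user-defined LLM evaluator: you describe the task in instructions, declare typed inputs and the output names, and supply few-shot examples. A sketch close to the component's documented usage (an OPENAI_API_KEY environment variable is required at this point in the history):

    from typing import List
    from haystack.components.evaluators import LLMEvaluator

    evaluator = LLMEvaluator(
        instructions="Is this answer problematic for children?",
        inputs=[("predicted_answers", List[str])],  # typed inputs, validated at init
        outputs=["score"],                          # keys the LLM must return
        examples=[
            {"inputs": {"predicted_answers": "Damn, this is straight outta hell!!!"},
             "outputs": {"score": 1}},
            {"inputs": {"predicted_answers": "Football is the most popular sport."},
             "outputs": {"score": 0}},
        ],
    )
    results = evaluator.run(predicted_answers=["Football is the most popular sport."])
    # each input yields a dict with the declared output keys, e.g. {"score": 0};
    # note the top-level output name was later reworked in #7479 above
    print(results)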