fix: forcing response format to be JSON valid (#7692)

* forcing response format to be JSON valid

* adding release notes

* cleaning up

* Update haystack/components/evaluators/llm_evaluator.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
David S. Batista 2024-05-14 12:22:38 +02:00 committed by GitHub
parent a2be90b95a
commit 75cf35c743
2 changed files with 9 additions and 1 deletion


@@ -87,7 +87,9 @@ class LLMEvaluator:
         self.api_key = api_key
         if api == "openai":
-            self.generator = OpenAIGenerator(api_key=api_key)
+            self.generator = OpenAIGenerator(
+                api_key=api_key, generation_kwargs={"response_format": {"type": "json_object"}}
+            )
         else:
             raise ValueError(f"Unsupported API: {api}")
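The effect of the change above can be sketched without Haystack or an API key: JSON mode is requested by passing `response_format={"type": "json_object"}` in the generation kwargs, after which the model's reply is guaranteed to be parseable with `json.loads`. In this minimal sketch, `fake_openai_chat` is a hypothetical stand-in for the real OpenAI call, used only to illustrate the contract:

```python
import json


def fake_openai_chat(prompt: str, generation_kwargs: dict) -> str:
    # Hypothetical stand-in for an OpenAI chat completion call.
    # With JSON mode enabled, the model is constrained to emit valid JSON;
    # without it, the reply may be free-form text that fails to parse.
    if generation_kwargs.get("response_format", {}).get("type") == "json_object":
        return '{"score": 1, "reason": "answer matches context"}'
    return "Score: 1 (answer matches context)"


kwargs = {"response_format": {"type": "json_object"}}
reply = fake_openai_chat("Evaluate the answer.", kwargs)
result = json.loads(reply)  # safe: JSON mode guarantees parseable output
print(result["score"])
```

This is why the evaluator no longer needs defensive parsing of the generator's output: the format constraint is pushed down to the API itself.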


@@ -0,0 +1,6 @@
+---
+enhancements:
+  - |
+    Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output.
+    This ensures that the output is always in a consistent format, regardless of the input.