Fanli Lin
8d04f28e11
fix: hf agent outputs the prompt text while the openai agent does not ( #5461 )
* add skil prompt
* fix formatting
* add release note
* add release note
* Update releasenotes/notes/add-skip-prompt-for-hf-model-agent-89aef2838edb907c.yaml
Co-authored-by: Daria Fokina <daria.f93@gmail.com>
* Update haystack/nodes/prompt/invocation_layer/handlers.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update haystack/nodes/prompt/invocation_layer/handlers.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update haystack/nodes/prompt/invocation_layer/hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* add a unit test
* add a unit test2
* add skil prompt
* Revert "add skil prompt"
This reverts commit b1ba938c94b67a4fd636d321945990aabd2c5b2a.
* add unit test
---------
Co-authored-by: Daria Fokina <daria.f93@gmail.com>
Co-authored-by: bogdankostic <bogdankostic@web.de>
2023-08-02 16:34:33 +02:00
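The fix above is about the streamed output echoing the prompt. A minimal sketch of the underlying pattern, assuming only that transformers is installed; SkipPromptStreamer is an illustrative name, not Haystack's actual handler class:

```python
from transformers import TextStreamer


class SkipPromptStreamer(TextStreamer):
    """Streams newly generated tokens while suppressing the echoed prompt."""

    def __init__(self, tokenizer, **decode_kwargs):
        # skip_prompt=True tells TextStreamer not to emit the input prompt text
        super().__init__(tokenizer, skip_prompt=True, **decode_kwargs)

    def on_finalized_text(self, text: str, stream_end: bool = False):
        # Only generated text reaches this callback, so the streamed output
        # no longer starts with a copy of the prompt.
        print(text, flush=True, end="" if not stream_end else None)
```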
Fanli Lin
73fa796735
fix: enable passing max_length for text2text-generation task ( #5420 )
* bug fix
* add unit test
* reformatting
* add release note
* add release note
* Update releasenotes/notes/enable-set-max-length-during-runtime-097d65e537bf800b.yaml
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update test/prompt/invocation_layer/test_hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update test/prompt/invocation_layer/test_hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update test/prompt/invocation_layer/test_hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update test/prompt/invocation_layer/test_hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* bug fix
---------
Co-authored-by: bogdankostic <bogdankostic@web.de>
2023-08-02 14:13:30 +02:00
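A hedged sketch of the behaviour the fix targets, expressed with the plain transformers pipeline API (the model name is only an example):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

# The intent of the fix: a max_length supplied at invocation time should be
# honored instead of being overridden by the value fixed at initialization.
result = generator("Summarize: Haystack is an open-source LLM framework.", max_length=50)
print(result[0]["generated_text"])
```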
Fanli Lin
f7fd5eeb4f
feat: enable loading tokenizer for models that are not supported by the transformers library ( #5314 )
* add tokenizer load
* change import order
* move imports
* refactor code
* import lib
* remove pretrainedmodel
* fix linting
* update patch
* fix order
* remove tokenizer class
* use tokenizer class
* no copy
* add case for model is an instance
* fix optional
* add ut
* set default to None
* change models
* Update haystack/nodes/prompt/invocation_layer/hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* Update haystack/nodes/prompt/invocation_layer/hugging_face.py
Co-authored-by: bogdankostic <bogdankostic@web.de>
* add unit tests
* add unit tests
* remove lib
* formatting
* formatting
* formatting
* add release note
* Update releasenotes/notes/load-tokenizer-if-not-load-by-transformers-5841cdc9ff69bcc2.yaml
Co-authored-by: bogdankostic <bogdankostic@web.de>
---------
Co-authored-by: bogdankostic <bogdankostic@web.de>
2023-08-02 11:42:23 +02:00
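A rough sketch of the fallback idea behind this feature: when transformers cannot resolve a tokenizer automatically, load it explicitly and pass it to the pipeline. The model id and the use of trust_remote_code are illustrative assumptions:

```python
from transformers import AutoTokenizer, pipeline

model_name = "some-org/custom-remote-code-model"  # hypothetical model id

# Load the tokenizer explicitly, allowing custom code shipped with the model
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Pass the preloaded tokenizer so the pipeline does not try (and fail) to
# resolve it on its own.
pipe = pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer,
    trust_remote_code=True,
)
```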
Julian Risch
5bb0a1f57a
Revert "fix: num_return_sequences should be less than num_beams, not top_k ( #5280 )" ( #5434 )
This reverts commit 514f93a6eb575d376b21d22e32080fac62cf785f.
2023-07-25 13:27:41 +02:00
Fanli Lin
9891bfeddd
fix: a small bug in StopWordsCriteria ( #5316 )
2023-07-13 15:58:06 +02:00
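For context, a minimal sketch of a stop-words criterion in the transformers style; it shows the general pattern only, not the specific fix from #5316:

```python
import torch
from transformers import StoppingCriteria


class StopWordsCriteria(StoppingCriteria):
    def __init__(self, tokenizer, stop_words):
        super().__init__()
        # Encode stop words without special tokens so raw ids can be compared
        self.stop_ids = [tokenizer(word, add_special_tokens=False).input_ids for word in stop_words]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Stop as soon as the generated sequence ends with any stop word
        return any(input_ids[0, -len(ids):].tolist() == ids for ids in self.stop_ids)
```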
MichelBartels
fd350bbb8f
fix: Run HFLocalInvocationLayer.supports even if inference packages are not installed ( #5308 )
---------
Co-authored-by: Vladimir Blagojevic <dovlex@gmail.com>
2023-07-13 12:52:56 +02:00
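The general pattern behind this fix, sketched with a hypothetical function name: a capability check should degrade gracefully when an optional dependency is missing instead of failing at import time:

```python
def supports_local_hf_model(model_name_or_path: str) -> bool:
    """Return True only if a local Hugging Face model could actually be used."""
    try:
        # Import lazily so the check itself never fails on a missing package
        import transformers  # noqa: F401
    except ImportError:
        return False
    # ... further checks (task lookup, model type) would follow here ...
    return True
```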
Fanli Lin
514f93a6eb
fix: num_return_sequences should be less than num_beams, not top_k ( #5280 )
* formatting
* remove top_k variable
* add pytest
* add numbers
* string formatting
* fix formatting
* revert
* extend tests with assertions for num_return_sequences
---------
Co-authored-by: Julian Risch <julian.risch@deepset.ai>
2023-07-11 12:20:21 +02:00
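The constraint named in the title (note the change was reverted in #5434, listed above), sketched as plain generation kwargs: with beam search, transformers requires num_return_sequences to be at most num_beams.

```python
generation_kwargs = {"num_beams": 4, "num_return_sequences": 8}

# transformers raises a ValueError if num_return_sequences exceeds num_beams,
# so clamp it before calling generate() or the pipeline.
if generation_kwargs["num_return_sequences"] > generation_kwargs["num_beams"]:
    generation_kwargs["num_return_sequences"] = generation_kwargs["num_beams"]
```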
MichelBartels
08f1865ddd
fix: Improve robustness of get_task HF pipeline invocations ( #5284 )
* replace get_task method and change invocation layer order
* add test for invocation layer order
* add test documentation
* make invocation layer test more robust
* fix type annotation
* change hf timeout
* simplify timeout mock and add get_task exception cause
---------
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
2023-07-06 16:33:44 +02:00
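A hedged sketch of the hardening idea: wrap the Hub task lookup and chain the original error as the cause so failures are easier to diagnose. Exact exception types vary by transformers version, so treat them as assumptions:

```python
from transformers.pipelines import get_task


def get_task_with_cause(model_name: str) -> str:
    try:
        # Queries the Hugging Face Hub to infer the pipeline task for a model
        return get_task(model_name)
    except RuntimeError as e:
        raise ValueError(f"Could not determine the task for model '{model_name}'") from e
```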
ZanSara
65cdf36d72
chore: block all HTTP requests in CI ( #5088 )
2023-06-13 14:52:24 +02:00
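One common way to enforce such a block in a pytest suite, sketched as an autouse fixture in conftest.py; the actual mechanism used in #5088 may differ:

```python
import pytest


@pytest.fixture(autouse=True)
def block_outgoing_http(monkeypatch):
    def _blocked(*args, **kwargs):
        raise RuntimeError("Outgoing HTTP requests are blocked in CI tests")

    # Patching the low-level urlopen makes both requests and urllib3 callers fail fast
    monkeypatch.setattr("urllib3.connectionpool.HTTPConnectionPool.urlopen", _blocked)
```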
Vladimir Blagojevic
e3b069620b
feat: pass model parameters to HFLocalInvocationLayer via model_kwargs, enabling direct model usage ( #4956 )
* Simplify HFLocalInvocationLayer, move/add unit tests
* PR feedback
* Better pipeline invocation, add mocked tests
* Minor improvements
* Mock pipeline directly, unit test updates
* PR feedback, change pytest type to integration
* Mock supports unit test
* add full stop
* PR feedback, improve unit tests
* Add mock_get_task fixture
* Further improve unit tests
* Minor unit test improvement
* Add unit tests, increase coverage
* Add unit tests, increase test coverage
* Small optimization, improve _ensure_token_limit unit test
---------
Co-authored-by: Darja Fokina <daria.f93@gmail.com>
2023-06-07 13:34:45 +02:00
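A hedged usage sketch of the feature, assuming Haystack v1.x's PromptNode API; whether the exact model_kwargs keys shown here are accepted depends on the version, so treat them as assumptions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from haystack.nodes import PromptNode

model_name = "google/flan-t5-small"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# model_kwargs are forwarded to the HF invocation layer, so an already-loaded
# model and tokenizer could be used directly instead of being reloaded by name.
prompt_node = PromptNode(
    model_name_or_path=model_name,
    model_kwargs={"model": model, "tokenizer": tokenizer},
)
print(prompt_node("What is the capital of Germany?"))
```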