24 Commits

Vladimir Blagojevic
348e7d2dfe
refactor: Separate PromptModelInvocationLayers in providers.py (#4327)
* Refactor PromptNode, separate PromptModelInvocationLayers in providers.py
2023-03-06 16:34:59 +01:00
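
The refactor above splits the provider-specific invocation layers out of the PromptNode module. A hedged sketch of the general pattern (class and method names here are illustrative, not the exact Haystack classes): each layer wraps one kind of model and declares which model names it supports, so the prompt model can pick the right provider.

```python
# Hedged sketch of the provider / invocation-layer split; names are illustrative.
from abc import ABC, abstractmethod
from typing import List


class InvocationLayer(ABC):
    """One layer per model provider: it knows how to call that provider's models."""

    def __init__(self, model_name_or_path: str):
        self.model_name_or_path = model_name_or_path

    @abstractmethod
    def invoke(self, prompt: str, **kwargs) -> List[str]:
        """Send the prompt to the underlying model and return its completions."""

    @classmethod
    @abstractmethod
    def supports(cls, model_name_or_path: str) -> bool:
        """Return True if this layer can serve the given model name."""


class EchoLayer(InvocationLayer):
    """Toy provider, only here to show how a concrete layer plugs in."""

    def invoke(self, prompt: str, **kwargs) -> List[str]:
        return [f"echo: {prompt}"]

    @classmethod
    def supports(cls, model_name_or_path: str) -> bool:
        return model_name_or_path == "echo"


# A dispatcher would scan the registered providers for the first match.
providers = [EchoLayer]
layer_cls = next(layer for layer in providers if layer.supports("echo"))
print(layer_cls("echo").invoke("Hello"))
```
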
Sebastian
1a42166978
fix: Prevent going past token limit in OpenAI calls in PromptNode (#4179)
* Refactoring to remove duplicate code when using OpenAI API

* Adding docstrings

* Fix mypy issue

* Moved retry mechanism to openai_request function in openai_utils

* Migrate OpenAI embedding encoder to use the openai_request util function.

* Adding docstrings.

* pylint import errors

* More pylint import errors

* Move construction of headers into openai_request and pass api_key as an input variable.

* Made _openai_text_completion_tokenization_details so it can be reused in PromptNode and OpenAIAnswerGenerator

* Add prompt truncation to the PromptNode.

* Removed commented out test.

* Bump version of tiktoken to 0.2.0 so we can use MODEL_TO_ENCODING to automatically determine the correct tokenizer for the requested model

* Change one method back to public

* Fixed bug in token length truncation. Included answer length into truncation amount. Moved truncation higher up to PromptNode level.

* Pylint error

* Improved warning message

* Added _ensure_token_limit for HFLocalInvocationLayer. Had to remove max_length from base PromptModelInvocationLayer to ensure that max_length has a default value.

* Adding tests

* Expanded on doc strings

* Updated tests

* Update docstrings

* Update tests, and go back to how USE_TIKTOKEN was used before.

* Update haystack/nodes/prompt/prompt_node.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Update haystack/nodes/prompt/prompt_node.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Update haystack/nodes/prompt/prompt_node.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Update haystack/nodes/retriever/_openai_encoder.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Update haystack/utils/openai_utils.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Update haystack/utils/openai_utils.py

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>

* Updated docstrings, and added integration marks

* Remove comment

* Update test

* Fix test

* Update test

* Updated openai_request function to work with the azure api

* Fixed error in _openai_encoder.py

---------

Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>
Co-authored-by: Vladimir Blagojevic <dovlex@gmail.com>
2023-03-03 13:49:21 +01:00
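
The truncation added in #4179 boils down to: count the prompt's tokens with tiktoken, reserve room for the requested answer length, and cut the prompt if the total would exceed the model's context window. A minimal sketch, assuming an illustrative helper name and hard-coded limits (the real code looks the limit up per model):

```python
# Illustrative sketch only: the helper name and the hard-coded limits are assumptions.
import tiktoken

MAX_CONTEXT_TOKENS = 4097  # e.g. text-davinci-003; looked up per model in practice
ANSWER_TOKENS = 100        # tokens reserved for the generated answer


def ensure_token_limit(prompt: str, model: str = "text-davinci-003") -> str:
    """Truncate the prompt so prompt tokens plus the reserved answer fit the context window."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(prompt)
    budget = MAX_CONTEXT_TOKENS - ANSWER_TOKENS
    if len(tokens) <= budget:
        return prompt
    print(f"Prompt is {len(tokens)} tokens but only {budget} fit; truncating the prompt.")
    return encoding.decode(tokens[:budget])


print(len(ensure_token_limit("context " * 5000)))
```
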
Vladimir Blagojevic
79bf25aaea
feat: Add Azure as OpenAI endpoint (#4170)
* Add Azure as OpenAI endpoint
---------

Co-authored-by: Sebastian Lee <sebastian.lee@deepset.ai>
2023-03-02 09:55:09 +01:00
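
Supporting Azure mostly means building a different URL and auth header for the same request body. A sketch of that difference, assuming an illustrative helper rather than Haystack's actual openai_request utility:

```python
# Illustrative helper, not Haystack's openai_request: only the URL and headers differ.
from typing import Dict, Optional, Tuple


def build_completion_request(
    api_key: str,
    azure_base_url: Optional[str] = None,
    azure_deployment: Optional[str] = None,
    api_version: str = "2022-12-01",
) -> Tuple[str, Dict[str, str]]:
    if azure_base_url and azure_deployment:
        # Azure-hosted OpenAI: deployment-scoped URL and an "api-key" header.
        url = (
            f"{azure_base_url}/openai/deployments/{azure_deployment}"
            f"/completions?api-version={api_version}"
        )
        headers = {"api-key": api_key, "Content-Type": "application/json"}
    else:
        # Standard OpenAI endpoint: bearer-token auth.
        url = "https://api.openai.com/v1/completions"
        headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    return url, headers


print(build_completion_request("KEY", "https://my-resource.openai.azure.com", "my-deployment")[0])
```
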
Massimiliano Pippi
c3a38a59c0
Update test_prompt_node.py (#4281) 2023-02-28 09:37:40 +01:00
Massimiliano Pippi
4b8d195288
refact: mark unit tests under the test/nodes/** path (#4235)
* document merger

* mark unit tests

* revert
2023-02-27 15:00:19 +01:00
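
The marker convention referenced above lets unit tests be selected in isolation with `pytest -m unit`. An illustrative example of such a marked test (the test body is made up; the marker is registered in the project's pytest config):

```python
import pytest


@pytest.mark.unit
def test_prompt_renders_query():
    prompt = "Answer the question: {query}".format(query="What is Haystack?")
    assert "What is Haystack?" in prompt
```
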
Sebastian
efe46b1214
Fix: Allow torch_dtype="auto" in PromptNode (#4166)
* Fix for allowing torch_dtype="auto"

* Fix to logic of torch_dtype detection

* separate test for dtype
2023-02-27 09:59:27 +01:00
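
The fix above lets torch_dtype arrive as a real torch.dtype, the string "auto", or a string such as "torch.bfloat16". A minimal sketch of that resolution logic, assuming an illustrative function name rather than the exact Haystack implementation:

```python
# Illustrative resolution logic, not the exact Haystack implementation.
import torch


def resolve_torch_dtype(torch_dtype):
    if torch_dtype is None or isinstance(torch_dtype, torch.dtype):
        return torch_dtype
    if isinstance(torch_dtype, str):
        if torch_dtype == "auto":
            # Pass "auto" straight through; transformers then infers the dtype
            # from the model weights/config.
            return "auto"
        return getattr(torch, torch_dtype.removeprefix("torch."))
    raise ValueError(f"Invalid torch_dtype value: {torch_dtype!r}")


print(resolve_torch_dtype("auto"))           # -> "auto"
print(resolve_torch_dtype("torch.float16"))  # -> torch.float16
```
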
Massimiliano Pippi
262c9771f4
relax test assertion (#4229) 2023-02-22 12:37:09 +01:00
tstadel
14578aa54f
feat: add top_k to PromptNode (#4159)
* add top_k to PromptNode

* fix OpenAI

* fix openai test
2023-02-20 14:51:45 +01:00
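
top_k on PromptNode controls how many completions come back, which each backend expresses differently. A hedged sketch of that mapping (the function name and exact kwargs are illustrative):

```python
# Illustrative mapping; kwarg names are those of the respective client libraries.
def generation_kwargs(top_k: int, backend: str) -> dict:
    if backend == "openai":
        # The OpenAI completion API returns "n" completions per prompt.
        return {"n": top_k}
    if backend == "huggingface":
        # transformers' generate() needs at least as many beams as returned sequences.
        return {"num_return_sequences": top_k, "num_beams": top_k}
    raise ValueError(f"Unknown backend: {backend!r}")


print(generation_kwargs(3, "openai"))
print(generation_kwargs(3, "huggingface"))
```
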
Sebastian
d129598203
Prompt node/run batch (#4072)
* Starting to implement first pass at run_batch

* Started to add _flatten_input function

* First pass at run_batch method.

* Fixed bug

* Adding tests for run_batch

* Update doc strings

* Pylint and mypy

* Pylint

* Fixing mypy

* Restructuring of run_batch tests

* Add minor language updates

* Adding more tests

* Update dev comments and call static method differently

* Fixed the setting of output variable

* Set output_variable in __init__ of PromptNode

* Make a one-liner

---------

Co-authored-by: agnieszka-m <amarzec13@gmail.com>
2023-02-20 11:58:13 +01:00
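
run_batch needs to accept a single prompt, a flat list, or a nested list and normalize it before invoking the model. A rough sketch of that flattening step, mirroring the commit's _flatten_input in name only:

```python
# Rough sketch; the helper mirrors the commit's _flatten_input in name only.
from typing import List, Union

NestedPrompts = Union[str, List["NestedPrompts"]]


def flatten_input(prompts: NestedPrompts) -> List[str]:
    """Accept a single prompt or a (possibly nested) list and return a flat list."""
    if isinstance(prompts, str):
        return [prompts]
    flat: List[str] = []
    for item in prompts:
        flat.extend(flatten_input(item))
    return flat


print(flatten_input("What is Haystack?"))
print(flatten_input([["q1", "q2"], ["q3"]]))
```
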
Sebastian
9a26942952
feat: Add model_kwargs option to PromptNode (#4151)
* Add input option to PromptNode to allow the passing of default kwargs

* Add yaml test for model_kwargs parameter
2023-02-15 18:46:26 +01:00
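
A hedged usage sketch of the new option; the model name and the kwargs shown are illustrative, and which keys are honored depends on the invocation layer and the Haystack release:

```python
# Hedged usage sketch; model name and kwargs are illustrative.
from haystack.nodes import PromptNode

prompt_node = PromptNode(
    model_name_or_path="google/flan-t5-base",
    model_kwargs={"temperature": 0.5, "max_length": 256},  # forwarded to the underlying model
)
print(prompt_node("What is the capital of Germany?"))
```
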
Vladimir Blagojevic
d839b9314f
Update PromptTemplate tests (#4131) 2023-02-10 15:24:01 +01:00
Sebastian
01d39df863
feat: Update allowed models to be used with Prompt Node (#4018)
* Update allowed models to be used with Prompt Node

* Added a try/except block around the config loading to skip over OpenAI models.

* Fixing tests

* Adding warning message

* Adding test for different HF models that could be used in prompt node
2023-02-08 12:47:52 +01:00
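
The check described above loads the model's Hugging Face config to see whether the architecture is one the node can run locally, and skips the check with a warning for names (such as OpenAI models) that have no config. A sketch under those assumptions; the supported-architecture hints and function name are illustrative:

```python
# Sketch under stated assumptions; the architecture hints are illustrative.
import logging

from transformers import AutoConfig

logger = logging.getLogger(__name__)

SUPPORTED_ARCHITECTURE_HINTS = ("ForConditionalGeneration", "CausalLM", "LMHeadModel")


def is_supported_model(model_name_or_path: str) -> bool:
    try:
        config = AutoConfig.from_pretrained(model_name_or_path)
    except Exception:
        # OpenAI model names such as "text-davinci-003" have no Hugging Face config.
        logger.warning("No config found for %s; skipping the architecture check.", model_name_or_path)
        return True
    architectures = config.architectures or []
    return any(hint in arch for arch in architectures for hint in SUPPORTED_ARCHITECTURE_HINTS)


print(is_supported_model("google/flan-t5-small"))
```
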
Stefano Fiorucci
5c009c2a1a
feat: OpenAI - warn users if max_tokens is too short (#4094)
* warn users if max_tokens is too short

* skip test if no API key is set

* add counters

* correctly run precommit
2023-02-08 10:39:40 +01:00
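
The warning can be derived from the API response itself: completions that stopped with finish_reason "length" ran into max_tokens. A minimal sketch, assuming an illustrative helper name and the OpenAI completion response schema:

```python
import logging

logger = logging.getLogger(__name__)


def warn_on_truncated_answers(response: dict, max_tokens: int) -> None:
    # Count completions the API cut off because the token budget ran out.
    truncated = sum(1 for choice in response.get("choices", []) if choice.get("finish_reason") == "length")
    if truncated:
        logger.warning(
            "%s out of %s answers were cut off because max_tokens (%s) was reached. "
            "Increase max_tokens to get complete answers.",
            truncated, len(response["choices"]), max_tokens,
        )


warn_on_truncated_answers({"choices": [{"finish_reason": "length"}, {"finish_reason": "stop"}]}, max_tokens=16)
```
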
Vladimir Blagojevic
3273a2714d
fix: Add PromptTemplate __repr__ method (#4058)
Co-authored-by: ZanSara <sarazanzo94@gmail.com>
2023-02-07 08:14:32 +01:00
Zoltan Fedor
2b1849f525
fix: Add a verbose option to PromptNode to let users understand the prompts being used #2 (#3898)
* fix: Add a verbose option to PromptNode to let users understand the prompts being used #2

* Add comments and refactoring todo note

* Fix logging-fstring-interpolation pylint

* Update haystack/nodes/prompt/prompt_node.py

Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>

---------

Co-authored-by: Vladimir Blagojevic <dovlex@gmail.com>
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
2023-01-31 09:33:47 +01:00
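
The idea of the verbose option is simply to surface the fully rendered prompt before it is sent to the model. An illustrative-only sketch, not the actual PromptNode parameter handling:

```python
def render_prompt(template: str, verbose: bool = False, **kwargs) -> str:
    prompt = template.format(**kwargs)
    if verbose:
        # Show users exactly what will be sent to the model.
        print(f"Prompt being sent to the model:\n{prompt}")
    return prompt


render_prompt("Answer the question: {query}", verbose=True, query="What is Haystack?")
```
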
Vladimir Blagojevic
ec85207cf7
Remove __eq__ and __hash__ from PromptNode (#3923) 2023-01-26 13:38:35 +01:00
Vladimir Blagojevic
b945eaeabd
PromptNode: expose output_variable, adjust unit tests (#3892) 2023-01-26 11:01:11 +01:00
ZanSara
0e471d5e5a
fix: change model in distillation test (#3944)
* change model

* change layer count

* move promptnode tests in integration

* fix marker
2023-01-25 23:32:11 +05:30
Vladimir Blagojevic
4d8b1d0b22
refactor: Improve stop_words handling, add unit test cases (#3918)
* Improve stop_words handling, add unit test cases

* Update test/nodes/test_prompt_node.py

Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>

Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
2023-01-24 12:52:41 +01:00
Vladimir Blagojevic
4c28253955
feat: PromptNode - implement stop words (#3884) 2023-01-19 12:26:15 +01:00
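
One half of stop-word support is passing the words to backends that understand them (for example OpenAI's stop parameter); the other half is trimming locally generated text at the first stop word. A minimal sketch of the trimming part, with an illustrative function name:

```python
from typing import List


def trim_at_stop_words(text: str, stop_words: List[str]) -> str:
    # Cut the generated text at the earliest occurrence of any stop word.
    cut = len(text)
    for stop_word in stop_words:
        index = text.find(stop_word)
        if index != -1:
            cut = min(cut, index)
    return text[:cut].rstrip()


print(trim_at_stop_words("Berlin is the capital of Germany. Question: what else?", ["Question:"]))
```
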
Vladimir Blagojevic
e2fb82b148
refactor: Move invocation_context from meta to own pipeline variable (#3888) 2023-01-19 11:17:06 +01:00
Zoltan Fedor
0288e1be76
bug: The PromptNode handles all parameters as lists without checking if they are in fact lists (#3820) 2023-01-10 08:08:17 +01:00
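
The fix's core idea: only iterate over a parameter when it really is a list, otherwise wrap the single value. A one-function sketch (the name is illustrative):

```python
from typing import Any, List


def as_list(value: Any) -> List[Any]:
    """Wrap a single value in a list; leave real lists untouched."""
    return value if isinstance(value, list) else [value]


print(as_list("single query"))  # -> ["single query"]
print(as_list(["q1", "q2"]))    # -> ["q1", "q2"]
```
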
Vladimir Blagojevic
bebd6b26ec
Improve robustness of PromptNode unit tests (#3747) 2023-01-02 16:28:56 +01:00
Vladimir Blagojevic
9ebf164cfd
feat: Expand LLM support with PromptModel, PromptNode, and PromptTemplate (#3667)
Co-authored-by: ZanSara <sarazanzo94@gmail.com>
2022-12-20 11:21:26 +01:00
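
A hedged usage sketch of the three abstractions this commit introduces: a PromptTemplate describing the task, a PromptModel wrapping the LLM, and a PromptNode that renders templates and invokes the model. Parameter names and the $-style template syntax follow the early 1.x API of this era and changed in later releases; the model and template text are illustrative.

```python
# Hedged sketch; early 1.x API, $-style template variables, illustrative model/template.
from haystack.nodes import PromptModel, PromptNode, PromptTemplate

qa_template = PromptTemplate(
    name="simple-question-answering",
    prompt_text="Answer the question. Question: $query; Answer:",
)

model = PromptModel(model_name_or_path="google/flan-t5-base")  # wraps the underlying LLM
prompt_node = PromptNode(model)                                # runs templates against it

print(prompt_node.prompt(prompt_template=qa_template, query="What is the capital of France?"))
```
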