* Enable support for Meta Llama 2 models in Amazon SageMaker
* Improve unit tests for invocation layer positioning
* Small adjustment, add more unit tests
* mypy fixes
* Improve unit tests
* Update test/prompt/invocation_layer/test_sagemaker_meta.py
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
* PR feedback
* Add pydocs for newly extracted methods
* Simplify is_proper_chat_*
---------
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
Co-authored-by: anakin87 <stefanofiorucci@gmail.com>
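The `is_proper_chat_*` check for the SageMaker Llama 2 layer can be sketched roughly as below. This is an illustrative stand-in, not the exact Haystack implementation: the helper name and the precise rules (optional leading system message, then strictly alternating user/assistant turns ending on a user turn) are assumptions.

```python
from typing import Dict, List

def is_proper_chat_conversation(messages: List[Dict[str, str]]) -> bool:
    """Sketch of a Llama 2 chat-format check: every message is a
    role/content dict; an optional leading system message is followed by
    strictly alternating user/assistant turns, ending with a user turn."""
    if not messages or any(set(m) != {"role", "content"} for m in messages):
        return False
    if messages[0]["role"] == "system":
        messages = messages[1:]
    if not messages:
        return False
    expected = ["user", "assistant"]
    return (
        all(m["role"] == expected[i % 2] for i, m in enumerate(messages))
        and messages[-1]["role"] == "user"
    )

# Valid: system prompt, then user/assistant alternation ending on user.
print(is_proper_chat_conversation([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Tell me a joke"},
]))  # True
```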
* Update Claude support with the latest models, new streaming API, context window sizes
* Use GitHub Anthropic SDK link for tokenizer, revert _init_tokenizer
* Change example key name to ANTHROPIC_API_KEY
* Replace get_task method and change invocation layer order
* Add test for invocation layer order
* Add test documentation
* Make invocation layer test more robust
* Fix type annotation
* Change HF timeout
* Simplify timeout mock and add get_task exception cause
---------
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
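The invocation-layer ordering these commits test can be sketched as a first-match scan: each layer exposes a `supports()` classmethod, and the first layer in the registration order that claims the model wins, so specific layers must precede permissive fallbacks. The layer names and `supports()` rules below are illustrative placeholders, not Haystack's actual checks.

```python
from typing import List, Type

class InvocationLayer:
    @classmethod
    def supports(cls, model_name_or_path: str, **kwargs) -> bool:
        return False

class OpenAILayer(InvocationLayer):
    @classmethod
    def supports(cls, model_name_or_path: str, **kwargs) -> bool:
        # Illustrative rule: claim OpenAI-style model names.
        return model_name_or_path.startswith("gpt-")

class HFLocalLayer(InvocationLayer):
    @classmethod
    def supports(cls, model_name_or_path: str, **kwargs) -> bool:
        # Permissive fallback: assume any other name is a local HF model.
        return True

# Order matters: specific layers before the permissive fallback.
LAYERS: List[Type[InvocationLayer]] = [OpenAILayer, HFLocalLayer]

def select_layer(model_name_or_path: str) -> Type[InvocationLayer]:
    for layer in LAYERS:
        if layer.supports(model_name_or_path):
            return layer
    raise ValueError(f"No invocation layer supports {model_name_or_path!r}")

print(select_layer("gpt-3.5-turbo").__name__)      # OpenAILayer
print(select_layer("google/flan-t5-base").__name__)  # HFLocalLayer
```

Putting the fallback last is what the "invocation layer order" tests above pin down: reversing the list would route every model to the fallback.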
* Create SageMaker base class and two implementation subclasses
---------
Co-authored-by: Stefano Fiorucci <44616784+anakin87@users.noreply.github.com>
* Add support for prompt truncation with ChatGPT when direct prompting is used
* Update tests for test token limit for prompt node
* Update warning message to be correct
* Minor cleanup
* Mark back to integration
* Update count_openai_tokens_messages to reflect changes shown in tiktoken
* Use mocking to avoid request call
* Fix test to make it comply with unit test requirements
* Move tests to respective invocation layers
* Moved fixture to one spot
* Simplify HFLocalInvocationLayer, move/add unit tests
* PR feedback
* Better pipeline invocation, add mocked tests
* Minor improvements
* Mock pipeline directly, unit test updates
* PR feedback, change pytest type to integration
* Mock supports unit test
* Add full stop
* PR feedback, improve unit tests
* Add mock_get_task fixture
* Further improve unit tests
* Minor unit test improvement
* Add unit tests, increase coverage
* Add unit tests, increase test coverage
* Small optimization, improve _ensure_token_limit unit test
---------
Co-authored-by: Darja Fokina <daria.f93@gmail.com>
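The prompt-truncation and `_ensure_token_limit` behavior exercised above can be sketched as follows. This is a simplified model, with assumptions: real layers count tokens with a model tokenizer (tiktoken for OpenAI, a Hugging Face tokenizer locally) rather than the whitespace split used here, and the function and parameter names are hypothetical.

```python
import warnings

def ensure_token_limit(prompt: str, max_length: int, model_max_length: int,
                       tokenize=str.split, detokenize=" ".join) -> str:
    """Sketch of prompt truncation: if the prompt plus the requested answer
    length would exceed the model's context window, cut the prompt down and
    warn. `tokenize`/`detokenize` stand in for a real model tokenizer."""
    tokens = tokenize(prompt)
    budget = model_max_length - max_length  # room left for the prompt
    if len(tokens) <= budget:
        return prompt
    warnings.warn(
        f"The prompt has been truncated from {len(tokens)} tokens to "
        f"{budget} tokens so that the prompt length and the answer length "
        f"({max_length} tokens) fit within the model's context window "
        f"({model_max_length} tokens)."
    )
    return detokenize(tokens[:budget])

print(ensure_token_limit("a b c", max_length=10, model_max_length=100))  # unchanged
long = ensure_token_limit("w " * 200, max_length=50, model_max_length=100)
print(len(long.split()))  # 50
```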
* HFInferenceEndpointInvocationLayer streaming support
* Small fixes
* Add unit test
* PR feedback
* Alphabetically sort params
* Convert PromptNode tests to HFInferenceEndpointInvocationLayer invoke tests
* Rewrite streaming with sseclient
* More PR updates
* Implement and test _ensure_token_limit
* Further optimize DefaultPromptHandler
* Fix CohereInvocationLayer mistypes
* PR feedback
* Break up unit tests, simplify
* Simplify unit tests even further
* PR feedback on unit test simplification
* Proper code indentation under patch context manager
* More unit tests, slight adjustments
* Remove unrelated CohereInvocationLayer change
This reverts commit 82337151e8328d982f738e5da9129ff99350ea0c.
* Revert "Further optimize DefaultPromptHandler"
This reverts commit 606a761b6e3333f27df51a304cfbd1906c806e05.
* Language update
Mostly adding full stops at the end of docstrings
---------
Co-authored-by: Silvano Cerza <3314350+silvanocerza@users.noreply.github.com>
Co-authored-by: Silvano Cerza <silvanocerza@gmail.com>
Co-authored-by: Darja Fokina <daria.f93@gmail.com>
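The "rewrite streaming with sseclient" commit above consumes a server-sent-events response and forwards each `data:` payload to a streaming handler as it arrives. The sketch below is a simplified pure-Python stand-in for what sseclient-py does, to show the event framing; the actual layer uses `sseclient.SSEClient` over a streamed HTTP response, and this parser deliberately ignores `event:`/`id:` fields and retry handling.

```python
from typing import Iterable, Iterator

def iter_sse_data(lines: Iterable[bytes]) -> Iterator[str]:
    """Yield the `data:` payload of each server-sent event.
    Per the SSE framing rules, `data:` lines accumulate and a blank
    line terminates the event."""
    data_parts = []
    for raw in lines:
        line = raw.decode("utf-8").rstrip("\n")
        if line.startswith("data:"):
            data_parts.append(line[len("data:"):].lstrip(" "))
        elif line == "" and data_parts:
            yield "\n".join(data_parts)
            data_parts = []

# Two events arriving over the wire, e.g. two generated text chunks.
stream = [b"data: Hel\n", b"\n", b"data: lo\n", b"\n"]
tokens = list(iter_sse_data(stream))
print(tokens)  # ['Hel', 'lo']
```

A streaming handler would print or buffer each yielded chunk immediately instead of collecting them into a list.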