* Add OpenAIError to the retry mechanism. Use an env variable for the timeout of OpenAI requests in PromptNode (sketch below).
* Updated retry in OpenAI embedding encoder as well.
* Empty commit
* Add IVF and Product Quantization support for OpenSearchDocumentStore (sketch below)
* Remove unused import statement
* Fix mypy
* Adapt doc strings and error messages to account for PQ
* Adapt validation of indices
* Adapt existing tests
* Fix pylint
* Add tests
* Update lg
* Adapt based on PR review comments
* Fix Pylint
* Adapt based on PR review
* Add request_timeout
* Adapt based on PR review
* Adapt based on PR review
* Adapt tests
* Pin tenacity
* Unpin tenacity
* Adapt based on PR comments
* Add match to tests
---------
Co-authored-by: agnieszka-m <amarzec13@gmail.com>
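A minimal sketch of the retry-plus-timeout pattern described above, using tenacity directly. The `OpenAIError` class here is a local stand-in for the library's error type, and `HAYSTACK_REMOTE_API_TIMEOUT_SEC` is assumed to be the environment variable controlling the request timeout.

```python
# Sketch only: OpenAIError is a stand-in for the library's error class, and the
# env-variable name HAYSTACK_REMOTE_API_TIMEOUT_SEC is an assumption.
import os

import requests
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


class OpenAIError(Exception):
    """Placeholder for the OpenAI-related error raised by the client code."""


# Read the timeout from an environment variable, with a fallback default.
OPENAI_TIMEOUT = float(os.environ.get("HAYSTACK_REMOTE_API_TIMEOUT_SEC", 30))


@retry(
    retry=retry_if_exception_type(OpenAIError),  # retry only on OpenAI errors
    wait=wait_exponential(multiplier=1, min=1, max=10),
    stop=stop_after_attempt(5),
)
def openai_request(url: str, headers: dict, payload: dict) -> dict:
    response = requests.post(url, headers=headers, json=payload, timeout=OPENAI_TIMEOUT)
    if response.status_code != 200:
        raise OpenAIError(f"OpenAI returned {response.status_code}: {response.text}")
    return response.json()
```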
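For the IVF and Product Quantization support, a rough usage sketch. The `knn_engine` and `index_type` values shown ("faiss", "ivf_pq") are assumptions about the new options; check the OpenSearchDocumentStore docstrings for the exact names.

```python
# Sketch only: the index_type values ("ivf", "ivf_pq") and the faiss requirement
# are assumptions about the new options, not confirmed API.
from haystack.document_stores import OpenSearchDocumentStore

document_store = OpenSearchDocumentStore(
    host="localhost",
    port=9200,
    index="document",
    embedding_dim=768,
    knn_engine="faiss",    # IVF and PQ come from the faiss engine in OpenSearch k-NN
    index_type="ivf_pq",   # assumed value; plain IVF would be index_type="ivf"
)
# IVF-based indices need to be trained on a sample of embeddings before they can
# serve queries, which is why the validation of indices was adapted in this change.
```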
* removing old dataset telemetry events
* changing function name
* adding the datasets back for old tutorials
* fixing mini bug
* resolving comments
* quick bug fix
* re-adding docstrings
* removing unnecessary import
* re-adding the telemetry event call for datasets
---------
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
* add e2e tests (sketch below)
* move tests to their own module
* add e2e workflow
* pylint
* remove from job
* fix index field name
* skip test on sql
* removed unused code
* fix embedding tests
* adjust test for pinecone
* adjust assertions to the new documents
* fix bad copy-paste
* test
* fix tests
* fix tests
* fix test
* fix tests
* pylint
* update milvus version
* remove debug
* move graphdb tests under e2e
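Most of the commits above are CI plumbing, but the shape of a test that lives in the new e2e module might look roughly like this. The marker and the choice of InMemoryDocumentStore are illustrative only; the real suite targets external document stores (OpenSearch, Milvus, Pinecone, SQL, ...).

```python
# Minimal example of the kind of document-store test moved into the e2e module.
import pytest
from haystack.document_stores import InMemoryDocumentStore
from haystack.schema import Document


@pytest.mark.integration  # illustrative marker
def test_write_documents_and_count():
    document_store = InMemoryDocumentStore()
    document_store.write_documents([Document(content="Berlin is the capital of Germany.")])
    assert document_store.get_document_count() == 1
```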
* Added instruction_prompt and updated defaults
* Change back max_tokens
* Code formatting
* Starting to update instruction_prompt to be a PromptTemplate
* Using PromptTemplate in OpenAIAnswerGenerator (sketch below)
* Removed hardcoded value
* Pylint fixes; make examples and examples_context optional prompt parameters
* Added new test for when prompt length goes past max token limit
* Improve doc strings.
* Make "text-davinci-003" the new default model
* Renaming variable to prompt_template and name to question-answering-with-examples
* Reduced repetitive code.
* Added some comments to explain key logic for future debuggers
* Update docs for max_tokens and increase default
* Updating variable name to prompt_template and docs.
* Updated test and handled Answer case where no documents are used.
* Slight update to docs.
* Adding more doc strings
* lg updates
* Blackify
---------
Co-authored-by: Malte Pietsch <malte.pietsch@deepset.ai>
Co-authored-by: agnieszka-m <amarzec13@gmail.com>
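A usage sketch for the reworked generator. The keyword names (`prompt_template`, `model`, `max_tokens`) follow the commits above, but the template text and placeholder syntax are assumptions; check the release docs for the exact default template.

```python
# Sketch only: illustrates the renamed prompt_template argument and the new
# default model; the template text below is illustrative, not the shipped default.
from haystack.nodes import OpenAIAnswerGenerator, PromptTemplate

qa_template = PromptTemplate(
    name="question-answering-with-examples",  # name introduced in this change
    prompt_text="Please answer the question according to the context.\n"
                "===\nContext: $context\n===\n$query",  # placeholder syntax varies by release
)

generator = OpenAIAnswerGenerator(
    api_key="OPENAI_API_KEY",
    model="text-davinci-003",      # new default model
    max_tokens=50,
    prompt_template=qa_template,   # renamed from instruction_prompt
    # examples and examples_context are now optional
)
```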
* Deduplicate identical Documents in one MultiLabel (sketch below)
* Add tests
* Update label
* Update label
* Update test
* Update test
* Revert change to check CI
* Revert reversion
* Use deepcopy
* Update tests
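A short sketch of the intended behaviour: two Labels that reference the same Document should yield a single entry in the MultiLabel's document list. The attribute names (`answers`, `document_ids`) are assumptions about the public schema.

```python
# Sketch of the deduplication behaviour: same Document referenced by two Labels.
from haystack.schema import Answer, Document, Label, MultiLabel

doc = Document(content="Berlin is the capital of Germany.")

labels = [
    Label(
        query="What is the capital of Germany?",
        document=doc,                      # both labels point at the same Document
        answer=Answer(answer=answer_text),
        is_correct_answer=True,
        is_correct_document=True,
        origin="gold-label",
    )
    for answer_text in ("Berlin", "Berlin, Germany")
]

multi_label = MultiLabel(labels=labels)
assert len(multi_label.answers) == 2        # both answers are kept
assert len(multi_label.document_ids) == 1   # the shared Document is listed only once
```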
* Add workflow to label PRs that edit docstrings
* Add python-version arg in setup-python steps
* Run workflow only when haystack and rest_api Python files are edited
* Fix labeling job
* Fix labeling conditional
* Fix files globbing in docstrings_checksum.py (sketch below)
* Fix typing
* Rework workflow to use a single job
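A hypothetical reconstruction of what docstrings_checksum.py does, based only on the commit messages: glob the haystack and rest_api Python files, collect their docstrings with `ast`, and hash them so the workflow can tell whether a PR touched any docstring.

```python
# Hypothetical reconstruction of a docstrings checksum script, not the repo's exact code.
import ast
import hashlib
from pathlib import Path


def iter_docstrings(root: Path):
    for path in sorted(root.glob("**/*.py")):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, (ast.Module, ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                docstring = ast.get_docstring(node)
                if docstring:
                    yield docstring


def docstrings_checksum(*roots: Path) -> str:
    md5 = hashlib.md5()
    for root in roots:
        for docstring in iter_docstrings(root):
            md5.update(docstring.encode("utf-8"))
    return md5.hexdigest()


if __name__ == "__main__":
    print(docstrings_checksum(Path("haystack"), Path("rest_api")))
```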
* fix: update kwargs for TriAdaptiveModel
* fix: squeeze batch for TTR inference (sketch below)
* test: add test for ttr + dataframe case
* test: update and reorganise ttr tests
* refactor: make triadaptive model handle shapes
* refactor: remove duplicate reshaping
* refactor: rename test with duplicate name
* fix: add device assignment back to TTR
* fix: remove duplicated vars in test
---------
Co-authored-by: bogdankostic <bogdankostic@web.de>
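The shape handling lives inside TriAdaptiveModel, so this is only a generic sketch of the "squeeze the extra batch dimension" pattern the commits describe, not the model's actual code.

```python
# Generic sketch: tables can arrive as [1, batch_size, seq_len] during inference,
# so drop the leading dimension before feeding the encoder.
import torch


def normalize_input_ids(input_ids: torch.Tensor) -> torch.Tensor:
    # Expected shape: [batch_size, seq_len]; some callers pass [1, batch_size, seq_len].
    if input_ids.dim() == 3 and input_ids.size(0) == 1:
        input_ids = input_ids.squeeze(0)
    return input_ids


batch = torch.ones(1, 4, 128, dtype=torch.long)
assert normalize_input_ids(batch).shape == (4, 128)
```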
* Removed double batching around embed_queries (sketch below)
* Add back tests for retrieve_batch for dpr and embedding retrievers
* Updated table-text-retriever to not double batch
* Fixing pylint
* Update to test
* Remove code breaking test
* Updating dev comment to be clearer
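A simplified before/after sketch of the double-batching issue (made-up function names, not the retriever's literal code): when embed_queries already batches internally, slicing the queries again in retrieve_batch only adds overhead.

```python
# Simplified illustration of the fix, with made-up function names.
from typing import Callable, List


def retrieve_batch_before(queries: List[str], embed_queries: Callable[[List[str]], list], batch_size: int) -> list:
    # Double batching: embed_queries already batches internally, so slicing here adds overhead.
    embeddings: list = []
    for i in range(0, len(queries), batch_size):
        embeddings.extend(embed_queries(queries[i : i + batch_size]))
    return embeddings


def retrieve_batch_after(queries: List[str], embed_queries: Callable[[List[str]], list]) -> list:
    # Pass all queries through once; embed_queries applies its own batch_size.
    return embed_queries(queries)
```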
* Update allowed models to be used with PromptNode
* Added a try/except block around the config check to skip over OpenAI models (sketch below).
* Fixing tests
* Adding warning message
* Adding test for different HF models that could be used in PromptNode
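A sketch of the described check, with an illustrative allow-list; the real PromptNode logic may differ, but the idea is that model names without a Hugging Face config (for example OpenAI models) are skipped with a warning rather than rejected.

```python
# Sketch of the described validation, not PromptNode's exact code: OpenAI model
# names (e.g. "text-davinci-003") have no Hugging Face config, so the config
# lookup is wrapped in try/except instead of failing outright.
import logging

from transformers import AutoConfig

logger = logging.getLogger(__name__)

SUPPORTED_PREFIXES = ("T5", "GPT2")  # illustrative allow-list, not the real one


def is_supported_model(model_name_or_path: str) -> bool:
    try:
        config = AutoConfig.from_pretrained(model_name_or_path)
    except (OSError, ValueError):
        # No HF config found -- likely an OpenAI model (or a typo); warn and let
        # later layers decide instead of failing here.
        logger.warning("Could not load a config for %s, skipping the architecture check.", model_name_or_path)
        return True
    architectures = config.architectures or []
    return any(arch.startswith(prefix) for arch in architectures for prefix in SUPPORTED_PREFIXES)
```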