Remove stray requirements.txt files and update README.md (#2075)

* Remove stray requirements.txt files and update README.md

* Remove requirement files

* Add details about pip bug and link to setup.cfg

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Sara Zan 2022-01-27 11:22:14 +01:00 committed by GitHub
parent 488c3e9e52
commit 9af1292cda
4 changed files with 20 additions and 95 deletions


@@ -93,6 +93,26 @@ You can also clone it from GitHub — in case you'd like to work with the master
To update your installation, do a ``git pull``. Thanks to the ``--editable`` flag, changes are picked up immediately.
Note that this command will install the **base** version of the package, which includes only the
Elasticsearch document store and the most commonly used components.
For a complete installation that includes all optional components, please run instead:
```
git clone https://github.com/deepset-ai/haystack.git
cd haystack
pip install --upgrade pip
pip install --editable .[all] # or 'all-gpu' to get the GPU-enabled dependencies
```
Do not forget to upgrade pip before performing the installation: pip versions below 21.3.1 might
enter an infinite loop due to a bug. If you run into such a loop, either upgrade pip or replace
`[all]` with `[docstores,crawler,preprocessing,ocr,ray,rest,ui,dev,onnx]`.
For a complete list of the available dependency groups, have a look at the
[setup.cfg file](https://github.com/deepset-ai/haystack/blob/488c3e9e52b9286afc3ad9a5f2e3161772be2e2f/setup.cfg#L103).
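For example, assuming the dependency groups listed above are still current, spelling them out explicitly looks like this:
```
pip install --editable .[docstores,crawler,preprocessing,ocr,ray,rest,ui,dev,onnx]
```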
**3. Installing on Windows**
On Windows, you might need:


@@ -170,27 +170,6 @@ These are used to condition the generator as it generates the answer.
It should then return novel text spans that form an answer to your question!
```python
# Now generate an answer for each question
for question in QUESTIONS:
    # Retrieve related documents from the retriever
    retriever_results = retriever.retrieve(
        query=question
    )

    # Now generate an answer from the question and the retrieved documents
    predicted_result = generator.predict(
        query=question,
        documents=retriever_results,
        top_k=1
    )

    # Print your answer
    answers = predicted_result["answers"]
    print(f'Generated answer is \'{answers[0].answer}\' for the question = \'{question}\'')
```
```python
# Or alternatively use the Pipeline class
from haystack.pipelines import GenerativeQAPipeline
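
# Not part of the original snippet: a minimal sketch of how the pipeline
# alternative might continue, assuming the Haystack 1.x GenerativeQAPipeline API
# (generator/retriever keyword arguments, "Retriever"/"Generator" node names).
pipe = GenerativeQAPipeline(generator=generator, retriever=retriever)

for question in QUESTIONS:
    # The pipeline runs retrieval and generation in a single call
    result = pipe.run(query=question, params={"Retriever": {"top_k": 5}, "Generator": {"top_k": 1}})
    print(f'Generated answer is \'{result["answers"][0].answer}\' for the question = \'{question}\'')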


@@ -1,8 +0,0 @@
# Add extra dependencies only required for tests and local dev setup
mypy
pytest
selenium
webdriver-manager
beautifulsoup4
markdown
responses


@@ -1,66 +0,0 @@
# basics
setuptools
wheel
# PyTorch
# Temp. disabled the next line as it gets currently resolved to https://download.pytorch.org/whl/rocm3.8/torch-1.7.1%2Brocm3.8-cp38-cp38-linux_x86_64.whl
# --find-links=https://download.pytorch.org/whl/torch_stable.html
torch>1.9,<1.11
# progress bars in model download and training scripts
tqdm
# Used for downloading models over HTTP
requests
# Scipy & sklearn for stats in run_classifier
scipy>=1.3.2
scikit-learn>=1.0.0
# Metrics or logging related
seqeval
mlflow<=1.13.1
# huggingface repository
transformers==4.13.0
# pickle extension for (de-)serialization
dill
# Inference with ONNX models. Install onnxruntime-gpu for Inference on GPUs
# onnxruntime
# onnxruntime_tools
psutil
# haystack
fastapi
uvicorn
gunicorn
pandas
psycopg2-binary; sys_platform != 'win32' and sys_platform != 'cygwin'
elasticsearch>=7.7,<=7.10
elastic-apm
tox
coverage
langdetect # for PDF conversions
# for PDF conversions using OCR
pytesseract==0.3.7
pillow==9.0.0
pdf2image==1.14.0
sentence-transformers>=0.4.0
python-multipart
python-docx
sqlalchemy>=1.4.2
sqlalchemy_utils
# for using FAISS with GPUs, install faiss-gpu
faiss-cpu>=1.6.3
tika
uvloop==0.14; sys_platform != 'win32' and sys_platform != 'cygwin'
httptools
nltk
more_itertools
networkx
# Refer milvus version support matrix at https://github.com/milvus-io/pymilvus#install-pymilvus
# For milvus 2.x version use this library `pymilvus===2.0.0rc6`
pymilvus<2.0.0
# Optional: For crawling
#selenium
#webdriver-manager
SPARQLWrapper
mmh3
weaviate-client==2.5.0
ray>=1.9.1
dataclasses-json
quantulum3
azure-ai-formrecognizer==3.2.0b2