update hpi doc (#16134)

zhang-prog committed on 2025-07-24 22:51:29 +08:00 · commit f1c3b10741 (parent a33cbf393e)
2 changed files with 26 additions and 10 deletions

@@ -26,16 +26,24 @@ Only one type of device dependency should exist in the same environment. For Win

**It is recommended to use the official PaddlePaddle Docker image to install high-performance inference dependencies.** The corresponding images for each device type are as follows:

-- `cpu`: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0`
-- `gpu`:
-    - CUDA 11.8: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6`
+- `cpu`: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-cpu`
+- `gpu`:
+    - CUDA 11.8: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6`
+    - CUDA 12.6: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-gpu-cuda12.6-cudnn9.5-trt10.5`
+
+**Notice:**
+
+- **Currently, high-performance inference with CUDA 12.6 and cuDNN 9.5 only supports the OpenVINO and ONNX Runtime backends, and does not yet support the TensorRT backend.**

### 1.2 Detailed GPU Environment Instructions
-First, ensure that the environment has the required CUDA and cuDNN versions installed. Currently, PaddleOCR only supports CUDA and cuDNN versions compatible with CUDA 11.8 + cuDNN 8.9. Below are the installation instructions for CUDA 11.8 and cuDNN 8.9:
+First, ensure that the environment has the required CUDA and cuDNN versions installed. Currently, PaddleOCR supports CUDA and cuDNN versions compatible with either CUDA 11.8 + cuDNN 8.9 or CUDA 12.6 + cuDNN 9.5. Below are the installation instructions for CUDA and cuDNN:
- [Install CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)
- [Install cuDNN 8.9](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-890/install-guide/index.html)
+- [Install CUDA 12.6](https://developer.nvidia.com/cuda-12-6-0-download-archive)
+- [Install cuDNN 9.5](https://docs.nvidia.com/deeplearning/cudnn/backend/v9.5.0/installation/linux.html)
If using the official PaddlePaddle image, the CUDA and cuDNN versions in the image already meet the requirements, and no additional installation is needed.
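
Purely as an illustration (not part of the original guide), the workflow with one of the images listed above might look like the sketch below; it assumes Docker and the NVIDIA Container Toolkit are available on the host, and uses the `gpu` device type of the `paddleocr install_hpi_deps {device_type}` command mentioned earlier in the document:

```bash
# Hypothetical walk-through: start a container from the CUDA 11.8 image listed above
# (requires Docker plus the NVIDIA Container Toolkit for --gpus support).
docker run --gpus all -it \
    ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6 \
    /bin/bash

# Inside the container, install the high-performance inference dependencies for GPU.
paddleocr install_hpi_deps gpu
```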
@@ -48,7 +56,7 @@ pip list | grep nvidia-cuda
pip list | grep nvidia-cudnn
```
-Secondly, it is recommended to ensure that a compatible version of TensorRT is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable, and the program may not achieve optimal inference performance. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If using the official PaddlePaddle image, you can install the TensorRT wheel package with the following command:
+Secondly, it is recommended to ensure that a compatible version of TensorRT is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable, and the program may not achieve optimal inference performance. Currently, PaddleOCR only supports TensorRT 8.6.1.6 in the CUDA 11.8 environment. If using the official PaddlePaddle image, you can install the TensorRT wheel package with the following command:
```bash
python -m pip install /usr/local/TensorRT-*/python/tensorrt-*-cp310-none-linux_x86_64.whl
```
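
After installing the wheel, a quick sanity check (not from the original document) is to import the package and print its version; for the CUDA 11.8 environment described above it should report 8.6.1.6:

```bash
# Confirm that the TensorRT Python package is importable and print its version.
python -c "import tensorrt; print(tensorrt.__version__)"
```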

The second changed file is the Chinese-language version of the same document, which receives the corresponding updates:

@@ -24,18 +24,26 @@ paddleocr install_hpi_deps {device_type}
Only one type of device dependency should exist in the same environment. For Windows, installation inside a Docker container or a [WSL](https://learn.microsoft.com/zh-cn/windows/wsl/install) environment is currently recommended.

-**It is recommended to use the official PaddlePaddle Docker image to install the high-performance inference dependencies.** The images corresponding to each device type are as follows:
+**It is recommended to use the official PaddleX Docker image to install the high-performance inference dependencies.** The images corresponding to each device type are as follows:

-- `cpu`: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0`
-- `gpu`:
-    - CUDA 11.8: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddle:3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6`
+- `cpu`: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-cpu`
+- `gpu`:
+    - CUDA 11.8: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-gpu-cuda11.8-cudnn8.9-trt8.6`
+    - CUDA 12.6: `ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlex/paddlex:paddlex3.0.1-paddlepaddle3.0.0-gpu-cuda12.6-cudnn9.5-trt10.5`
+
+**Note:**
+
+- **Currently, high-performance inference with CUDA 12.6 + cuDNN 9.5 only supports the OpenVINO and ONNX Runtime backends; the TensorRT backend is not yet supported.**
## 1.2 Detailed GPU Environment Instructions
-First, make sure the environment has the required CUDA and cuDNN versions installed. Currently, PaddleOCR only supports CUDA and cuDNN versions compatible with CUDA 11.8 + cuDNN 8.9. Below are the installation guides for CUDA 11.8 and cuDNN 8.9:
+First, make sure the environment has the required CUDA and cuDNN versions installed. Currently, PaddleOCR supports CUDA and cuDNN versions compatible with either CUDA 11.8 + cuDNN 8.9 or CUDA 12.6 + cuDNN 9.5. Below are the installation guides for CUDA and cuDNN:
- [Install CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)
- [Install cuDNN 8.9](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-890/install-guide/index.html)
+- [Install CUDA 12.6](https://developer.nvidia.com/cuda-12-6-0-download-archive)
+- [Install cuDNN 9.5](https://docs.nvidia.com/deeplearning/cudnn/backend/v9.5.0/installation/linux.html)
If the official PaddlePaddle image is used, the CUDA and cuDNN versions in the image already meet the requirements, and no additional installation is needed.
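
In addition to the pip-level check shown in the next hunk, the system-level CUDA toolkit and cuDNN versions can be inspected as in the sketch below (not part of the original document; the cuDNN header path varies between installations):

```bash
# Print the CUDA toolkit version reported by the compiler.
nvcc --version

# Print the cuDNN version macros from the header; the path may differ, e.g.
# /usr/include/cudnn_version.h or /usr/local/cuda/include/cudnn_version.h.
grep -E "define CUDNN_(MAJOR|MINOR|PATCHLEVEL)" /usr/include/cudnn_version.h
```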
@@ -48,7 +56,7 @@ pip list | grep nvidia-cuda
pip list | grep nvidia-cudnn
```
-Secondly, it is recommended to ensure that a compatible version of TensorRT is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable, and the program may not achieve optimal inference performance. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If the official PaddlePaddle image is used, the TensorRT wheel package can be installed with the following command:
+Secondly, it is recommended to ensure that a compatible version of TensorRT is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable, and the program may not achieve optimal inference performance. **Currently, PaddleOCR only supports TensorRT 8.6.1.6 in the CUDA 11.8 environment.** If the official PaddlePaddle image is used, the TensorRT wheel package can be installed with the following command:
```bash
python -m pip install /usr/local/TensorRT-*/python/tensorrt-*-cp310-none-linux_x86_64.whl
```
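
Once CUDA, cuDNN, and (optionally) TensorRT are in place, PaddlePaddle's built-in self-test is a convenient final sanity check (a supplementary sketch, not part of the original document); on a correctly configured machine it reports that PaddlePaddle is installed successfully and can use the GPU:

```bash
# Run PaddlePaddle's built-in environment self-test.
python -c "import paddle; paddle.utils.run_check()"
```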