[Feat] Accommodate PaddleX 3.0.2 changes (#15745)

* Fix default run_mode

* Bump paddlex version

* Fix typo
Lin Manhui, 2025-06-17 21:29:18 +08:00, committed by GitHub
parent 118e09ac2d, commit bbfa08b25b
5 changed files with 25 additions and 11 deletions


@@ -20,7 +20,7 @@ paddleocr install_hpi_deps {device_type}
 The supported device types are:
 
 - `cpu`: For CPU-only inference. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12.
-- `gpu`: For inference using either CPU or NVIDIA GPU. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12. Refer to the next subsection for detailed instructions.
+- `gpu`: For inference using either CPU or NVIDIA GPU. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12. If you want to use the full high-performance inference capabilities, you also need to ensure that a compatible version of TensorRT is installed in your environment. Refer to the next subsection for detailed instructions.
 
 Only one type of device dependency should exist in the same environment. For Windows systems, it is currently recommended to install within a Docker container or [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) environment.
@@ -48,7 +48,7 @@ pip list | grep nvidia-cuda
 pip list | grep nvidia-cudnn
 ```
 
-Secondly, ensure that the environment has the required TensorRT version installed. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If using the official PaddlePaddle image, you can install the TensorRT wheel package with the following command:
+Secondly, it is recommended to ensure that a compatible version of TensorRT is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable, and the program may not achieve optimal inference performance. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If using the official PaddlePaddle image, you can install the TensorRT wheel package with the following command:
 
 ```bash
 python -m pip install /usr/local/TensorRT-*/python/tensorrt-*-cp310-none-linux_x86_64.whl
 ```
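
The change above relaxes the TensorRT requirement from mandatory to recommended. As a quick follow-up after installing the wheel, a snippet along these lines can confirm the environment carries the one TensorRT release PaddleOCR currently supports; this is a minimal sketch, assuming only the `tensorrt` Python package installed by the command above:

```python
# Minimal sanity check (not part of this commit): verify the installed
# TensorRT matches the release PaddleOCR currently supports (8.6.1.6).
import tensorrt

print("TensorRT version:", tensorrt.__version__)
assert tensorrt.__version__.startswith("8.6.1"), (
    f"PaddleOCR currently supports TensorRT 8.6.1.6 only, found {tensorrt.__version__}"
)
```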

@@ -20,7 +20,7 @@ paddleocr install_hpi_deps {设备类型}
 The supported device types include:
 
 - `cpu`: CPU-only inference. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12.
-- `gpu`: Inference using either CPU or NVIDIA GPU. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12. See the next subsection for details.
+- `gpu`: Inference using either CPU or NVIDIA GPU. Currently supports Linux systems, x86-64 architecture processors, and Python 3.8-3.12. If you want to use the full high-performance inference capabilities, you also need to ensure that a TensorRT version meeting the requirements is installed in the environment. See the next subsection for details.
 
 Only one device type's dependencies should exist in the same environment. For Windows systems, it is currently recommended to install within a Docker container or [WSL](https://learn.microsoft.com/zh-cn/windows/wsl/install) environment.
@@ -48,7 +48,7 @@ pip list | grep nvidia-cuda
 pip list | grep nvidia-cudnn
 ```
 
-Secondly, ensure that a TensorRT version meeting the requirements is installed in the environment. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If using the official PaddlePaddle image, you can run the following command to install the TensorRT wheel package:
+Secondly, it is recommended to ensure that a TensorRT version meeting the requirements is installed in the environment; otherwise, the Paddle Inference TensorRT subgraph engine will be unavailable and the program may not achieve optimal inference performance. Currently, PaddleOCR only supports TensorRT 8.6.1.6. If using the official PaddlePaddle image, you can run the following command to install the TensorRT wheel package:
 
 ```bash
 python -m pip install /usr/local/TensorRT-*/python/tensorrt-*-cp310-none-linux_x86_64.whl
 ```
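
On systems without `grep`, the `pip list` checks quoted above can be reproduced in pure Python; the following is an illustrative stand-in using only the standard library:

```python
# Illustrative equivalent of `pip list | grep nvidia-cuda` and
# `pip list | grep nvidia-cudnn`: list installed NVIDIA CUDA/cuDNN wheels.
from importlib.metadata import distributions

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    if name.startswith(("nvidia-cuda", "nvidia-cudnn")):
        print(name, dist.version)
```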


@@ -109,7 +109,7 @@ plugins:
             PP-ChatOCRv4简介: PP-ChatOCRv4 Introduction
             推理部署: Model Deploy
             高性能推理: High-Performance Inference
-            获取onnx模型: Obtaining ONNX Models
+            获取ONNX模型: Obtaining ONNX Models
             端侧部署: On-Device Deployment
             服务化部署: Serving Deployment
             模块列表: Module List
@@ -275,7 +275,7 @@ nav:
       - PP-ChatOCRv4简介: version3.x/algorithm/PP-ChatOCRv4/PP-ChatOCRv4.md
       - 推理部署:
         - 高性能推理: version3.x/deployment/high_performance_inference.md
-        - 获取onnx模型: version3.x/deployment/obtaining_onnx_models.md
+        - 获取ONNX模型: version3.x/deployment/obtaining_onnx_models.md
         - 端侧部署: version3.x/deployment/on_device_deployment.md
         - 服务化部署: version3.x/deployment/serving.md
         - 基于Python或C++预测引擎推理: version3.x/deployment/python_and_cpp_infer.md


@@ -59,12 +59,22 @@ def prepare_common_init_args(model_name, common_args):
     device = common_args["device"]
     if device is None:
         device = get_default_device()
-    device_type, _ = parse_device(device)
+    device_type, device_ids = parse_device(device)
+    if device_ids is not None:
+        device_id = device_ids[0]
+    else:
+        device_id = None
 
-    init_kwargs = {"device": device}
+    init_kwargs = {}
     init_kwargs["use_hpip"] = common_args["enable_hpi"]
+    init_kwargs["hpi_config"] = {
+        "device_type": device_type,
+        "device_id": device_id,
+    }
 
-    pp_option = PaddlePredictorOption(model_name)
+    pp_option = PaddlePredictorOption(
+        model_name, device_type=device_type, device_id=device_id
+    )
     if device_type == "gpu":
         if common_args["use_pptrt"]:
             if common_args["pptrt_precision"] == "fp32":
@@ -74,13 +84,17 @@ def prepare_common_init_args(model_name, common_args):
                     "pptrt_precision"
                 ]
                 pp_option.run_mode = "trt_fp16"
+        else:
+            pp_option.run_mode = "paddle"
     elif device_type == "cpu":
         enable_mkldnn = common_args["enable_mkldnn"]
         if enable_mkldnn:
             pp_option.mkldnn_cache_capacity = common_args["mkldnn_cache_capacity"]
         else:
             pp_option.run_mode = "paddle"
-            pp_option.cpu_threads = common_args["cpu_threads"]
+        pp_option.cpu_threads = common_args["cpu_threads"]
+    else:
+        pp_option.run_mode = "paddle"
 
     init_kwargs["pp_option"] = pp_option
     return init_kwargs
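
Two behavioral changes are visible in this hunk pair: the parsed device ID is now forwarded to both `hpi_config` and `PaddlePredictorOption`, and `run_mode` falls back to `"paddle"` whenever TensorRT is not requested on GPU, MKL-DNN is disabled on CPU, or some other device type is selected (also, `cpu_threads` now applies regardless of MKL-DNN). The sketch below restates that selection logic in isolation; `parse_device` is a hypothetical stand-in for the real PaddleX helper, and `build_hpi_config`/`select_run_mode` are illustrative names, not functions from the codebase:

```python
def parse_device(device: str):
    """Stand-in for PaddleX's parse_device: "gpu:0,1" -> ("gpu", [0, 1])."""
    if ":" in device:
        device_type, ids = device.split(":", 1)
        return device_type, [int(i) for i in ids.split(",")]
    return device, None  # bare device string, e.g. "cpu"


def build_hpi_config(device: str) -> dict:
    # Mirrors the new hpi_config construction: only the first device ID is kept.
    device_type, device_ids = parse_device(device)
    device_id = device_ids[0] if device_ids is not None else None
    return {"device_type": device_type, "device_id": device_id}


def select_run_mode(device: str, use_pptrt: bool, enable_mkldnn: bool):
    """Return the run_mode the CLI would set, or None to keep PaddleX's default."""
    device_type, _ = parse_device(device)
    if device_type == "gpu":
        # Precision handling ("trt_fp32" vs. "trt_fp16") omitted for brevity.
        return "trt_fp16" if use_pptrt else "paddle"
    if device_type == "cpu":
        return None if enable_mkldnn else "paddle"
    return "paddle"  # new in this commit: explicit fallback for other devices


print(build_hpi_config("gpu:1,2"))             # {'device_type': 'gpu', 'device_id': 1}
print(select_run_mode("gpu:0", False, False))  # paddle (previously left unset)
print(select_run_mode("npu:0", False, False))  # paddle (previously left unset)
```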


@@ -39,7 +39,7 @@ classifiers = [
     "Topic :: Utilities",
 ]
 dependencies = [
-    "paddlex[ocr,ie,multimodal]==3.0.1",
+    "paddlex[ocr,ie,multimodal]==3.0.2",
     "PyYAML>=6",
     "typing-extensions>=4.12",
 ]
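
After reinstalling PaddleOCR from this revision, a short check such as the following (illustrative, standard library only) confirms the new pin was actually resolved:

```python
# Verify the environment picked up the newly pinned paddlex release.
from importlib.metadata import version

assert version("paddlex") == "3.0.2", version("paddlex")
```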