Mirror of https://github.com/PaddlePaddle/PaddleOCR.git, synced 2025-12-29 07:58:41 +00:00. Commit 9e7021353f (parent 42aa8a7677).
@ -2336,9 +2336,9 @@ If you need to apply the pipeline directly in your Python project, you can refer
Additionally, PaddleX provides two other deployment methods, detailed as follows:
-🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides a high-performance inference plugin aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed instructions on high-performance inference, please refer to the [PaddleX High-Performance Inference Guide](../../../pipeline_deploy/high_performance_inference.md).
+🚀 **High-Performance Inference**: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides a high-performance inference plugin aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed instructions on high-performance inference, please refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.en.md).
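To make the option concrete: with the PaddleX Python API, the plugin is opted into through a single flag on `create_pipeline`. A minimal sketch, assuming the plugin itself has already been installed per the guide above:

```python
# A minimal sketch, assuming the PaddleX Python API and an installed
# high-performance inference plugin; use_hpip=True opts the pipeline in.
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="PP-ChatOCRv4-doc",  # pipeline name as used elsewhere in this document
    use_hpip=True,                # enable the high-performance inference plugin
)
```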
-☁️ **Serving**: Serving is a common deployment form in actual production environments. By encapsulating the inference functionality as a service, clients can access these services through network requests to obtain inference results. PaddleX supports multiple serving solutions for pipelines. For detailed instructions on serving, please refer to the [PaddleX Serving Guide](../../../pipeline_deploy/serving.md).
+☁️ **Serving**: Serving is a common deployment form in actual production environments. By encapsulating the inference functionality as a service, clients can access these services through network requests to obtain inference results. PaddleX supports multiple serving solutions for pipelines. For detailed instructions on serving, please refer to the [Service Deployment Guide](../deployment/serving.en.md).
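As an illustration of the client side, a minimal sketch follows, assuming a server started with `paddlex --serve --pipeline PP-ChatOCRv4-doc` listening on the default port; the endpoint path and payload field names are assumptions modeled on the serving guide's conventions, not verified here:

```python
# A minimal client sketch; the endpoint path and payload schema are
# assumptions -- consult the serving guide linked above for the real API.
import base64

import requests

with open("demo.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:8080/chatocr-visual",    # assumed visual-analysis endpoint
    json={"file": image_data, "fileType": 1},  # assumed fields: base64 file, 1 = image
)
resp.raise_for_status()
print(resp.json())
```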
Below are the API references for basic serving and multi-language service invocation examples:
@ -2993,10 +2993,6 @@ print(result_chat["chatResult"])
</details>
<br/>
📱 **Edge Deployment**: Edge deployment is a method where computing and data processing functions are placed on the user's device itself. The device can directly process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed instructions on edge deployment, please refer to the [PaddleX Edge Deployment Guide](../../../pipeline_deploy/edge_deploy.md).
You can choose an appropriate deployment method for your pipeline based on your needs and proceed with subsequent AI application integration.
## 4. Custom Development
If the default model weights provided by the PP-ChatOCRv4 pipeline do not meet your requirements in terms of accuracy or speed, you can try to fine-tune the existing model using your own domain-specific or application-specific data to improve the recognition performance of the PP-ChatOCRv4 pipeline in your scenario.
@ -863,20 +863,21 @@ Before using the PP-StructureV3 pipeline locally, please ensure that you have co
### 2.1 Experiencing via Command Line
-You can quickly experience the PP-StructureV3 pipeline with a single command. Use the [test file](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png) and replace `--input` with the local path to perform prediction.
+You can quickly experience the PP-StructureV3 pipeline with a single command.
-```
-paddlex --pipeline PP-StructureV3 \
-    --input pp_structure_v3_demo.png \
-    --use_doc_orientation_classify False \
-    --use_doc_unwarping False \
-    --use_textline_orientation False \
-    --use_e2e_wireless_table_rec_model True \
-    --save_path ./output \
-    --device gpu:0
-```
+```bash
+# Run the PP-StructureV3 pipeline on the demo image fetched by URL
+paddleocr pp_structurev3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png
+
+# Enable document image orientation classification
+paddleocr pp_structurev3 -i ./pp_structure_v3_demo.png --use_doc_orientation_classify True
+
+# Enable document unwarping
+paddleocr pp_structurev3 -i ./pp_structure_v3_demo.png --use_doc_unwarping True
+
+# Disable text line orientation classification
+paddleocr pp_structurev3 -i ./pp_structure_v3_demo.png --use_textline_orientation False
+
+# Run inference on GPU
+paddleocr pp_structurev3 -i ./pp_structure_v3_demo.png --device gpu
+```
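For Python-side use, a minimal sketch is given below, assuming the PaddleOCR 3.x `PPStructureV3` class and its result-saving helpers; the full parameter list lives in the section referenced next:

```python
# A minimal sketch, assuming the PaddleOCR 3.x PPStructureV3 API.
from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,  # mirror the CLI flags shown above
    use_doc_unwarping=False,
)
for res in pipeline.predict("./pp_structure_v3_demo.png"):
    res.print()                                  # structured result in the terminal
    res.save_to_json(save_path="./output/")      # persist as JSON
    res.save_to_markdown(save_path="./output/")  # persist as Markdown
```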
-The parameter description can be found in [2.2.2 Python Script Integration](#222-python-script-integration). Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices).
+The parameter description can be found in [2.2 Python Script Integration](#22PythonScriptIntegration). Multiple devices can be specified simultaneously for parallel inference. For details, please refer to [Pipeline Parallel Inference](https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/instructions/parallel_inference.html).
After running, the result will be printed to the terminal, as follows:
@ -384,7 +384,7 @@ comments: true
## 2. Quick Start
-Before using the formula recognition pipeline locally, please ensure that you have completed the wheel package installation according to the [installation tutorial](../ppocr/installation.md). Once installed, you can experience the pipeline via the command line or integrate it with Python.
+Before using the formula recognition pipeline locally, please ensure that you have completed the wheel package installation according to the [installation tutorial](../installation.md). Once installed, you can experience the pipeline via the command line or integrate it with Python.
### 2.1 Experiencing via Command Line
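As a companion to the command line, a Python sketch follows; the `FormulaRecognitionPipeline` class name and the demo file name are assumptions, so check them against your installed PaddleOCR version:

```python
# A minimal sketch; the class name and input file name are assumptions.
from paddleocr import FormulaRecognitionPipeline

pipeline = FormulaRecognitionPipeline()
for res in pipeline.predict("./formula_demo.png"):  # hypothetical local image
    res.print()  # prints the recognized LaTeX source
```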
@ -1019,7 +1019,7 @@ In the above Python script, the following steps were executed:
</tr>
<tr>
<td><code>device</code></td>
-<td>The device used for pipeline inference. It supports specifying the specific card number of the GPU, such as "gpu:0", other hardware card numbers, such as "npu:0", or CPU, such as "cpu". Supports specifying multiple devices simultaneously for parallel inference. For details, please refer to <a href="../../instructions/parallel_inference.en.md#specifying-multiple-inference-devices">Pipeline Parallel Inference</a>.</td>
+<td>The device used for pipeline inference. It supports specifying the specific card number of the GPU, such as "gpu:0", other hardware card numbers, such as "npu:0", or CPU, such as "cpu". Multiple devices can be specified simultaneously for parallel inference. For details, please refer to <a href="https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_usage/instructions/parallel_inference.html">Pipeline Parallel Inference</a>.</td>
<td><code>str</code></td>
<td><code>gpu:0</code></td>
</tr>
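To ground the multi-device note in the row above, a minimal sketch, assuming the PaddleX-style `create_pipeline` API and a comma-separated card list as described in the linked parallel-inference guide:

```python
# A minimal sketch, assuming device="gpu:0,1" spreads pipeline instances
# across both cards for parallel inference, per the guide linked above.
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="PP-StructureV3", device="gpu:0,1")
inputs = ["doc_0.png", "doc_1.png", "doc_2.png", "doc_3.png"]  # hypothetical files
for res in pipeline.predict(inputs):
    res.print()
```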