Mirror of https://github.com/PaddlePaddle/PaddleOCR.git, synced 2025-06-26 21:24:27 +00:00
update on-device deployment hyperlink (#15718)
Parent: fdf9dc25be
Commit: 8e7994992b
@@ -51,7 +51,7 @@ PaddleOCR is being oversight by a [PMC](https://github.com/PaddlePaddle/PaddleOC
 - PaddleX, an All-in-One development tool based on PaddleOCR's advanced technology, supports low-code full-process development capabilities in the OCR field:
 - 🎨 [**Rich Model One-Click Call**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/quick_start.html): Integrates **17 models** related to text image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 6 pipelines, which can be quickly experienced through a simple **Python API one-click call**. In addition, the same set of APIs also supports a total of **200+ models** in image classification, object detection, image segmentation, and time series forecasting, forming 20+ single-function modules, making it convenient for developers to use **model combinations**.

-- 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and edge deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.
+- 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and on-device deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.

 - Supports [PP-ChatOCRv3-doc](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md), [high-precision layout detection model based on RT-DETR](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/layout_detection_en.md) and [high-efficiency layout area detection model based on PicoDet](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/layout_detection_en.md), [high-precision table structure recognition model](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md), text image unwarping model [UVDoc](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/text_image_unwarping_en.md), formula recognition model [LatexOCR](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/formula_recognition_en.md), and [document image orientation classification model based on PP-LCNet](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md).
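The README bullet kept as context above refers to a "simple Python API one-click call" for PaddleX pipelines. The snippet below is a minimal sketch of that pattern, modeled on the PaddleX quick-start documentation; the pipeline name, input file, output directory, and result methods are illustrative and may differ between PaddleX releases.

```python
# Minimal sketch of the "one-click" pipeline call, following the PaddleX
# quick-start pattern. File names and the output directory are placeholders.
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="OCR")         # build the general OCR pipeline
output = pipeline.predict("sample_document.png")   # run inference on a local image

for res in output:
    res.print()                    # print recognized text and scores to stdout
    res.save_to_img("./output/")   # save a visualization of detected text regions
    res.save_to_json("./output/")  # save structured results for downstream use
```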
@@ -44,7 +44,7 @@ hide:
 - **Rebuilt Deployment Capabilities with Unified Inference Interface:**
 - Integrates PaddleX3.0's core features for a comprehensive upgrade of the inference and deployment modules.
 - Optimizes the design from version 2.x and unifies the Python API and CLI.
-- Supports high-performance inference, serving, and edge deployment scenarios.
+- Supports high-performance inference, serving, and on-device deployment scenarios.

 - **Optimized Training with PaddlePaddle Framework 3.0:**
 - Compatible with the latest features such as the CINN compiler.
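The hunk above mentions a unified Python API and CLI for inference in PaddleOCR 3.x. A hedged sketch of that 3.x-style Python entry point is shown below; the `PaddleOCR` class is real, while the exact constructor arguments and result methods follow the 3.x documentation pattern and may vary by version.

```python
# Hedged sketch of the unified 3.x-style Python API referenced in the hunk above.
# Arguments and result-object methods are assumptions based on the 3.x docs.
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")                    # create the OCR pipeline with default models
result = ocr.predict("sample_document.png")   # unified inference entry point in 3.x

for res in result:
    res.print()                # recognized text, boxes, and confidence scores
    res.save_to_img("output")  # visualization saved alongside the JSON results
    res.save_to_json("output")
```

The equivalent command-line form is assumed to be something like `paddleocr ocr -i sample_document.png`, which is what the "unifies the Python API and CLI" bullet refers to.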
@@ -76,7 +76,7 @@ hide:
 * PaddleX, an All-in-One development tool based on PaddleOCR's advanced technology, supports low-code full-process development capabilities in the OCR field:
 * 🎨 [**Rich Model One-Click Call**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/quick_start.html): Integrates **17 models** related to text image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 6 pipelines, which can be quickly experienced through a simple **Python API one-click call**. In addition, the same set of APIs also supports a total of **200+ models** in image classification, object detection, image segmentation, and time series forecasting, forming 20+ single-function modules, making it convenient for developers to use **model combinations**.

-* 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and edge deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.
+* 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and on-device deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.

 * Supports [PP-ChatOCRv3-doc](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_usage/tutorials/information_extraction_pipelines/document_scene_information_extraction_en.md), [high-precision layout detection model based on RT-DETR](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/layout_detection_en.md) and [high-efficiency layout area detection model based on PicoDet](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/layout_detection_en.md), [high-precision table structure recognition model](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/table_structure_recognition_en.md), text image unwarping model [UVDoc](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/text_image_unwarping_en.md), formula recognition model [LatexOCR](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/formula_recognition_en.md), and [document image orientation classification model based on PP-LCNet](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/module_usage/tutorials/ocr_modules/doc_img_orientation_classification_en.md).
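The changed bullet in this hunk also mentions "unified commands" and seamless switching across hardware such as NVIDIA GPU, Kunlunxin XPU, and Ascend NPU. The sketch below illustrates that idea: the same CLI invocation with a different `--device` value targets different hardware. The `paddlex` CLI flags shown follow the PaddleX documentation but should be treated as assumptions for your installed version.

```python
# Illustrative sketch of the "unified commands" idea: one command, different devices.
# The CLI flags are assumptions based on the PaddleX docs; adjust to your release.
import subprocess

for device in ("cpu", "gpu:0"):  # e.g. "npu:0" or "xpu:0" on Ascend/Kunlunxin machines
    subprocess.run(
        ["paddlex", "--pipeline", "OCR",
         "--input", "sample_document.png",
         "--device", device],
        check=True,  # raise if the pipeline command fails
    )
```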
@@ -17,7 +17,7 @@ Given this background, we’ve decided to implement a major, non-backward-compat
 The 3.x upgrade consists of three major enhancements:

 1. **New Model Pipelines**: Introduced several new pipelines such as PP-OCRv5, PP-StructureV3, and PP-ChatOCR v4, covering a wide range of base models. These significantly enhance recognition capabilities for various text types, including handwriting, to meet the growing demand for high-precision parsing in complex documents. All models are ready-to-use out of the box, improving development efficiency.
-2. **Refactored Deployment and Unified Inference Interface**: The deployment module in PaddleOCR 3.x is rebuilt using [PaddleX](../version3.x/paddleocr_and_paddlex.en.md)’s underlying capabilities, fixing design flaws from 2.x and unifying both Python APIs and CLI interfaces. The deployment now supports three main scenarios: high-performance inference, service-oriented deployment, and edge deployment.
+2. **Refactored Deployment and Unified Inference Interface**: The deployment module in PaddleOCR 3.x is rebuilt using [PaddleX](../version3.x/paddleocr_and_paddlex.en.md)’s underlying capabilities, fixing design flaws from 2.x and unifying both Python APIs and CLI interfaces. The deployment now supports three main scenarios: high-performance inference, service-oriented deployment, and on-device deployment.
 3. **PaddlePaddle 3.0 Compatibility and Optimized Training**: The new version is fully compatible with PaddlePaddle 3.0, including features like the CINN compiler. It also introduces a standardized model naming system to streamline future updates and maintenance.

 Some legacy features from PaddleOCR 2.x remain partially supported in 3.x. For more information, refer to [Legacy Features](../version2.x/legacy/index.en.md).
@@ -76,6 +76,6 @@ PaddleOCR 3.0 is still under active development. Current known limitations inclu

 1. Incomplete support for native C++ deployment.
 2. High-performance service-oriented deployment is not yet on par with PaddleServing in 2.x.
-3. Edge deployment currently supports only a subset of key models, with broader support pending.
+3. On-Device Deployment currently supports only a subset of key models, with broader support pending.

 If you encounter any issues during use, feel free to submit feedback via GitHub issues. We also warmly welcome more community members to contribute to PaddleOCR's future. Thank you for your continued support and interest!
@@ -4,12 +4,12 @@ The All-in-One development tool [PaddleX](https://github.com/PaddlePaddle/Paddle

 * 🎨 [**Rich Model One-Click Call**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/quick_start.html): Integrates **48 models** related to text image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 10 pipelines, which can be quickly experienced through a simple **Python API one-click call**. In addition, the same set of APIs also supports a total of **200+ models** in image classification, object detection, image segmentation, and time series forecasting, forming 30+ single-function modules, making it convenient for developers to use **model combinations**.

-* 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and edge deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.
+* 🚀 [**High Efficiency and Low barrier of entry**](https://paddlepaddle.github.io/PaddleOCR/latest/en/paddlex/overview.html): Provides two methods based on **unified commands** and **GUI** to achieve simple and efficient use, combination, and customization of models. Supports multiple deployment methods such as **high-performance inference, service-oriented deployment, and on-device deployment**. Additionally, for various mainstream hardware such as **NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU**, models can be developed with **seamless switching**.

 > **Note**: PaddleX is committed to achieving pipeline-level model training, inference, and deployment. A model pipeline refers to a series of predefined development processes for specific AI tasks, including combinations of single models (single-function modules) that can independently complete a type of task.
 ## 2. OCR-Related Capability Support

-In PaddleX, all 6 OCR-related pipelines support **local inference**, and some pipelines support **online experience**. You can quickly experience the pre-trained model effects of each pipeline. If you are satisfied with the pre-trained model effects of a pipeline, you can directly proceed with [high-performance inference](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_deploy/high_performance_inference_en.md)/[service-oriented deployment](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_deploy/service_deploy_en.md)/[edge deployment](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_deploy/edge_deploy_en.md). If not satisfied, you can also use the **custom development** capabilities of the pipeline to improve the effects. For the complete pipeline development process, please refer to [PaddleX Pipeline Usage Overview](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_usage/pipeline_develop_guide_en.md) or the tutorials for each pipeline.
+In PaddleX, all 6 OCR-related pipelines support **local inference**, and some pipelines support **online experience**. You can quickly experience the pre-trained model effects of each pipeline. If you are satisfied with the pre-trained model effects of a pipeline, you can directly proceed with [high-performance inference](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_deploy/high_performance_inference_en.md)/[service-oriented deployment](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_deploy/service_deploy_en.md)/[on-device deployment](https://paddlepaddle.github.io/PaddleX/latest/en/pipeline_deploy/on_device_deployment.html). If not satisfied, you can also use the **custom development** capabilities of the pipeline to improve the effects. For the complete pipeline development process, please refer to [PaddleX Pipeline Usage Overview](https://github.com/PaddlePaddle/PaddleX/blob/release/3.0-beta1/docs/pipeline_usage/pipeline_develop_guide_en.md) or the tutorials for each pipeline.

 In addition, PaddleX provides developers with a full-process efficient model training and deployment tool based on a [cloud-based GUI](https://aistudio.baidu.com/pipeline/mine). Developers **do not need code development**, just need to prepare a dataset that meets the pipeline requirements to **quickly start model training**. For details, please refer to the tutorial ["Developing Industrial-level AI Models with Zero Barrier"](https://aistudio.baidu.com/practical/introduce/546656605663301).
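The paragraph updated in this hunk lists high-performance inference, service-oriented deployment, and on-device deployment as follow-on options once the pre-trained pipeline results are satisfactory. The sketch below shows how the same pipeline call might be pointed at different hardware and at the high-performance inference backend. The `device` argument and the `use_hpip` flag follow the PaddleX documentation but are assumptions here; verify them against the PaddleX version you have installed.

```python
# Hedged sketch: one pipeline definition, different execution targets.
# The device string and use_hpip flag are assumptions based on PaddleX docs.
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="OCR",
    device="gpu:0",  # swap to "cpu", "npu:0", etc. without changing call sites
    use_hpip=True,   # opt into the high-performance inference plugin if installed
)

for res in pipeline.predict("sample_document.png"):
    res.save_to_json("./output/")  # structured results for downstream processing
```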
@@ -20,7 +20,7 @@ In addition, PaddleX provides developers with a full-process efficient model tra
 <th>Local Inference</th>
 <th>High-Performance Inference</th>
 <th>Service-Oriented Deployment</th>
-<th>Edge Deployment</th>
+<th>On-Device Deployment</th>
 <th>Custom Development</th>
 <th><a href="https://aistudio.baidu.com/pipeline/mine">No-Code Development On AI Studio</a></td>
 </tr>
@@ -16,7 +16,7 @@ hide:

 ## 2. OCR-Related Capability Support

-All 10 OCR-related pipelines in PaddleX support local **quick inference**, and some pipelines support **online experience**, so you can quickly try out the pre-trained model effects of each pipeline. If you are satisfied with a pipeline's pre-trained models, you can proceed directly to [high-performance inference](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/high_performance_inference.html)/[service-oriented deployment](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/service_deploy.html)/[on-device deployment](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/edge_deploy.html); if not, you can also use the pipeline's **custom development** capabilities to improve the results. For the complete pipeline development process, please refer to the [PaddleX Pipeline Usage Overview](https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/pipeline_develop_guide.html) or the tutorial for each pipeline.
+All 10 OCR-related pipelines in PaddleX support local **quick inference**, and some pipelines support **online experience**, so you can quickly try out the pre-trained model effects of each pipeline. If you are satisfied with a pipeline's pre-trained models, you can proceed directly to [high-performance inference](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/high_performance_inference.html)/[service-oriented deployment](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/service_deploy.html)/[on-device deployment](https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/on_device_deployment.html); if not, you can also use the pipeline's **custom development** capabilities to improve the results. For the complete pipeline development process, please refer to the [PaddleX Pipeline Usage Overview](https://paddlepaddle.github.io/PaddleX/latest/pipeline_usage/pipeline_develop_guide.html) or the tutorial for each pipeline.

 In addition, PaddleX provides developers with a full-process development tool based on a [cloud-based GUI](https://aistudio.baidu.com/pipeline/mine). For details, please refer to the tutorial ["Developing Industrial-Level AI Models with Zero Barrier"](https://aistudio.baidu.com/practical/introduce/546656605663301).
@@ -110,7 +110,7 @@ plugins:
 推理部署: Model Deploy
 高性能推理: High-Performance Inference
 获取onnx模型: Obtaining ONNX Models
-端侧部署: Edge Deployment
+端侧部署: On-Device Deployment
 服务化部署: Serving Deployment
 模块列表: Module List
 文档图像方向分类模块: Document Image Orientation Classification Module