[Docs] Add upgrade notes and fix docs (#15198)
* Unify refs
* Fix extra newline
* Update mkldnn_blocklists
* Add upgrade notes
* Fix docs
* Add English upgrade notes
* Bump paddlex to 3.0.0
* Update upgrade notes
This commit is contained in:
parent c8f5a31f10
commit eac5578fe2
docs/upgrade_notes.en.md (new file, 81 lines)
@@ -0,0 +1,81 @@
# PaddleOCR 3.x Upgrade Notes
## 1. Why Upgrade from PaddleOCR 2.x to 3.x?
Since the release of PaddleOCR 2.0 in February 2021, the community has experienced over four years of rapid growth: GitHub stars, community users and contributors, and issue and PR counts have all grown exponentially. With emerging needs such as multilingual recognition and layout analysis, PaddleOCR continued to expand its capabilities throughout the 2.x series. However, the original lightweight-centric architecture struggled to accommodate the growing complexity and rising maintenance costs brought by this feature boom.
As more module branches and "bridging" layers were added to the codebase, issues such as code duplication and inconsistent interfaces became increasingly prominent. Testing became more difficult, and development efficiency was severely constrained. In addition, legacy dependencies became incompatible with newer versions of PaddlePaddle, limiting access to its latest features and slowing down training and inference. Under such circumstances, continuing to patch the existing architecture would only increase technical debt and system fragility.
Meanwhile, Transformer-based vision-language models are injecting new momentum into advanced scenarios such as document understanding, image-text summarization, and intelligent proofreading. The community is eager to go beyond traditional OCR recognition and fully harness the powerful contextual understanding and reasoning capabilities of these models. At the same time, lightweight OCR models can still work in tandem with large models—both supporting the input needs of large models in document parsing and achieving complementary strengths to further enhance overall system performance.
Moreover, the official release of PaddlePaddle 3.0 in April 2025 brought groundbreaking upgrades in unified training/inference and domestic hardware adaptation. This calls for a significant update to PaddleOCR in both its training and inference components.
Given this background, we decided to implement a major, non-backward-compatible upgrade from 2.x to 3.x. The new version introduces a modular, plugin-based architecture. While retaining familiar usage patterns as much as possible, it integrates large model capabilities, offers richer features, and leverages the latest advancements of PaddlePaddle 3.0. The result is reduced maintenance cost, improved performance, and a solid foundation for future feature expansion.
## 2. Key Upgrades from PaddleOCR 2.x to 3.x
The 3.x upgrade consists of three major enhancements:
1. **New Model Pipelines**: Introduced several new pipelines such as PP-OCRv5, PP-StructureV3, and PP-ChatOCRv4, covering a wide range of base models. These significantly enhance recognition capabilities for various text types, including handwriting, to meet the growing demand for high-precision parsing of complex documents. All models are ready to use out of the box, improving development efficiency.
2. **Refactored Deployment and Unified Inference Interface**: The deployment module in PaddleOCR 3.x is rebuilt using PaddleX’s underlying capabilities, fixing design flaws from 2.x and unifying both Python APIs and CLI interfaces. The deployment now supports three main scenarios: high-performance inference, service-oriented deployment, and edge deployment.
3. **PaddlePaddle 3.0 Compatibility and Optimized Training**: The new version is fully compatible with PaddlePaddle 3.0, including features like the CINN compiler. It also introduces a standardized model naming system to streamline future updates and maintenance.
Some legacy features from PaddleOCR 2.x remain partially supported in 3.x. For more information, refer to [Legacy Features](version2.x/legacy/index.en.md).
## 3. Migrating Inference Code from PaddleOCR 2.x to 3.x
For OCR tasks, PaddleOCR 3.x still supports a usage pattern similar to 2.x. Here’s an example using the Python API in 2.x:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")
result = ocr.ocr("img.png")
for res in result:
    for line in res:
        print(line)

# Visualization
from PIL import Image
from paddleocr import draw_ocr

result = result[0]
image = Image.open("img.png").convert("RGB")
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path="simfang.ttf")
im_show = Image.fromarray(im_show)
im_show.save("result.jpg")
```
In PaddleOCR 3.x, this workflow is further simplified:
```python
from paddleocr import PaddleOCR

# Basic initialization parameters remain the same
ocr = PaddleOCR(lang="en")
result = ocr.ocr("img.png")
# Or use the new unified interface
# result = ocr.predict("img.png")
for res in result:
    # Directly print recognition results, no nested loops required
    res.print()

    # Visualization and saving results are simpler
    res.save_to_img("result")
```
It’s worth noting that the `PPStructure` module in PaddleOCR 2.x has been removed in 3.x. We recommend switching to `PPStructureV3`, which offers richer functionality and better parsing results. Refer to the relevant documentation for usage details.
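
As a quick orientation, here is a minimal sketch of the replacement workflow. It assumes `PPStructureV3` follows the same unified `predict()` and result-object conventions as the OCR example above; confirm the exact method set against the PPStructureV3 documentation.

```python
from paddleocr import PPStructureV3

# Assumption: PPStructureV3 follows the unified pipeline API shown above,
# with predict() yielding result objects that support print()/save_to_img().
pipeline = PPStructureV3()
output = pipeline.predict("doc_img.png")  # hypothetical input image path
for res in output:
    res.print()                 # print structured parsing results
    res.save_to_img("output")   # save visualization images to a directory
```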
Also, in 2.x, the `show_log` parameter could be passed when creating a `PaddleOCR` object to control logging. However, this design affected all `PaddleOCR` instances due to the use of a shared logger—clearly not the expected behavior. PaddleOCR 3.x introduces a brand-new logging system to address this issue. For more details, see [Logging](version3.x/logging.en.md).
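
For instance, assuming the new logging system is built on Python's standard `logging` module with a library logger named `paddleocr` (an assumption here, not a documented guarantee), verbosity could be adjusted per application as sketched below; the Logging guide describes the officially supported configuration.

```python
import logging

# Assumption: PaddleOCR 3.x emits logs through Python's standard logging
# module under a logger named "paddleocr". See the Logging guide for the
# officially supported configuration surface.
logging.getLogger("paddleocr").setLevel(logging.WARNING)  # hide INFO messages
```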
## 4. Known Issues in PaddleOCR 3.x
PaddleOCR 3.x is still under active development. Current known limitations include:
1. Incomplete support for native C++ deployment.
2. High-performance service-oriented deployment is not yet on par with PaddleServing in 2.x.
3. Edge deployment currently supports only a subset of key models, with broader support pending.
If you encounter any issues during use, feel free to submit feedback via GitHub issues. We also warmly welcome more community members to contribute to PaddleOCR's future. Thank you for your continued support and interest!
docs/upgrade_notes.md (new file, 81 lines)
@@ -0,0 +1,81 @@

# PaddleOCR 3.x Upgrade Notes

## 1. Why Upgrade PaddleOCR from 2.x to 3.x?

Since the release of version 2.0 in February 2021, the PaddleOCR community has gone through more than four years of rapid growth: GitHub stars, community users and contributors, and issue and PR counts have all increased exponentially. Driven by new needs such as multilingual recognition and layout analysis, PaddleOCR kept adding features throughout the 2.x series, but its originally lightweight-centric architecture became unable to cope with the complexity and maintenance costs brought by this feature boom.

As module branches and "bridging" layers kept accumulating in the codebase, duplicated implementations and inconsistent interfaces became increasingly prominent, testing grew ever more difficult, and development efficiency was severely constrained. Moreover, incompatibilities between legacy dependencies and the latest PaddlePaddle releases restricted access to new framework features and further slowed down training and inference. Under these circumstances, continuing to patch the existing architecture would only bring more technical debt and system fragility.

On the other hand, Transformer-based vision-language models are injecting new momentum into advanced application scenarios such as document understanding, image-text summarization, and intelligent proofreading. The community is eager for such models to break through the limits of traditional OCR and directly bring their stronger contextual understanding and reasoning to bear. At the same time, traditional lightweight OCR models can work in tandem with large models, both meeting the input needs of large models for document parsing and, through this collaboration, complementing their strengths to further improve overall system performance.

In addition, the official PaddlePaddle 3.0 release in April 2025 brought groundbreaking upgrades in unified training/inference and domestic hardware adaptation, which in turn imposed new requirements on PaddleOCR's training and inference layers.

Given this background, we decided to carry out a major, non-backward-compatible upgrade of PaddleOCR from 2.x to 3.x. The new version adopts a modular, plugin-based architecture. While changing user habits as little as possible, it incorporates large models, provides richer functionality, and makes full use of the new features of PaddlePaddle 3.0, both clearing out redundancy to reduce maintenance costs and laying a more solid foundation for performance and feature expansion.

## 2. Key Upgrades from PaddleOCR 2.x to 3.x

The upgrade mainly consists of three parts:

1. **New model pipelines**: Introduced multiple model pipelines such as PP-OCRv5, PP-StructureV3, and PP-ChatOCRv4, along with base models covering many directions, with a focus on stronger multi-script and handwriting recognition to meet the strong demand of large-model applications for high-precision parsing of complex documents. All models can be used out of the box, improving development efficiency.
2. **Rebuilt deployment capabilities with unified inference interfaces**: PaddleOCR 3.x incorporates the underlying capabilities of the [PaddleX](version3.x/paddleocr_and_paddlex.md) toolkit to fully upgrade its inference and deployment modules, fixing design mistakes from the 2.x line and unifying and refining the Python API and command-line interface (CLI). Deployment now covers three major scenarios: high-performance inference, serving, and on-device deployment.
3. **PaddlePaddle 3.0 adaptation with an optimized training workflow**: The new version is compatible with the latest PaddlePaddle 3.0 features such as the CINN compiler, and updates the model naming system to a more standardized, unified scheme, laying the groundwork for future iteration and maintenance.

PaddleOCR 3.x still provides a degree of compatibility support for some legacy features of PaddleOCR 2.x. For details, see [Legacy Features](version2.x/legacy/index.md).

## 3. Migrating Inference Code from PaddleOCR 2.x to 3.x

For OCR tasks, PaddleOCR 3.x still supports usage similar to PaddleOCR 2.x. Taking the Python API as an example, here is a common PaddleOCR 2.x workflow:
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en")
result = ocr.ocr("img.png")
for res in result:
    for line in res:
        print(line)

# Visualization
from PIL import Image
from paddleocr import draw_ocr

result = result[0]
image = Image.open("img.png").convert("RGB")
boxes = [line[0] for line in result]
txts = [line[1][0] for line in result]
scores = [line[1][1] for line in result]
im_show = draw_ocr(image, boxes, txts, scores, font_path="simfang.ttf")
im_show = Image.fromarray(im_show)
im_show.save("result.jpg")
```

In PaddleOCR 3.x, this workflow is further simplified, as shown below:

```python
from paddleocr import PaddleOCR

# Basic initialization parameters stay the same
ocr = PaddleOCR(lang="en")
result = ocr.ocr("img.png")
# The new unified interface can also be used
# result = ocr.predict("img.png")
for res in result:
    # Print recognition results directly by calling a method, no nested loops required
    res.print()

    # Visualization and result saving are simpler
    res.save_to_img("result")
```

Note in particular that the `PPStructure` pipeline provided by PaddleOCR 2.x has been removed in PaddleOCR 3.x. We recommend replacing it with `PPStructureV3`, which offers richer functionality and better parsing results; see the relevant documentation for the new interface.

In addition, PaddleOCR 2.x allowed a `show_log` parameter to be passed when constructing a `PaddleOCR` object to control logging. This design had a limitation: because all `PaddleOCR` instances shared one logger, logging behavior set on one instance also affected the others, which is clearly not what users expect. To solve this problem, PaddleOCR 3.x introduces a brand-new logging system. For details, see [Logging](version3.x/logging.md).

## 4. Known Issues in PaddleOCR 3.x

PaddleOCR 3.x is still being iterated on and optimized. Known gaps include:

1. Support for native C++ deployment is not yet complete.
2. A high-performance serving solution on par with the PaddleServing-based solution of PaddleOCR 2.x is not yet available.
3. On-device deployment currently supports only some key models; support for the remaining models has not yet been opened up.

If you encounter problems during use, feel free to submit feedback in the issue tracker. We also sincerely invite more community users to take part in building PaddleOCR. Thank you all for your continued attention and support!

@@ -1,19 +1,19 @@

-# Serving Deployment
+# Serving

-Serving deployment is a common deployment method in real-world production environments. By encapsulating inference capabilities as services, clients can access these services via network requests to obtain inference results. PaddleOCR recommends using [PaddleX](https://github.com/PaddlePaddle/PaddleX) for serving deployment. Please refer to [Differences and Connections between PaddleOCR and PaddleX](../paddleocr_and_paddlex.en.md#1-Differences-and-Connections-Between-PaddleOCR-and-PaddleX) to understand the relationship between PaddleOCR and PaddleX.
+Serving is a common deployment method in real-world production environments. By encapsulating inference capabilities as services, clients can access these services via network requests to obtain inference results. PaddleOCR recommends using [PaddleX](https://github.com/PaddlePaddle/PaddleX) for serving. Please refer to [Differences and Connections between PaddleOCR and PaddleX](../paddleocr_and_paddlex.en.md#1-Differences-and-Connections-Between-PaddleOCR-and-PaddleX) to understand the relationship between PaddleOCR and PaddleX.

-PaddleX provides the following serving deployment solutions:
+PaddleX provides the following serving solutions:

-- **Basic Serving Deployment**: An easy-to-use serving deployment solution with low development costs.
-- **High-Stability Serving Deployment**: Built based on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server). Compared to the basic serving deployment, this solution offers higher stability and allows users to adjust configurations to optimize performance.
+- **Basic Serving**: An easy-to-use serving solution with low development costs.
+- **High-Stability Serving**: Built based on [NVIDIA Triton Inference Server](https://developer.nvidia.com/triton-inference-server). Compared to the basic serving, this solution offers higher stability and allows users to adjust configurations to optimize performance.

-**It is recommended to first use the basic serving deployment solution for quick validation**, and then evaluate whether to try more complex solutions based on actual needs.
+**It is recommended to first use the basic serving solution for quick validation**, and then evaluate whether to try more complex solutions based on actual needs.

-## 1. Basic Serving Deployment
+## 1. Basic Serving

### 1.1 Install Dependencies

-Run the following command to install the PaddleX serving deployment plugin via PaddleX CLI:
+Run the following command to install the PaddleX serving plugin via PaddleX CLI:

```bash
paddlex --install serving
```

@@ -44,7 +44,7 @@ INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

To adjust configurations (such as model path, batch size, deployment device, etc.), specify `--pipeline` as a custom configuration file. Refer to [PaddleOCR and PaddleX](../paddleocr_and_paddlex.en.md) for the mapping between PaddleOCR pipelines and PaddleX pipeline registration names, as well as how to obtain and modify PaddleX pipeline configuration files.

-The command-line options related to serving deployment are as follows:
+The command-line options related to serving are as follows:

<table>
<thead>

@@ -85,8 +85,8 @@ The command-line options related to serving deployment are as follows:

The <b>"Development Integration/Deployment"</b> section in the PaddleOCR pipeline tutorial provides API references and multi-language invocation examples for the service.

-## 2. High-Stability Serving Deployment
+## 2. High-Stability Serving

-Please refer to the [PaddleX Serving Deployment Guide](https://paddlepaddle.github.io/PaddleX/3.0/en/pipeline_deploy/serving.html#2). More information about PaddleX pipeline configuration files can be found in [Using PaddleX Pipeline Configuration Files](../paddleocr_and_paddlex.en.md#3-using-paddlex-pipeline-configuration-files).
+Please refer to the [PaddleX Serving Guide](https://paddlepaddle.github.io/PaddleX/3.0/en/pipeline_deploy/serving.html#2). More information about PaddleX pipeline configuration files can be found in [Using PaddleX Pipeline Configuration Files](../paddleocr_and_paddlex.en.md#3-using-paddlex-pipeline-configuration-files).

It should be noted that, due to the lack of fine-grained optimization and other reasons, the current high-stability serving deployment solution provided by PaddleOCR may not match the performance of the 2.x version based on PaddleServing. However, this new solution fully supports the PaddlePaddle 3.0 framework. We will continue to optimize it and consider introducing more performant deployment solutions in the future.
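
Aside from the diff: once a basic serving instance like the one above is running, it can be exercised with a short HTTP client. The sketch below is illustrative only; it assumes the default port 8080 from the startup log and an `/ocr` endpoint accepting a base64-encoded file, following the PaddleX basic-serving request pattern, so check the pipeline's API reference for the exact endpoint and payload fields.

```python
import base64

import requests  # third-party HTTP client (pip install requests)

# Assumptions: server listening on localhost:8080 (per the Uvicorn log above),
# an "/ocr" endpoint, and a JSON body with a base64 "file" plus "fileType"
# (1 = image) in the PaddleX basic-serving style. Verify against the API docs.
API_URL = "http://localhost:8080/ocr"

with open("img.png", "rb") as f:
    payload = {"file": base64.b64encode(f.read()).decode("ascii"), "fileType": 1}

response = requests.post(API_URL, json=payload)
response.raise_for_status()
print(response.json()["result"])
```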
@@ -1350,9 +1350,9 @@ for res in result:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -2280,9 +2280,9 @@ for res in visual_predict_res:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -2235,9 +2235,9 @@ for item in markdown_images:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -558,9 +558,9 @@ for res in output:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -405,9 +405,9 @@ If you need to apply the pipeline directly to your Python project, you can refer

In addition, PaddleOCR also provides two other deployment methods, detailed descriptions are as follows:

-🚀 High-Performance Inference: In real production environments, many applications have strict standards for the performance indicators of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleOCR provides high-performance inference capabilities, aiming to deeply optimize the performance of model inference and pre-and post-processing, achieving significant acceleration of the end-to-end process. For detailed high-performance inference processes, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-Performance Inference: In real production environments, many applications have strict standards for the performance indicators of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleOCR provides high-performance inference capabilities, aiming to deeply optimize the performance of model inference and pre-and post-processing, achieving significant acceleration of the end-to-end process. For detailed high-performance inference processes, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Service Deployment: Service deployment is a common form of deployment in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. For detailed pipeline service deployment processes, refer to the [Service Deployment Guide](../deployment/serving.md).
+☁️ Service Deployment: Service deployment is a common form of deployment in real production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. For detailed pipeline service deployment processes, refer to [Serving](../deployment/serving.md).

Below is the API reference for basic service deployment and examples of service invocation in multiple languages:

@@ -405,9 +405,9 @@ for res in output:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -1066,9 +1066,9 @@ for res in output:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -1584,9 +1584,9 @@ If you need to integrate the pipeline into your Python project, you can refer to

In addition, PaddleX also provides three other deployment methods, which are detailed as follows:

-🚀 High-Performance Inference: In real-world production environments, many applications have stringent performance requirements for deployment strategies, especially in terms of response speed, to ensure efficient system operation and a smooth user experience. To address this, PaddleOCR offers high-performance inference capabilities aimed at deeply optimizing the performance of model inference and pre/post-processing, thereby significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-Performance Inference: In real-world production environments, many applications have stringent performance requirements for deployment strategies, especially in terms of response speed, to ensure efficient system operation and a smooth user experience. To address this, PaddleOCR offers high-performance inference capabilities aimed at deeply optimizing the performance of model inference and pre/post-processing, thereby significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Service Deployment: Service deployment is a common form of deployment in real-world production environments. By encapsulating inference functionality into a service, clients can access these services via network requests to obtain inference results. For detailed production service deployment procedures, please refer to the [Service Deployment Guide](../deployment/serving.md).
+☁️ Service Deployment: Service deployment is a common form of deployment in real-world production environments. By encapsulating inference functionality into a service, clients can access these services via network requests to obtain inference results. For detailed production service deployment procedures, please refer to [Serving](../deployment/serving.md).

Below are the API references for basic serving deployment and multi-language service invocation examples:

@@ -1501,9 +1501,9 @@ for res in output:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -1730,9 +1730,9 @@ If you need to apply the model directly in your Python project, you can refer to

Additionally, PaddleOCR provides two other deployment methods, which are described in detail below:

-🚀 High-Performance Inference: In actual production environments, many applications have strict performance criteria (especially response speed) for deployment strategies to ensure efficient system operation and smooth user experience. To address this, PaddleOCR provides high-performance inference capabilities aimed at optimizing model inference and preprocessing to significantly speed up the end-to-end process. For detailed high-performance inference procedures, please refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-Performance Inference: In actual production environments, many applications have strict performance criteria (especially response speed) for deployment strategies to ensure efficient system operation and smooth user experience. To address this, PaddleOCR provides high-performance inference capabilities aimed at optimizing model inference and preprocessing to significantly speed up the end-to-end process. For detailed high-performance inference procedures, please refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Service-Oriented Deployment: Service-oriented deployment is a common form of deployment in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. For detailed service-oriented deployment procedures, please refer to the [Service-Oriented Deployment Guide](../deployment/serving.md).
+☁️ Service-Oriented Deployment: Service-oriented deployment is a common form of deployment in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. For detailed service-oriented deployment procedures, please refer to [Serving](../deployment/serving.md).

Below is the API reference for basic service-oriented deployment and examples of multilingual service calls:

@@ -1741,9 +1741,9 @@ for res in output:

In addition, PaddleOCR also provides two other deployment methods, described in detail below:

-🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.md).
+🚀 High-performance inference: In real production environments, many applications have strict performance criteria for deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. To this end, PaddleOCR provides high-performance inference, which deeply optimizes the performance of model inference and pre/post-processing to significantly accelerate the end-to-end process. For the detailed workflow, refer to [High-Performance Inference](../deployment/high_performance_inference.md).

-☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to the [Serving Guide](../deployment/serving.md).
+☁️ Serving: Serving is a common deployment form in real production environments. By encapsulating inference as a service, clients can access it via network requests to obtain inference results. For the detailed pipeline serving workflow, refer to [Serving](../deployment/serving.md).

Below are the API reference and multi-language invocation examples for basic serving:

@@ -16,7 +16,6 @@ import argparse
import logging
import subprocess
import sys

import warnings

from ._models import (

@@ -25,4 +25,7 @@ MODEL_MKLDNN_BLOCKLIST = [
    "PP-FormulaNet-L",
    "PP-FormulaNet-S",
    "UniMERNet",
+    "PP-FormulaNet_plus-L",
+    "PP-FormulaNet_plus-M",
+    "PP-FormulaNet_plus-S",
]

@@ -39,7 +39,7 @@ classifiers = [
    "Topic :: Utilities",
]
dependencies = [
-    "paddlex[ocr,ie,multimodal]==3.0.0rc1",
+    "paddlex[ocr,ie,multimodal]==3.0.0",
    "PyYAML>=6",
    "typing-extensions>=4.12",
]