PaddleOCR Banner

中文 | English



🚀 Introduction

Since its initial release, PaddleOCR has gained widespread acclaim across academia, industry, and research communities, thanks to its cutting-edge algorithms and proven performance in real-world applications. It is already powering popular open-source projects like Umi-OCR, OmniParser, MinerU, and RAGFlow, making it the go-to OCR toolkit for developers worldwide.

On May 20, 2025, the PaddlePaddle team unveiled PaddleOCR 3.0, fully compatible with the official release of the PaddlePaddle 3.0 framework. This update further boosts text-recognition accuracy, adds support for multiple text-type recognition and handwriting recognition, and meets the growing demand from large-model applications for high-precision parsing of complex documents. When combined with ERNIE 4.5 Turbo, it significantly enhances key-information extraction accuracy. PaddleOCR 3.0 also introduces support for domestic hardware platforms such as KUNLUNXIN and Ascend. For the complete usage documentation, please refer to the PaddleOCR 3.0 Documentation.

Three Major New Features in PaddleOCR 3.0:

  • Universal-Scene Text Recognition Model PP-OCRv5: A single model that handles five different text types plus complex handwriting. Overall recognition accuracy has increased by 13 percentage points over the previous generation. Online Demo

  • General Document-Parsing Solution PP-StructureV3: Delivers high-precision parsing of multi-layout, multi-scene PDFs, outperforming many open- and closed-source solutions on public benchmarks. Online Demo

  • Intelligent Document-Understanding Solution PP-ChatOCRv4: Natively powered by the ERNIE 4.5 Turbo large model, achieving 15 percentage points higher accuracy than its predecessor. Online Demo

In addition to providing an outstanding model library, PaddleOCR 3.0 also offers user-friendly tools covering model training, inference, and service deployment, so developers can rapidly bring AI applications to production.

PaddleOCR Architecture

📣 Recent updates

🔥🔥 2025.06.05: Release of PaddleOCR 3.0.1, which includes:

  • Optimisation of certain models and model configurations:

    • Updated the default model configuration for PP-OCRv5, changing both detection and recognition from mobile to server models. To improve default performance in most scenarios, the parameter limit_side_len in the configuration has been changed from 736 to 64 (see the sketch after this update list for how to override these defaults).
    • Added a new text line orientation classification model PP-LCNet_x1_0_textline_ori with an accuracy of 99.42%. The default text line orientation classifier for OCR, PP-StructureV3, and PP-ChatOCRv4 pipelines has been updated to this model.
    • Optimised the text line orientation classification model PP-LCNet_x0_25_textline_ori, improving accuracy by 3.3 percentage points to a current accuracy of 98.85%.
  • Optimizations and fixes for some issues in version 3.0.0, details
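
If the new server-model defaults are heavier than a deployment needs, the lighter setup can still be selected when constructing the pipeline. The following is a minimal sketch rather than part of the release notes: the parameter names (text_detection_model_name, text_recognition_model_name, text_det_limit_side_len) and the mobile model names are assumptions based on the PaddleOCR 3.x pipeline options and should be checked against the configuration documentation.

from paddleocr import PaddleOCR

# Sketch: explicitly select the lighter mobile models and the pre-3.0.1 side-length
# limit instead of the new server-model defaults. Parameter and model names are
# assumptions; verify them against the PaddleOCR 3.x configuration docs.
ocr = PaddleOCR(
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv5_mobile_rec",
    text_det_limit_side_len=736,
)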

🔥🔥2025.05.20: Official Release of PaddleOCR v3.0, including:

  • PP-OCRv5: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.

    1. 🌐 Single-model support for five text types - Seamlessly process Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English and Japanese within a single model.
    2. ✍️ Improved handwriting recognition: Significantly better at complex cursive scripts and non-standard handwriting.
    3. 🎯 13-point accuracy gain over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.
  • PP-StructureV3: General-Purpose Document Parsing - Unleash SOTA image/PDF parsing for real-world scenarios!

    1. 🧮 High-Accuracy multi-scene PDF parsing, leading both open- and closed-source solutions on the OmniDocBench benchmark.
    2. 🧠 Specialized capabilities include seal recognition, chart-to-table conversion, table recognition with nested formulas/images, vertical text document parsing, and complex table structure analysis.
  • PP-ChatOCRv4: Intelligent Document Understanding - Extract key information, not just text, from images/PDFs.

    1. 🔥 15-point accuracy gain in key-information extraction on PDF/PNG/JPG files over the previous generation.
    2. 💻 Native support for ERNIE 4.5 Turbo, with compatibility for large-model deployments via PaddleNLP, Ollama, vLLM, and more.
    3. 🤝 Integrated PP-DocBee2, enabling extraction and understanding of printed text, handwriting, seals, tables, charts, and other common elements in complex documents.
The history of updates
  • 🔥🔥2025.03.07: Release of PaddleOCR v2.10, including:

    • 12 new self-developed models:
      • Layout Detection series (3 models): PP-DocLayout-L, M, and S -- capable of detecting 23 common layout types across diverse document formats (papers, reports, exams, books, magazines, contracts, etc.) in English and Chinese. Achieves up to 90.4% mAP@0.5, and the lightweight models can process over 100 pages per second.
      • Formula Recognition series (2 models): PP-FormulaNet-L and S -- supports recognition of 50,000+ LaTeX expressions, handling both printed and handwritten formulas. PP-FormulaNet-L offers 6% higher accuracy than comparable models; PP-FormulaNet-S is 16x faster while maintaining similar accuracy.
      • Table Structure Recognition series (2 models): SLANeXt_wired and SLANeXt_wireless -- newly developed models with a 6% accuracy improvement over SLANet_plus in complex table recognition.
      • Table Classification (1 model): PP-LCNet_x1_0_table_cls -- an ultra-lightweight classifier for wired and wireless tables.

Learn more

Quick Start

1. Run online demo

AI Studio AI Studio AI Studio

2. Installation

Install PaddlePaddle by referring to the Installation Guide, and then install the PaddleOCR toolkit.
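
For example, a CPU-only build of PaddlePaddle can be installed directly from PyPI; this is a minimal sketch, so pick the GPU/CUDA-specific wheel from the Installation Guide if you have a compatible GPU.

# Install the CPU build of the PaddlePaddle framework
pip install paddlepaddle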

# Install paddleocr
pip install paddleocr

3. Run inference by CLI

# Run PP-OCRv5 inference
paddleocr ocr -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation False  

# Run PP-StructureV3 inference
paddleocr pp_structurev3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png --use_doc_orientation_classify False --use_doc_unwarping False

# Get a Qianfan API key first, and then run PP-ChatOCRv4 inference
paddleocr pp_chatocrv4_doc -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png -k 驾驶室准乘人数 --qianfan_api_key your_api_key --use_doc_orientation_classify False --use_doc_unwarping False 

# Get more information about "paddleocr ocr"
paddleocr ocr --help
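
The -i option also accepts local image and PDF paths. The sketch below additionally assumes a --save_path option for writing results to disk; the file name is a placeholder, so confirm the exact flags with paddleocr ocr --help.

# Run PP-OCRv5 on a local image and write the results to ./output
# (./my_scan.png is a placeholder; --save_path is assumed, verify via --help)
paddleocr ocr -i ./my_scan.png --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation False --save_path ./output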

4. Run inference by API

4.1 PP-OCRv5 Example

from paddleocr import PaddleOCR

# Initialize PaddleOCR instance
ocr = PaddleOCR(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False)

# Run OCR inference on a sample image 
result = ocr.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")

# Visualize the results and save the JSON results
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
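
To work with the recognized text programmatically, one option is to read back the JSON files written by save_to_json. This is a minimal sketch rather than part of the official example; the key names rec_texts and rec_scores are assumptions, so inspect one saved file (or the res.print() output) to confirm the layout.

import json
from pathlib import Path

# Read every JSON result saved to ./output and print score/text pairs.
# The keys "rec_texts" and "rec_scores" are assumed; adjust after inspecting a file.
for json_path in Path("output").glob("*.json"):
    data = json.loads(json_path.read_text(encoding="utf-8"))
    for text, score in zip(data.get("rec_texts", []), data.get("rec_scores", [])):
        print(f"{score:.3f}  {text}")
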
4.2 PP-StructureV3 Example

from pathlib import Path
from paddleocr import PPStructureV3

pipeline = PPStructureV3()

# For Image
output = pipeline.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
    )

# Visualize the results and save the JSON results
for res in output:
    res.print() 
    res.save_to_json(save_path="output") 
    res.save_to_markdown(save_path="output")           
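
PP-StructureV3 also accepts multi-page PDFs, which is where the pathlib import above is useful for local file handling; each page yields its own result object. The following is a minimal sketch along the lines of the image example, with a placeholder input path.

from paddleocr import PPStructureV3

pipeline = PPStructureV3()

# Sketch: parse a local multi-page PDF (placeholder path) page by page.
output = pipeline.predict(
    input="./your_document.pdf",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

for res in output:
    res.save_to_json(save_path="output")
    res.save_to_markdown(save_path="output")
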
4.3 PP-ChatOCRv4 Example

from paddleocr import PPChatOCRv4Doc

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

pipeline = PPChatOCRv4Doc(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
)

visual_predict_res = pipeline.visual_predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

mllm_predict_info = None
use_mllm = False
# If a multimodal large model is used, the local mllm service needs to be started first. Refer to the documentation (https://github.com/PaddlePaddle/PaddleX/blob/release/3.0/docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.md) for deployment, and update the mllm_chat_bot_config accordingly.
if use_mllm:
    mllm_chat_bot_config = {
        "module_name": "chat_bot",
        "model_name": "PP-DocBee",
        "base_url": "http://127.0.0.1:8080/",  # your local mllm service url
        "api_type": "openai",
        "api_key": "api_key",  # your api_key
    }

    mllm_predict_res = pipeline.mllm_pred(
        input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
        key_list=["驾驶室准乘人数"],
        mllm_chat_bot_config=mllm_chat_bot_config,
    )
    mllm_predict_info = mllm_predict_res["mllm_res"]

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)
chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)

5. Domestic AI Accelerators
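
PaddleOCR 3.0 runs on domestic AI accelerators such as KUNLUNXIN and Ascend; see the dedicated hardware installation guides for the supported drivers and plugins. As a hedged sketch, a pipeline is typically pointed at such a device through a device argument; the argument name and the device strings below are assumptions that depend on the installed plugin.

from paddleocr import PaddleOCR

# Sketch: run the OCR pipeline on an accelerator. The "device" argument and the
# "npu:0"/"xpu:0" strings are assumptions; follow the hardware-specific guides.
ocr = PaddleOCR(
    device="npu:0",  # e.g. Ascend NPU; "xpu:0" for KUNLUNXIN
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False,
)
result = ocr.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")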

⛰️ Advanced Tutorials

🔄 Quick Overview of Execution Results

PP-OCRv5 Demo

PP-StructureV3 Demo

👩‍👩‍👧‍👦 Community

PaddlePaddle WeChat official account · Join the tech discussion group

😃 Awesome Projects Leveraging PaddleOCR

PaddleOCR wouldn't be where it is today without its incredible community! 💗 A massive thank you to all our longtime partners, new collaborators, and everyone who's poured their passion into PaddleOCR, whether we've named you or not. Your support fuels our fire!

Project Name | Description
RAGFlow | RAG engine based on deep document understanding.
MinerU | Multi-type document to Markdown conversion tool.
Umi-OCR | Free, open-source, batch offline OCR software.
OmniParser | Screen parsing tool for a pure-vision-based GUI agent.
QAnything | Question and answer based on anything.
PDF-Extract-Kit | A powerful open-source toolkit designed to efficiently extract high-quality content from complex and diverse PDF documents.
Dango-Translator | Recognizes text on the screen, translates it, and shows the translation results in real time.
Learn more projects | More projects based on PaddleOCR.

👩‍👩‍👧‍👦 Contributors

🌟 Star

Star History Chart

📄 License

This project is released under the Apache 2.0 license.

🎓 Citation

@misc{paddleocr2020,
title={PaddleOCR, Awesome multilingual OCR toolkits based on PaddlePaddle.},
author={PaddlePaddle Authors},
howpublished = {\url{https://github.com/PaddlePaddle/PaddleOCR}},
year={2020}
}