🚀 Introduction
Since its initial release, PaddleOCR has gained widespread acclaim across academia, industry, and research communities, thanks to its cutting-edge algorithms and proven performance in real-world applications. It’s already powering popular open-source projects like Umi-OCR, OmniParser, MinerU, and RAGFlow, making it the go-to OCR toolkit for developers worldwide.
On May 20, 2025, the PaddlePaddle team unveiled PaddleOCR 3.0, fully compatible with the official release of the PaddlePaddle 3.0 framework. This update further boosts text-recognition accuracy, adds support for multiple text types and handwriting recognition, and meets the growing demand from large-model applications for high-precision parsing of complex documents. When combined with ERNIE 4.5 Turbo, it significantly improves key-information extraction accuracy. PaddleOCR 3.0 also introduces support for domestic AI accelerators such as KUNLUNXIN and Ascend.
Three Major New Features in PaddleOCR 3.0
- 🖼️ Universal-Scene Text Recognition Model PP-OCRv5: A single model that handles five different text types plus complex handwriting. Overall recognition accuracy is 13 percentage points higher than the previous generation.
- 🧮 General Document-Parsing Solution PP-StructureV3: Delivers high-precision parsing of multi-layout, multi-scene PDFs, outperforming many open- and closed-source solutions on public benchmarks.
- 📈 Intelligent Document-Understanding Solution PP-ChatOCRv4: Natively powered by ERNIE 4.5 Turbo, achieving 15 percentage points higher accuracy than its predecessor.
In addition to providing an outstanding model library, PaddleOCR 3.0 also offers user-friendly tools covering model training, inference, and service deployment, so developers can rapidly bring AI applications to production.
📣 Recent updates
🔥🔥2025.05.20: Official Release of PaddleOCR v3.0, including:
- PP-OCRv5: High-Accuracy Text Recognition Model for All Scenarios – Instant Text from Images/PDFs.
    - 🌐 Single-model support for five text types – seamlessly process Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English, and Japanese within a single model.
    - ✍️ Improved handwriting recognition: significantly better at complex cursive scripts and non-standard handwriting.
    - 🎯 13-percentage-point accuracy gain over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.
- PP-StructureV3: General-Purpose Document Parsing – Unleash SOTA Image/PDF Parsing for Real-World Scenarios!
    - 🧮 High-accuracy multi-scene PDF parsing, leading both open- and closed-source solutions on the OmniDocBench benchmark.
    - 🧠 Specialized capabilities include seal recognition, chart-to-table conversion, table recognition with nested formulas/images, vertical-text document parsing, and complex table structure analysis (a hedged configuration sketch follows this list).
- PP-ChatOCRv4: Intelligent Document Understanding – Extract Key Information, Not Just Text, from Images/PDFs.
    - 🔥 15-percentage-point accuracy gain in key-information extraction on PDF/PNG/JPG files over the previous generation.
    - 💻 Native support for ERNIE 4.5 Turbo, with compatibility for large-model deployments via PaddleNLP, Ollama, vLLM, and more.
    - 🤝 Integrated PP-DocBee2, enabling extraction and understanding of printed text, handwriting, seals, tables, charts, and other common elements in complex documents.
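These specialized capabilities are typically exposed as pipeline switches. Below is a minimal, hedged sketch; it assumes the `PPStructureV3` constructor accepts feature flags named like the `visual_predict` switches used in the Quick Start, so verify the exact names against your installed version.

```python
from paddleocr import PPStructureV3

# Assumed flag names; they mirror the visual_predict switches used by
# PPChatOCRv4Doc in the Quick Start and may differ in your version.
pipeline = PPStructureV3(
    use_seal_recognition=True,
    use_table_recognition=True,
    use_formula_recognition=True,
)
for res in pipeline.predict("contract_with_seal.png"):  # hypothetical input file
    res.print()
```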
The history of updates
- 🔥🔥2025.03.07: Release of PaddleOCR v2.10, including:
    - 12 new self-developed models:
        - Layout Detection series (3 models): PP-DocLayout-L, M, and S -- capable of detecting 23 common layout types across diverse document formats (papers, reports, exams, books, magazines, contracts, etc.) in English and Chinese. Achieves up to 90.4% mAP@0.5, and the lightweight model can process over 100 pages per second. (A standalone usage sketch follows this list.)
        - Formula Recognition series (2 models): PP-FormulaNet-L and S -- support recognition of 50,000+ LaTeX expressions, handling both printed and handwritten formulas. PP-FormulaNet-L offers 6% higher accuracy than comparable models; PP-FormulaNet-S is 16x faster while maintaining similar accuracy.
        - Table Structure Recognition series (2 models): SLANeXt_wired and SLANeXt_wireless -- newly developed models with a 6% accuracy improvement over SLANet_plus on complex table recognition.
        - Table Classification (1 model): PP-LCNet_x1_0_table_cls -- an ultra-lightweight classifier for wired and wireless tables.
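The layout models above can also be run on their own. A minimal sketch, assuming the current `paddleocr` package exposes a `LayoutDetection` module API that accepts a `model_name`; `document_page.png` is a hypothetical input:

```python
from paddleocr import LayoutDetection

# Detect layout regions (text, tables, figures, ...) on a single page.
model = LayoutDetection(model_name="PP-DocLayout-L")
output = model.predict("document_page.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="output")
    res.save_to_json(save_path="output")
```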
⚡ Quick Start
1. Run online demo without installation
2. Installation
First, please install PaddlePaddle using the official Installation Guide.
Then, install the PaddleOCR toolkit.
```bash
# 1. Install paddleocr
pip install paddleocr

# 2. Self-check after installation is complete
paddleocr --version
```
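You can also sanity-check the installation from Python. This assumes the package exports `__version__`, as recent releases do:

```python
# Quick import check; __version__ is assumed to be exported by the package.
import paddleocr

print(paddleocr.__version__)
```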
3. Domestic AI Accelerators
| Model | Ascend | KUNLUNXIN | More... under development |
|---|---|---|---|
| PP-OCRv5 | ✅ | ✅ | |
| PP-StructureV3 | ✅ | ✅ | |
| PP-ChatOCRv4 | ✅ | ✅ | |
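To target one of these accelerators, a pipeline can be pinned to a device. A hedged sketch, assuming the constructors accept a PaddlePaddle-style `device` string (`npu:0` for Ascend, `xpu:0` for KUNLUNXIN):

```python
from paddleocr import PaddleOCR

# "npu:0" targets Ascend and "xpu:0" targets KUNLUNXIN under the assumed
# PaddlePaddle device-string convention; use "gpu:0" or "cpu" elsewhere.
ocr = PaddleOCR(device="npu:0")
result = ocr.predict("general_ocr_002.png")  # hypothetical local image
```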
4. Run inference by CLI
```bash
# Run PP-OCRv5 inference
paddleocr ocr -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png

# Run PP-StructureV3 inference
paddleocr PP-StructureV3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png

# Get a Qianfan API key first, then run PP-ChatOCRv4 inference
# (the -k value 驾驶室准乘人数 means "approved passenger capacity of the cab")
paddleocr pp_chatocrv4_doc -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png -k 驾驶室准乘人数 --qianfan_api_key your_api_key

# Get more information about "paddleocr ocr"
paddleocr ocr --help
```
5. Run inference by API
5.1 PP-OCRv5 Example
```python
from paddleocr import PaddleOCR

# Initialize PaddleOCR instance
ocr = PaddleOCR()

# Run OCR inference on a sample image
result = ocr.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")

# Visualize the results and save the JSON results
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
```
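Beyond saving files, each result also carries a dict view of the prediction. Continuing the example above, a hedged sketch of pulling out just the recognized strings; it assumes a `json` property whose payload contains a `rec_texts` list (possibly nested under a `res` key, depending on the version):

```python
# Hedged: the exact field layout may vary across versions.
for res in result:
    data = res.json                # dict view of the prediction (assumed)
    data = data.get("res", data)   # unwrap if the payload is nested
    for text in data.get("rec_texts", []):
        print(text)
```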
5.2 PP-StructureV3 Example
```python
from pathlib import Path
from paddleocr import PPStructureV3

pipeline = PPStructureV3()

# For Image
output = pipeline.predict("https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png")

# Visualize the results and save the JSON results
for res in output:
    res.print()
    res.save_to_json(save_path="output")
    res.save_to_markdown(save_path="output")

# For PDF File
input_file = "./your_pdf_file.pdf"
output_path = Path("./output")

output = pipeline.predict(input_file)

markdown_list = []
markdown_images = []
for res in output:
    md_info = res.markdown
    markdown_list.append(md_info)
    markdown_images.append(md_info.get("markdown_images", {}))

markdown_texts = pipeline.concatenate_markdown_pages(markdown_list)

mkd_file_path = output_path / f"{Path(input_file).stem}.md"
mkd_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(mkd_file_path, "w", encoding="utf-8") as f:
    f.write(markdown_texts)

for item in markdown_images:
    if item:
        for path, image in item.items():
            file_path = output_path / path
            file_path.parent.mkdir(parents=True, exist_ok=True)
            image.save(file_path)
```
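The PDF branch works page by page: `predict` yields one result per page, `concatenate_markdown_pages` stitches the per-page markdown into a single document, and the final loop writes out the extracted images so that the markdown's relative image links resolve.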
5.3 PP-ChatOCRv4 Example
```python
from paddleocr import PPChatOCRv4Doc

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

mllm_chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "PP-DocBee",
    "base_url": "http://127.0.0.1:8080/",  # your local mllm service url
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

pipeline = PPChatOCRv4Doc()

visual_predict_res = pipeline.visual_predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)

mllm_predict_res = pipeline.mllm_pred(
    input="vehicle_certificate-1.png",
    key_list=["驾驶室准乘人数"],
    mllm_chat_bot_config=mllm_chat_bot_config,
)
mllm_predict_info = mllm_predict_res["mllm_res"]

chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```
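The example runs in four stages: `visual_predict` performs layout parsing and OCR on the document, `build_vector` indexes the extracted visual information for retrieval, `mllm_pred` queries the locally deployed multimodal model (PP-DocBee here) for the requested keys, and `chat` fuses the retrieved context with both model outputs into the final key-value answer.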
😃 Awesome Projects Leveraging PaddleOCR
💗 PaddleOCR wouldn’t be where it is today without its incredible community! A massive 🙌 thank you 🙌 to all our longtime partners, new collaborators, and everyone who’s poured their passion into PaddleOCR — whether we’ve named you or not. Your support fuels our fire! 🔥
| Project Name | Description |
|---|---|
| RAGFlow | RAG engine based on deep document understanding. |
| MinerU | Multi-type Document to Markdown Conversion Tool. |
| Umi-OCR | Free, Open-source, Batch Offline OCR Software. |
| OmniParser | Screen Parsing tool for Pure Vision Based GUI Agent. |
| QAnything | Question and Answer based on Anything. |
| PDF-Extract-Kit | A powerful open-source toolkit designed to efficiently extract high-quality content from complex and diverse PDF documents. |
| Dango-Translator | Recognize text on the screen, translate it and show the translation results in real time. |
| Learn more projects | More projects based on PaddleOCR |
🔄 Quick Overview of Execution Results
👩👩👧👦 Community
- 👫 Join the PaddlePaddle Community, where you can engage with PaddlePaddle developers, researchers, and enthusiasts from around the world.
- 🎓 Learn from experts through workshops, tutorials, and Q&A sessions hosted by AI Studio.
- 🏆 Participate in hackathons, challenges, and competitions to showcase your skills and win exciting prizes.
- 📣 Stay updated with the latest news, announcements, and events by following our Twitter and WeChat. Let’s build the future of AI together! 🚀
📄 License
This project is released under the Apache License Version 2.0.