OCR is a technology that converts text from images into editable text. It is widely used in fields such as document digitization, information extraction, and data processing. OCR can recognize printed text, handwritten text, and even certain types of fonts and symbols.
The general OCR pipeline extracts text information from images and outputs it in text form. It supports the PP-OCRv3, PP-OCRv4, and PP-OCRv5 models; the default is the PP-OCRv5_mobile model released with PaddleOCR 3.0, which improves accuracy by 13 percentage points over PP-OCRv4_mobile across a range of scenarios.
<b>The general OCR pipeline consists of the following 5 modules. Each module can be trained and run for inference independently, and each includes multiple models. For details, click a module to view its documentation.</b>
<td rowspan="2">PP-OCRv5_rec is a next-generation text recognition model. It aims to efficiently and accurately support the recognition of four major languages—Simplified Chinese, Traditional Chinese, English, and Japanese—as well as complex text scenarios such as handwriting, vertical text, pinyin, and rare characters using a single model. While maintaining recognition performance, it balances inference speed and model robustness, providing efficient and accurate technical support for document understanding in various scenarios.</td>
<td>PP-OCRv4_server_rec_doc is trained on a mixed dataset of more Chinese document data and PP-OCR training data, building upon PP-OCRv4_server_rec. It enhances the recognition capabilities for some Traditional Chinese characters, Japanese characters, and special symbols, supporting over 15,000 characters. In addition to improving document-related text recognition, it also enhances general text recognition capabilities.</td>
<td>A lightweight recognition model of PP-OCRv4 with high inference efficiency, suitable for deployment on various hardware devices, including edge devices.</td>
<td>An ultra-lightweight English recognition model trained based on the PP-OCRv4 recognition model, supporting English and numeric character recognition.</td>
> ❗ The above section lists the **6 core models** that are primarily supported by the text recognition module. In total, the module supports **20 comprehensive models**, including multiple multilingual text recognition models. Below is the complete list of models:
<td rowspan="2">PP-OCRv5_rec is a next-generation text recognition model. It aims to efficiently and accurately support the recognition of four major languages—Simplified Chinese, Traditional Chinese, English, and Japanese—as well as complex text scenarios such as handwriting, vertical text, pinyin, and rare characters using a single model. While maintaining recognition performance, it balances inference speed and model robustness, providing efficient and accurate technical support for document understanding in various scenarios.</td>
SVTRv2, developed by FVL's OpenOCR team, won first prize in the PaddleOCR Algorithm Challenge, improving end-to-end recognition accuracy by 6% over PP-OCRv4.
</td>
</tr>
</table>
<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Accuracy(%)</th>
<th>GPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<th>CPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<td rowspan="1">RepSVTR, a mobile-optimized version of SVTRv2, won first prize in the PaddleOCR Challenge, improving accuracy by 2.5% over PP-OCRv4 with comparable speed.</td>
</tr>
</table>
<b>English Recognition Models</b>
<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Accuracy(%)</th>
<th>GPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<th>CPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<li>Text Detection Model: PaddleOCR in-house Chinese dataset covering street views, web images, documents, and handwriting, with 500 images for detection.</li>
<li>Chinese Recognition Model: PaddleOCR in-house Chinese dataset covering street views, web images, documents, and handwriting, with 11,000 images for recognition.</li>
<b>If you prioritize model accuracy, choose models with higher accuracy; if inference speed is critical, select faster models; if model size matters, opt for smaller models.</b>
## 2. Quick Start
Before using the general OCR pipeline locally, ensure you have installed the wheel package by following the [Installation Guide](../installation.en.md). Once installed, you can experience OCR via the command line or Python integration.
### 2.1 Command Line
Run a single command to quickly test the OCR pipeline:
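For example, the following invocation is a minimal sketch (the image path `./sample.png` is a placeholder; the flags mirror the parameters described below):

```shell
# Run OCR on a local image, disabling the optional document
# preprocessing and text line orientation steps.
paddleocr ocr -i ./sample.png \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False \
    --use_textline_orientation False
```

The prediction results are printed to the terminal; the parameters accepted by the command are listed in the table below.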
<td>Whether to enable document orientation classification. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_side_len</code></td>
<td>Image side length limit for text detection.
<ul>
<li><b>int</b>: Any integer > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>960</code>).</li>
</ul>
</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_type</code></td>
<td>Side length limit type for text detection.
<ul>
<li><b>str</b>: Supports <code>min</code> (ensures the shortest side ≥ <code>limit_side_len</code>) or <code>max</code> (ensures the longest side ≤ <code>limit_side_len</code>);</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>max</code>).</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_thresh</code></td>
<td>Pixel threshold for text detection. Pixels with scores > this threshold are considered text.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.3</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_box_thresh</code></td>
<td>Box threshold for text detection. Detected regions with average scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.6</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Expansion ratio for text detection. Larger values expand text regions more.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>2.0</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_input_shape</code></td>
<td>Input shape for text detection.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_score_thresh</code></td>
<td>Score threshold for text recognition. Results with scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.0</code>, no threshold).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_input_shape</code></td>
<td>Input shape for text recognition.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>lang</code></td>
<td>Specifies the OCR model language.
<ul>
<li><b>ch</b>: Chinese;</li>
<li><b>en</b>: English;</li>
<li><b>korean</b>: Korean;</li>
<li><b>japan</b>: Japanese;</li>
<li><b>chinese_cht</b>: Traditional Chinese;</li>
<li><b>te</b>: Telugu;</li>
<li><b>ka</b>: Kannada;</li>
<li><b>ta</b>: Tamil;</li>
<li><b>None</b>: If <code>None</code>, defaults to <code>ch</code>.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>ocr_version</code></td>
<td>OCR model version.
<ul>
<li><b>PP-OCRv5</b>: Uses PP-OCRv5 models;</li>
<li><b>PP-OCRv4</b>: Uses PP-OCRv4 models;</li>
<li><b>PP-OCRv3</b>: Uses PP-OCRv3 models;</li>
<li><b>None</b>: If <code>None</code>, defaults to PP-OCRv5 models.</li>
<td>Whether to enable document orientation classification. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_side_len</code></td>
<td>Image side length limit for text detection.
<ul>
<li><b>int</b>: Any integer > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>960</code>).</li>
</ul>
</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_type</code></td>
<td>Side length limit type for text detection.
<ul>
<li><b>str</b>: Supports <code>min</code> (ensures the shortest side ≥ <code>limit_side_len</code>) or <code>max</code> (ensures the longest side ≤ <code>limit_side_len</code>);</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>max</code>).</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_thresh</code></td>
<td>Pixel threshold for text detection. Pixels with scores > this threshold are considered text.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.3</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_box_thresh</code></td>
<td>Box threshold for text detection. Detected regions with average scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.6</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Expansion ratio for text detection. Larger values expand text regions more.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>2.0</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_input_shape</code></td>
<td>Input shape for text detection.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_score_thresh</code></td>
<td>Score threshold for text recognition. Results with scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.0</code>, no threshold).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_input_shape</code></td>
<td>Input shape for text recognition.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>lang</code></td>
<td>Specifies the OCR model language.
<ul>
<li><b>ch</b>: Chinese;</li>
<li><b>en</b>: English;</li>
<li><b>korean</b>: Korean;</li>
<li><b>japan</b>: Japanese;</li>
<li><b>chinese_cht</b>: Traditional Chinese;</li>
<li><b>te</b>: Telugu;</li>
<li><b>ka</b>: Kannada;</li>
<li><b>ta</b>: Tamil;</li>
<li><b>None</b>: If <code>None</code>, defaults to <code>ch</code>.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>ocr_version</code></td>
<td>OCR model version.
<ul>
<li><b>PP-OCRv5</b>: Uses PP-OCRv5 models;</li>
<li><b>PP-OCRv4</b>: Uses PP-OCRv4 models;</li>
<li><b>PP-OCRv3</b>: Uses PP-OCRv3 models;</li>
<li><b>None</b>: If <code>None</code>, defaults to PP-OCRv5 models.</li>
<details><summary>(2) Call the <code>predict()</code> method for inference. Alternatively, <code>predict_iter()</code> returns a generator for memory-efficient batch processing. Parameters:</summary>
<td>Whether to enable document orientation classification during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_det_limit_side_len</code></td>
<td>Same as initialization.</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_det_limit_type</code></td>
<td>Same as initialization.</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_det_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_det_box_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<td><code>text_rec_score_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</table>
</details>
<details><summary>(3) Processing prediction results: Each sample's prediction result is a corresponding Result object, supporting printing, saving as images, and saving as <code>json</code> files:</summary>
<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Explanation</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td rowspan="3"><code>print()</code></td>
<td rowspan="3">Print results to terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format output with <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level for prettifying <code>JSON</code> output (only when <code>format_json=True</code>)</td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code> (only when <code>format_json=True</code>)</td>
<td><code>False</code></td>
</tr>
<tr>
<td rowspan="3"><code>save_to_json()</code></td>
<td rowspan="3">Save results as JSON file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Output file path (uses input filename when directory specified)</td>
<td>None</td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level for prettifying <code>JSON</code> output (only when <code>format_json=True</code>)</td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters (only when <code>format_json=True</code>)</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>save_to_img()</code></td>
<td>Save results as image file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Output path (supports directory or file path)</td>
<td>None</td>
</tr>
</table>
- The <code>print()</code> method outputs results to terminal with the following structure:
- <code>unclip_ratio</code>: <code>(float)</code> Text region expansion ratio
- <code>text_type</code>: <code>(str)</code> Fixed as "general"
- <code>textline_orientation_angles</code>: <code>(List[int])</code> Text line orientation predictions (actual angles when enabled, [-1,-1,-1] when disabled)
- <code>text_rec_score_thresh</code>: <code>(float)</code> Text recognition score threshold
- <code>rec_texts</code>: <code>(List[str])</code> Recognized texts (filtered by <code>text_rec_score_thresh</code>)
- Directory: saves as <code>save_path/{your_img_basename}_ocr_res_img.{your_img_extension}</code>
- File: saves directly (not recommended for multiple images to avoid overwriting)
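To illustrate how <code>text_rec_score_thresh</code> filters recognition results, here is a small pure-Python sketch (not PaddleOCR code; the function name is hypothetical):

```python
def filter_rec_results(rec_texts, rec_scores, text_rec_score_thresh=0.0):
    """Keep only texts whose recognition score is strictly greater than
    the threshold, mirroring how rec_texts is filtered."""
    return [
        (text, score)
        for text, score in zip(rec_texts, rec_scores)
        if score > text_rec_score_thresh
    ]

texts = ["Hello", "W0rld", "###"]
scores = [0.98, 0.65, 0.12]
print(filter_rec_results(texts, scores, text_rec_score_thresh=0.5))
# → [('Hello', 0.98), ('W0rld', 0.65)]
```

With the default threshold of <code>0.0</code>, effectively all non-degenerate results are retained.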
* Additionally, results with visualizations and predictions can be obtained through the following attributes:
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td rowspan="1"><code>json</code></td>
<td rowspan="1">Retrieves prediction results in <code>json</code> format</td>
</tr>
<tr>
<td rowspan="2"><code>img</code></td>
<td rowspan="2">Retrieves visualized images in <code>dict</code> format</td>
</tr>
</table>
- The `json` attribute returns prediction results as a dict, with content identical to what's saved by the `save_to_json()` method.
- The `img` attribute returns prediction results as a dictionary containing two `Image.Image` objects under keys `ocr_res_img` (OCR result visualization) and `preprocessed_img` (preprocessing visualization). If the image preprocessing submodule isn't used, only `ocr_res_img` will be present.
</details>
## 3. Development Integration/Deployment
If the general OCR pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
If you need to apply the general OCR pipeline directly in your Python project, you can refer to the sample code in [2.2 Python Script Integration](#22-python-script-intergration).
Additionally, PaddleOCR provides two other deployment methods, detailed as follows:
🚀 **High-Performance Inference**: In real-world production environments, many applications have stringent performance requirements (especially for response speed) to ensure system efficiency and smooth user experience. To address this, PaddleOCR offers high-performance inference capabilities, which deeply optimize model inference and pre/post-processing to achieve significant end-to-end speed improvements. For detailed high-performance inference workflows, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.en.md).
☁️ **Service Deployment**: Service deployment is a common form of deployment in production environments. By encapsulating inference functionality as a service, clients can access these services via network requests to obtain inference results. For detailed pipeline service deployment workflows, refer to the [Service Deployment Guide](../deployment/serving.en.md).
Below is the API reference for basic service deployment, along with multi-language service call examples:
<details><summary>API Reference</summary>
<p>For the main operations provided by the service:</p>
<ul>
<li>The HTTP request method is POST.</li>
<li>Both the request body and response body are JSON data (JSON objects).</li>
<li>When the request is processed successfully, the response status code is <code>200</code>, and the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>logId</code></td>
<td><code>string</code></td>
<td>UUID of the request.</td>
</tr>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Fixed as <code>0</code>.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error message. Fixed as <code>"Success"</code>.</td>
</tr>
<tr>
<td><code>result</code></td>
<td><code>object</code></td>
<td>Operation result.</td>
</tr>
</tbody>
</table>
<ul>
<li>When the request fails, the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>logId</code></td>
<td><code>string</code></td>
<td>UUID of the request.</td>
</tr>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Same as the response status code.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error message.</td>
</tr>
</tbody>
</table>
<p>The main operations provided by the service are as follows:</p>
<ul>
<li><b><code>infer</code></b></li>
</ul>
<p>Obtain OCR results for an image.</p>
<p><code>POST /ocr</code></p>
<ul>
<li>The request body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>file</code></td>
<td><code>string</code></td>
<td>A server-accessible URL to an image or PDF file, or the Base64-encoded content of such a file. By default, for PDF files with more than 10 pages, only the first 10 pages are processed.<br/> To remove the page limit, add the following configuration to the pipeline config file:
<pre><code>Serving:
extra:
max_num_input_imgs: null
</code></pre>
</td>
<td>Yes</td>
</tr>
<tr>
<td><code>fileType</code></td>
<td><code>integer</code> | <code>null</code></td>
<td>File type. <code>0</code> for PDF, <code>1</code> for image. If omitted, the type is inferred from the URL.</td>
<td>No</td>
</tr>
<tr>
<td><code>useDocOrientationClassify</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_doc_orientation_classify</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>useDocUnwarping</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_doc_unwarping</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<tr>
<td><code>useTextlineOrientation</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_textline_orientation</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetLimitSideLen</code></td>
<td><code>integer</code> | <code>null</code></td>
<td>Refer to the <code>text_det_limit_side_len</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetLimitType</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Refer to the <code>text_det_limit_type</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetBoxThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_box_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetUnclipRatio</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_unclip_ratio</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textRecScoreThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_rec_score_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
</tbody>
</table>
<ul>
<li>When the request is successful, the <code>result</code> in the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>ocrResults</code></td>
<td><code>array</code></td>
<td>OCR results. The array length is 1 (for image input) or the number of processed document pages (for PDF input). For PDF input, each element represents the result for a corresponding page.</td>
</tr>
<tr>
<td><code>dataInfo</code></td>
<td><code>object</code></td>
<td>Input data information.</td>
</tr>
</tbody>
</table>
<p>Each element in <code>ocrResults</code> is an <code>object</code> with the following attributes:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>prunedResult</code></td>
<td><code>object</code></td>
<td>A simplified version of the <code>res</code> field in the JSON output of the pipeline object's <code>predict</code> method, excluding <code>input_path</code> and <code>page_index</code>.</td>
</tr>
<tr>
<td><code>ocrImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>OCR result image with detected text regions highlighted. JPEG format, Base64-encoded.</td>
</tr>
<tr>
<td><code>docPreprocessingImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Visualization of preprocessing results. JPEG format, Base64-encoded.</td>
</tr>
<tr>
<td><code>inputImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Input image. JPEG format, Base64-encoded.</td>
</tr>
</tbody>
</table>
</details>
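As one concrete illustration of calling the service, the sketch below assembles the request body described above in Python and posts it with the standard library (the URL <code>http://localhost:8080/ocr</code> is an assumption; adjust it to your deployment):

```python
import base64
import json
import urllib.request


def build_ocr_request(image_path, file_type=1, text_rec_score_thresh=None):
    """Assemble the JSON body for POST /ocr: a Base64-encoded file
    plus optional parameters (1 = image, 0 = PDF)."""
    with open(image_path, "rb") as f:
        payload = {
            "file": base64.b64encode(f.read()).decode("ascii"),
            "fileType": file_type,
        }
    if text_rec_score_thresh is not None:
        payload["textRecScoreThresh"] = text_rec_score_thresh
    return payload


def call_ocr(url, payload):
    """POST the JSON payload and return the `result` object on success."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]


# Example usage against a running service:
#   body = build_ocr_request("sample.png")
#   result = call_ocr("http://localhost:8080/ocr", body)
#   for page in result["ocrResults"]:
#       print(page["prunedResult"])
```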
<details><summary>Multi-Language Service Call Examples</summary>
If the default model weights provided by the general OCR pipeline do not meet your accuracy or speed expectations in your scenario, you can fine-tune the existing models with your own domain-specific or application-specific data to improve recognition performance in your use case.
The general OCR pipeline consists of multiple modules. If the pipeline's performance does not meet expectations, the issue may stem from any of these modules. You can analyze poorly recognized images to identify the problematic module and refer to the corresponding fine-tuning tutorials in the table below for adjustments.
After fine-tuning the model with your private dataset, you will obtain local model weight files. You can then use these fine-tuned weights by customizing the pipeline configuration file.
1. **Obtain the Pipeline Configuration File**
Call the `export_paddlex_config_to_yaml` method of the **General OCR Pipeline** object in PaddleOCR to export the current pipeline configuration as a YAML file:
After obtaining the default pipeline configuration file, replace the paths of the default model weights with the local paths of your fine-tuned model weights. For example:
```yaml
......
SubModules:
TextDetection:
box_thresh: 0.6
limit_side_len: 960
limit_type: max
max_side_limit: 4000
model_dir: null # Replace with the path to your fine-tuned text detection model weights
model_name: PP-OCRv5_server_det
module_name: text_detection
thresh: 0.3
unclip_ratio: 1.5
TextLineOrientation:
batch_size: 6
model_dir: null
model_name: PP-LCNet_x0_25_textline_ori
module_name: textline_orientation
TextRecognition:
batch_size: 6
model_dir: null # Replace with the path to your fine-tuned text recognition model weights
model_name: PP-OCRv5_server_rec
module_name: text_recognition
score_thresh: 0.0
......
```
The pipeline configuration file includes not only the parameters supported by the PaddleOCR CLI and Python API but also advanced configurations. For detailed instructions, refer to the [PaddleX Pipeline Usage Overview](https://paddlepaddle.github.io/PaddleX/3.0/pipeline_usage/pipeline_develop_guide.html) and adjust the configurations as needed.
3. **Load the Configuration File in CLI**
After modifying the configuration file, specify its path using the `--paddlex_config` parameter in the command line. PaddleOCR will read the file and apply the configurations. Example:
```bash
paddleocr ocr --paddlex_config PaddleOCR.yaml ...
```
4. **Load the Configuration File in Python API**
When initializing the pipeline object, pass the path of the PaddleX pipeline configuration file or a configuration dictionary via the `paddlex_config` parameter. PaddleOCR will read and apply the configurations. Example:
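A minimal sketch (assuming a customized configuration saved as `PaddleOCR.yaml` in the working directory and a local image `sample.png`):

```python
from paddleocr import PaddleOCR

# Pass the PaddleX pipeline configuration file; a configuration
# dictionary can be passed instead of a file path.
pipeline = PaddleOCR(paddlex_config="PaddleOCR.yaml")

# Run inference with the fine-tuned weights referenced in the config.
for res in pipeline.predict("sample.png"):
    res.print()
```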