OCR (Optical Character Recognition) is a technology that converts the text in images into editable text. It is widely used in document digitization, information extraction, and data processing, and can recognize printed text, handwritten text, and even certain fonts and symbols.
The General OCR Pipeline solves text recognition tasks by extracting text information from images and outputting it in text format. The pipeline integrates the industry-renowned PP-OCRv3, PP-OCRv4, and PP-OCRv5 end-to-end OCR systems, supporting recognition for over 80 languages. It also includes image orientation correction and distortion correction. Based on this pipeline, millisecond-level accurate text prediction can be achieved on CPUs, covering scenarios such as general, manufacturing, finance, and transportation. The pipeline also offers flexible service-oriented deployment, supporting calls in multiple programming languages across various hardware platforms, as well as secondary development: you can fine-tune models on your own datasets and seamlessly integrate the trained models.
<b>The General OCR Pipeline consists of the following 5 modules. Each module can be independently trained and inferred, and includes multiple models. For detailed information, click the corresponding module to view its documentation.</b>
<table>
<tr>
<th>Model Description</th>
</tr>
<tr>
<td>PP-OCRv5_server_rec is a next-generation text recognition model designed to efficiently and accurately support Simplified Chinese, Traditional Chinese, English, and Japanese, as well as complex scenarios such as handwriting, vertical text, pinyin, and rare characters. It balances recognition performance with inference speed and robustness, providing reliable support for document understanding across diverse scenarios.</td>
</tr>
<tr>
<td>PP-OCRv5_mobile_rec is a next-generation lightweight text recognition model optimized for efficiency and accuracy across Simplified Chinese, Traditional Chinese, English, and Japanese, including complex scenarios such as handwriting and vertical text. It delivers robust performance while maintaining fast inference speeds.</td>
</tr>
<tr>
<td>PP-OCRv4_server_rec_doc is trained on a hybrid dataset of Chinese document data and PP-OCR training data, enhancing recognition of Traditional Chinese, Japanese, and special characters. It supports 15,000+ characters and improves both document-specific and general text recognition.</td>
</tr>
<tr>
<td>An ultra-lightweight English recognition model based on PP-OCRv4, supporting English and numeric characters.</td>
</tr>
</table>
> ❗ The table above highlights the <b>6 core models</b> of the text recognition module, which includes <b>10 models</b> in total, several of them multilingual. For the complete list:
<details><summary> 👉 Full Model Details</summary>
* <b>PP-OCRv5 Multi-Scene Models</b>
<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Chinese Accuracy (%)</th>
<th>English Accuracy (%)</th>
<th>Traditional Chinese Accuracy (%)</th>
<th>Japanese Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<th>CPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
</tr>
<tr>
<td>PP-OCRv5_server_rec is a next-generation text recognition model supporting Simplified Chinese, Traditional Chinese, English, and Japanese, including complex scenarios such as handwriting and vertical text.</td>
</tr>
</table>
<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<th>CPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
</tr>
<tr>
<td>SVTRv2, developed by FVL's OpenOCR team, won first prize in the PaddleOCR Algorithm Challenge, improving end-to-end recognition accuracy by 6% over PP-OCRv4.</td>
</tr>
<tr>
<td>RepSVTR, a mobile-optimized version of SVTRv2, won first prize in the PaddleOCR Algorithm Challenge, improving accuracy by 2.5% over PP-OCRv4 with comparable speed.</td>
</tr>
</table>
* <b>English Recognition Models</b>
<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
<th>CPU Inference Time (ms)<br/>[Standard / High-Performance]</th>
</tr>
</table>

<b>Test dataset descriptions:</b>
<ul>
<li>Text Detection Model: PaddleOCR in-house Chinese dataset covering street views, web images, documents, and handwriting, with 500 images for detection.</li>
<li>Chinese Recognition Model: PaddleOCR in-house Chinese dataset covering street views, web images, documents, and handwriting, with 11,000 images for recognition.</li>
</ul>

<b>If you prioritize accuracy, choose a model with higher accuracy; if inference speed is critical, select a faster model; if model size matters, opt for a smaller model.</b>
## 2. Quick Start
Before using the general OCR pipeline locally, ensure you have installed the wheel package by following the [Installation Guide](../installation.en.md). Once installed, you can experience OCR via the command line or Python integration.
### 2.1 Command Line
Run a single command to quickly test the OCR pipeline:
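A minimal sketch of such a command, assuming the `paddleocr` CLI that ships with the wheel package and a placeholder image path (`general_ocr_002.png`):

```bash
paddleocr ocr -i ./general_ocr_002.png \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False \
    --use_textline_orientation False
```

The flags mirror the parameters described below; omit them to keep the pipeline defaults.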
The command line supports the following parameters:
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>use_doc_orientation_classify</code></td>
<td>Whether to enable document orientation classification. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification. If <code>None</code>, defaults to pipeline initialization value (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_side_len</code></td>
<td>Image side length limit for text detection.
<ul>
<li><b>int</b>: Any integer > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>960</code>).</li>
</ul>
</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_type</code></td>
<td>Side length limit type for text detection.
<ul>
<li><b>str</b>: Supports <code>min</code> (ensures the shortest side ≥ <code>text_det_limit_side_len</code>) or <code>max</code> (ensures the longest side ≤ <code>text_det_limit_side_len</code>);</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>max</code>).</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_thresh</code></td>
<td>Pixel threshold for text detection. Pixels with scores > this threshold are considered text.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.3</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_box_thresh</code></td>
<td>Box threshold for text detection. Detected regions with average scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.6</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Expansion ratio for text detection. Larger values expand text regions more.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>2.0</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_input_shape</code></td>
<td>Input shape for text detection.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_score_thresh</code></td>
<td>Score threshold for text recognition. Results with scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization value (<code>0.0</code>, no threshold).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_input_shape</code></td>
<td>Input shape for text recognition.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>lang</code></td>
<td>Specifies the OCR model language.
<ul>
<li><b>ch</b>: Chinese;</li>
<li><b>en</b>: English;</li>
<li><b>korean</b>: Korean;</li>
<li><b>japan</b>: Japanese;</li>
<li><b>chinese_cht</b>: Traditional Chinese;</li>
<li><b>te</b>: Telugu;</li>
<li><b>ka</b>: Kannada;</li>
<li><b>ta</b>: Tamil;</li>
<li><b>None</b>: If <code>None</code>, defaults to <code>ch</code>.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>ocr_version</code></td>
<td>OCR model version.
<ul>
<li><b>PP-OCRv5</b>: Uses PP-OCRv5 models;</li>
<li><b>PP-OCRv4</b>: Uses PP-OCRv4 models;</li>
<li><b>PP-OCRv3</b>: Uses PP-OCRv3 models;</li>
<li><b>None</b>: If <code>None</code>, defaults to PP-OCRv5 models.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
</tbody>
</table>

### 2.2 Python Script Integration
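Running the pipeline from Python takes only a few lines. A minimal sketch, assuming the wheel package is installed and using a placeholder image path:

```python
from paddleocr import PaddleOCR

# Instantiate the pipeline; the parameters are described in the table below
ocr = PaddleOCR(
    use_doc_orientation_classify=False,  # skip document orientation classification
    use_doc_unwarping=False,             # skip text image correction
    use_textline_orientation=False,      # skip text line orientation classification
)

# Run inference on a local image ("./general_ocr_002.png" is a placeholder)
result = ocr.predict("./general_ocr_002.png")
for res in result:
    res.print()                 # print results to the terminal
    res.save_to_img("output")   # save the visualization image
    res.save_to_json("output")  # save the structured result
```

The individual steps are detailed below.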
<details><summary>(1) Instantiate the OCR pipeline object via <code>PaddleOCR()</code>. Parameters:</summary>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>use_doc_orientation_classify</code></td>
<td>Whether to enable document orientation classification. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification. If <code>None</code>, defaults to pipeline initialization (<code>True</code>).</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_side_len</code></td>
<td>Image side length limit for text detection.
<ul>
<li><b>int</b>: Any integer > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>960</code>).</li>
</ul>
</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_type</code></td>
<td>Side length limit type for text detection.
<ul>
<li><b>str</b>: Supports <code>min</code> (ensures the shortest side ≥ <code>text_det_limit_side_len</code>) or <code>max</code> (ensures the longest side ≤ <code>text_det_limit_side_len</code>);</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>max</code>).</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_thresh</code></td>
<td>Pixel threshold for text detection. Pixels with scores > this threshold are considered text.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.3</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_box_thresh</code></td>
<td>Box threshold for text detection. Detected regions with average scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.6</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Expansion ratio for text detection. Larger values expand text regions more.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>2.0</code>).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_input_shape</code></td>
<td>Input shape for text detection.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_score_thresh</code></td>
<td>Score threshold for text recognition. Results with scores > this threshold are retained.
<ul>
<li><b>float</b>: Any float > <code>0</code>;</li>
<li><b>None</b>: If <code>None</code>, defaults to pipeline initialization (<code>0.0</code>, no threshold).</li>
</ul>
</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_input_shape</code></td>
<td>Input shape for text recognition.</td>
<td><code>tuple</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>lang</code></td>
<td>Specifies the OCR model language.
<ul>
<li><b>ch</b>: Chinese;</li>
<li><b>en</b>: English;</li>
<li><b>korean</b>: Korean;</li>
<li><b>japan</b>: Japanese;</li>
<li><b>chinese_cht</b>: Traditional Chinese;</li>
<li><b>te</b>: Telugu;</li>
<li><b>ka</b>: Kannada;</li>
<li><b>ta</b>: Tamil;</li>
<li><b>None</b>: If <code>None</code>, defaults to <code>ch</code>.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>ocr_version</code></td>
<td>OCR model version.
<ul>
<li><b>PP-OCRv5</b>: Uses PP-OCRv5 models;</li>
<li><b>PP-OCRv4</b>: Uses PP-OCRv4 models;</li>
<li><b>PP-OCRv3</b>: Uses PP-OCRv3 models;</li>
<li><b>None</b>: If <code>None</code>, defaults to PP-OCRv5 models.</li>
</ul>
</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>enable_mkldnn</code></td>
<td>Whether to enable MKL-DNN acceleration. If <code>None</code>, enabled by default.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>cpu_threads</code></td>
<td>Number of CPU threads for inference.</td>
<td><code>int</code></td>
<td><code>8</code></td>
</tr>
</tbody>
</table>
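As a brief illustration, a hedged sketch of selecting a language- and version-specific model through the constructor (values taken from the <code>lang</code> and <code>ocr_version</code> rows above):

```python
from paddleocr import PaddleOCR

# English recognition using PP-OCRv4 models
ocr_en = PaddleOCR(lang="en", ocr_version="PP-OCRv4")
```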
</details>
<details><summary>(2) Call the <code>predict()</code> method for inference. Alternatively, <code>predict_iter()</code> returns a generator for memory-efficient batch processing. Parameters:</summary>
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Default</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>use_doc_orientation_classify</code></td>
<td>Whether to enable document orientation classification during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_doc_unwarping</code></td>
<td>Whether to enable text image correction during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>use_textline_orientation</code></td>
<td>Whether to enable text line orientation classification during inference.</td>
<td><code>bool</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_side_len</code></td>
<td>Same as initialization.</td>
<td><code>int</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_limit_type</code></td>
<td>Same as initialization.</td>
<td><code>str</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_box_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_det_unclip_ratio</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>text_rec_score_thresh</code></td>
<td>Same as initialization.</td>
<td><code>float</code></td>
<td><code>None</code></td>
</tr>
</tbody>
</table>
</details>
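When processing many inputs, `predict_iter()` keeps memory usage flat by yielding results one at a time. A minimal sketch (the file list is hypothetical):

```python
# predict_iter() returns a generator instead of materializing all results
for res in ocr.predict_iter(["./page_001.png", "./page_002.png"]):
    res.print()
```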
<details><summary>(3) Processing prediction results: Each sample's prediction result is a corresponding Result object, supporting printing, saving as images, and saving as <code>json</code> files:</summary>
<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Explanation</th>
<th>Default</th>
</tr>
</thead>
<tr>
<tdrowspan="3"><code>print()</code></td>
<tdrowspan="3">Print results to terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format output with <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level for prettifying <code>JSON</code> output (only when <code>format_json=True</code>)</td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code> (only when <code>format_json=True</code>)</td>
<td><code>False</code></td>
</tr>
<tr>
<tdrowspan="3"><code>save_to_json()</code></td>
<tdrowspan="3">Save results as JSON file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Output file path (uses input filename when directory specified)</td>
<td><code>None</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level for prettifying <code>JSON</code> output (only when <code>format_json=True</code>)</td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters (only when <code>format_json=True</code>)</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>save_to_img()</code></td>
<td>Save results as image file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Output path (supports directory or file path)</td>
<td><code>None</code></td>
</tr>
</table>
- The <code>print()</code> method outputs results to the terminal, including (among other fields):
    - <code>unclip_ratio</code>: <code>(float)</code> Text region expansion ratio
    - <code>text_type</code>: <code>(str)</code> Fixed as "general"
    - <code>textline_orientation_angles</code>: <code>(List[int])</code> Text line orientation predictions (actual angles when enabled, <code>[-1, -1, -1]</code> when disabled)
    - <code>text_rec_score_thresh</code>: <code>(float)</code> Text recognition score threshold
    - <code>rec_texts</code>: <code>(List[str])</code> Recognized texts (filtered by <code>text_rec_score_thresh</code>)
- The <code>save_to_img()</code> method saves results to the given <code>save_path</code>:
    - Directory: saves as <code>save_path/{your_img_basename}_ocr_res_img.{your_img_extension}</code>
    - File: saves directly (not recommended for multiple images, to avoid overwriting)
* Additionally, results with visualizations and predictions can be obtained through the following attributes:
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<tdrowspan="1"><code>json</code></td>
<tdrowspan="1">Retrieves prediction results in <code>json</code> format</td>
</tr>
<tr>
<tdrowspan="2"><code>img</code></td>
<tdrowspan="2">Retrieves visualized images in <code>dict</code> format</td>
</tr>
</table>
- The `json` attribute returns prediction results as a dict, with content identical to what's saved by the `save_to_json()` method.
- The `img` attribute returns prediction results as a dictionary containing two `Image.Image` objects under keys `ocr_res_img` (OCR result visualization) and `preprocessed_img` (preprocessing visualization). If the image preprocessing submodule isn't used, only `ocr_res_img` will be present.
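A short sketch of accessing these attributes programmatically (placeholder input path; key names follow the description above):

```python
for res in ocr.predict("./general_ocr_002.png"):
    data = res.json    # dict, same content as save_to_json()
    images = res.img   # dict of PIL Image.Image objects
    images["ocr_res_img"].save("ocr_vis.png")  # OCR result visualization
```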
</details>
## 3. Development Integration/Deployment
If the general OCR pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
If you need to apply the general OCR pipeline directly in your Python project, refer to the sample code in [2.2 Python Script Integration](#22-python-script-integration).
Additionally, PaddleOCR provides two other deployment methods, detailed as follows:
🚀 **High-Performance Inference**: In real-world production environments, many applications have stringent performance requirements (especially for response speed) to ensure system efficiency and smooth user experience. To address this, PaddleOCR offers high-performance inference capabilities, which deeply optimize model inference and pre/post-processing to achieve significant end-to-end speed improvements. For detailed high-performance inference workflows, refer to the [High-Performance Inference Guide](../deployment/high_performance_inference.en.md).
☁️ **Service Deployment**: Service deployment is a common form of deployment in production environments. By encapsulating inference functionality as a service, clients can access these services via network requests to obtain inference results. For detailed pipeline service deployment workflows, refer to the [Service Deployment Guide](../deployment/serving.en.md).
Below is the API reference for basic service deployment, along with multi-language service call examples:
<details><summary>API Reference</summary>
<p>For the main operations provided by the service:</p>
<ul>
<li>The HTTP request method is POST.</li>
<li>Both the request body and response body are JSON data (JSON objects).</li>
<li>When the request is processed successfully, the response status code is <code>200</code>, and the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>logId</code></td>
<td><code>string</code></td>
<td>UUID of the request.</td>
</tr>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Fixed as <code>0</code>.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error message. Fixed as <code>"Success"</code>.</td>
</tr>
<tr>
<td><code>result</code></td>
<td><code>object</code></td>
<td>Operation result.</td>
</tr>
</tbody>
</table>
<ul>
<li>When the request fails, the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>logId</code></td>
<td><code>string</code></td>
<td>UUID of the request.</td>
</tr>
<tr>
<td><code>errorCode</code></td>
<td><code>integer</code></td>
<td>Error code. Same as the response status code.</td>
</tr>
<tr>
<td><code>errorMsg</code></td>
<td><code>string</code></td>
<td>Error message.</td>
</tr>
</tbody>
</table>
<p>The main operations provided by the service are as follows:</p>
<ul>
<li><b><code>infer</code></b></li>
</ul>
<p>Obtain OCR results for an image.</p>
<p><code>POST /ocr</code></p>
<ul>
<li>The request body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
<th>Required</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>file</code></td>
<td><code>string</code></td>
<td>A server-accessible URL to an image or PDF file, or the Base64-encoded content of such a file. By default, for PDF files with more than 10 pages, only the first 10 pages are processed.<br/> To remove the page limit, add the following configuration to the pipeline config file:
<pre><code>Serving:
extra:
max_num_input_imgs: null
</code></pre>
</td>
<td>Yes</td>
</tr>
<tr>
<td><code>fileType</code></td>
<td><code>integer</code> | <code>null</code></td>
<td>File type. <code>0</code> for PDF, <code>1</code> for image. If omitted, the type is inferred from the URL.</td>
<td>No</td>
</tr>
<tr>
<td><code>useDocOrientationClassify</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_doc_orientation_classify</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>useDocUnwarping</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_doc_unwarping</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>useTextlineOrientation</code></td>
<td><code>boolean</code> | <code>null</code></td>
<td>Refer to the <code>use_textline_orientation</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetLimitSideLen</code></td>
<td><code>integer</code> | <code>null</code></td>
<td>Refer to the <code>text_det_limit_side_len</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetLimitType</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Refer to the <code>text_det_limit_type</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetBoxThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_box_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textDetUnclipRatio</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_det_unclip_ratio</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
<tr>
<td><code>textRecScoreThresh</code></td>
<td><code>number</code> | <code>null</code></td>
<td>Refer to the <code>text_rec_score_thresh</code> parameter in the pipeline object's <code>predict</code> method.</td>
<td>No</td>
</tr>
</tbody>
</table>
<ul>
<li>When the request is successful, the <code>result</code> in the response body has the following attributes:</li>
</ul>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>ocrResults</code></td>
<td><code>array</code></td>
<td>OCR results. The array length is 1 (for image input) or the number of processed document pages (for PDF input). For PDF input, each element represents the result for a corresponding page.</td>
</tr>
<tr>
<td><code>dataInfo</code></td>
<td><code>object</code></td>
<td>Input data information.</td>
</tr>
</tbody>
</table>
<p>Each element in <code>ocrResults</code> is an <code>object</code> with the following attributes:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>prunedResult</code></td>
<td><code>object</code></td>
<td>A simplified version of the <code>res</code> field in the JSON output of the pipeline object's <code>predict</code> method, excluding <code>input_path</code> and <code>page_index</code>.</td>
</tr>
<tr>
<td><code>ocrImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>OCR result image with detected text regions highlighted. JPEG format, Base64-encoded.</td>
</tr>
<tr>
<td><code>docPreprocessingImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Visualization of preprocessing results. JPEG format, Base64-encoded.</td>
</tr>
<tr>
<td><code>inputImage</code></td>
<td><code>string</code> | <code>null</code></td>
<td>Input image. JPEG format, Base64-encoded.</td>
</tr>
</tbody>
</table>
</details>
<details><summary>Multi-Language Service Call Examples</summary>
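A minimal Python sketch of a service call, based on the API reference above (the service URL and file paths are placeholders):

```python
import base64
import requests

API_URL = "http://localhost:8080/ocr"  # placeholder service endpoint

# Encode a local image as Base64, as accepted by the `file` attribute
with open("./demo.jpg", "rb") as f:
    file_data = base64.b64encode(f.read()).decode("ascii")

payload = {"file": file_data, "fileType": 1}  # fileType 1 = image
response = requests.post(API_URL, json=payload)
response.raise_for_status()
result = response.json()["result"]

for i, res in enumerate(result["ocrResults"]):
    print(res["prunedResult"])  # simplified prediction result
    if res.get("ocrImage"):     # Base64-encoded visualization, if returned
        with open(f"ocr_{i}.jpg", "wb") as out:
            out.write(base64.b64decode(res["ocrImage"]))
```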
</details>

## 4. Secondary Development
The general OCR pipeline consists of multiple modules. If the pipeline's performance does not meet expectations, the issue may stem from any of them. You can analyze poorly recognized images to identify the problematic module and refer to the corresponding fine-tuning tutorials in the table below for adjustments.