---
comments: true
---

# Text Detection Module Usage Guide

## 1. Overview

The text detection module is a critical component of OCR (Optical Character Recognition) systems, responsible for locating and marking text-containing regions in images. The performance of this module directly impacts the accuracy and efficiency of the entire OCR system. The text detection module typically outputs bounding boxes for text regions, which are then passed to the text recognition module for further processing.

## 2. Supported Models List

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms) [Standard Mode / High-Performance Mode] | CPU Inference Time (ms) [Standard Mode / High-Performance Mode] | Model Size (MB) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv5_server_det | Inference Model / Training Model | 83.8 | 89.55 / 70.19 | 383.15 / 383.15 | 84.3 | PP-OCRv5 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers |
| PP-OCRv5_mobile_det | Inference Model / Training Model | 79.0 | 10.67 / 6.36 | 57.77 / 28.15 | 4.7 | PP-OCRv5 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices |
| PP-OCRv4_server_det | Inference Model / Training Model | 69.2 | 127.82 / 98.87 | 585.95 / 489.77 | 109 | PP-OCRv4 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers |
| PP-OCRv4_mobile_det | Inference Model / Training Model | 63.8 | 9.87 / 4.17 | 56.60 / 20.79 | 4.7 | PP-OCRv4 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices |

| Mode | GPU Configuration | CPU Configuration | Acceleration Techniques |
|---|---|---|---|
| Standard Mode | FP32 precision / No TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of precision types and acceleration strategies | FP32 precision / 8 threads | Optimal backend selection (Paddle/OpenVINO/TRT, etc.) |
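
The two modes above correspond to runtime settings of the module rather than to separate models. As a minimal sketch, assuming the module is exposed as the `TextDetection` class in the `paddleocr` Python package (as in other PaddleOCR module guides), high-performance mode is typically switched on through the `enable_hpi` flag described in the parameter table below:

```python
from paddleocr import TextDetection  # assumed import path

# Standard mode: plain Paddle Inference, FP32, no extra acceleration.
standard_model = TextDetection(model_name="PP-OCRv5_mobile_det")

# High-performance mode: let the framework choose the best backend
# (Paddle / OpenVINO / TensorRT, etc.) for the current hardware.
fast_model = TextDetection(model_name="PP-OCRv5_mobile_det", enable_hpi=True)
```

The full set of parameters accepted at instantiation is listed below.
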
| Parameter | Description | Type | Default |
|---|---|---|---|
| `model_name` | Model name. If set to `None`, `PP-OCRv5_server_det` will be used. | `str\|None` | `None` |
| `model_dir` | Model storage path. | `str\|None` | `None` |
| `device` | Device for inference. For example: `"cpu"`, `"gpu"`, `"npu"`, `"gpu:0"`, `"gpu:0,1"`. If multiple devices are specified, parallel inference will be performed. By default, GPU 0 is used if available; otherwise, the CPU is used. | `str\|None` | `None` |
| `enable_hpi` | Whether to enable high-performance inference. | `bool` | `False` |
| `use_tensorrt` | Whether to use the Paddle Inference TensorRT subgraph engine. If the model does not support acceleration through TensorRT, setting this flag will not enable acceleration. For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6. For Paddle with CUDA version 12.6, the compatible TensorRT version is 10.x (x>=5), and it is recommended to install TensorRT 10.5.0.18. | `bool` | `False` |
| `precision` | Computation precision when using the Paddle Inference TensorRT subgraph engine. Options: `"fp32"`, `"fp16"`. | `str` | `"fp32"` |
| `enable_mkldnn` | Whether to enable MKL-DNN acceleration for inference. If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set. | `bool` | `True` |
| `mkldnn_cache_capacity` | MKL-DNN cache capacity. | `int` | `10` |
| `cpu_threads` | Number of threads to use for inference on CPUs. | `int` | `10` |
| `limit_side_len` | Limit on the side length of the input image for detection. `int` specifies the value. If set to `None`, the model's default configuration will be used. | `int\|None` | `None` |
| `limit_type` | Type of image side-length limit. `"min"` ensures the shortest side of the image is no less than `limit_side_len`; `"max"` ensures the longest side is no greater than `limit_side_len`. If set to `None`, the model's default configuration will be used. | `str\|None` | `None` |
| `max_side_limit` | Upper limit on the longest side of the input image to the detection model. If set to `None`, the model's default configuration will be used. | `int\|None` | `None` |
| `thresh` | Pixel score threshold. Pixels in the output probability map with scores greater than this threshold are considered text pixels. If set to `None`, the model's default configuration will be used. | `float\|None` | `None` |
| `box_thresh` | If the average score of all pixels inside the bounding box is greater than this threshold, the result is considered a text region. If set to `None`, the model's default configuration will be used. | `float\|None` | `None` |
| `unclip_ratio` | Expansion ratio for the Vatti clipping algorithm, used to expand the text region. If set to `None`, the model's default configuration will be used. | `float\|None` | `None` |
| `input_shape` | Input image size for the model in the format `(C, H, W)`. | `tuple\|None` | `None` |
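
As a sketch of how these instantiation parameters fit together (again assuming the `TextDetection` class from the `paddleocr` package), the detection hyperparameters can be fixed once at construction time; all numeric values below are illustrative examples, not recommended settings:

```python
from paddleocr import TextDetection  # assumed import path

# Instantiate the text detection model with explicit hyperparameters.
# Any parameter left as None falls back to the model's default configuration.
model = TextDetection(
    model_name="PP-OCRv5_server_det",  # default model when model_name is None
    device="gpu:0",                    # use "cpu" if no GPU is available
    limit_side_len=960,                # example value; None keeps the model default
    limit_type="max",                  # cap the longest side at limit_side_len
    thresh=0.3,                        # example pixel score threshold
    box_thresh=0.6,                    # example box score threshold
    unclip_ratio=1.5,                  # example expansion ratio for Vatti clipping
)
```

The parameters of the `predict()` method are listed below.
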
| Parameter | Description | Type | Default |
|---|---|---|---|
| `input` | Input data to be predicted. Required. Supports multiple input types: an in-memory Python variable (e.g., image data as `numpy.ndarray`), a `str` file path or URL, or a `list` of such inputs. | `Python Var\|str\|list` | |
| `batch_size` | Batch size, a positive integer. | `int` | `1` |
| `limit_side_len` | Same meaning as the instantiation parameter. If set to `None`, the instantiation value is used; otherwise, this parameter takes precedence. | `int\|None` | `None` |
| `limit_type` | Same meaning as the instantiation parameter. If set to `None`, the instantiation value is used; otherwise, this parameter takes precedence. | `str\|None` | `None` |
| `thresh` | Same meaning as the instantiation parameter. If set to `None`, the instantiation value is used; otherwise, this parameter takes precedence. | `float\|None` | `None` |
| `box_thresh` | Same meaning as the instantiation parameter. If set to `None`, the instantiation value is used; otherwise, this parameter takes precedence. | `float\|None` | `None` |
| `unclip_ratio` | Same meaning as the instantiation parameter. If set to `None`, the instantiation value is used; otherwise, this parameter takes precedence. | `float\|None` | `None` |
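
Continuing the sketch above, a hedged example of calling `predict()` with per-call overrides (the file path and override values are placeholders):

```python
# Per-call parameters override the values given at instantiation.
output = model.predict(
    "general_ocr_001.png",  # placeholder path; a URL, image array, or list also works
    batch_size=1,
    box_thresh=0.5,         # example override of the instantiation value
)
```

Each element of the returned result supports the following methods for printing and saving.
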
| Method | Method Description | Parameter | Type | Parameter Description | Default |
|---|---|---|---|---|---|
| `print()` | Print results to terminal | `format_json` | `bool` | Format output as JSON | `True` |
| | | `indent` | `int` | JSON indentation level | `4` |
| | | `ensure_ascii` | `bool` | Escape non-ASCII characters | `False` |
| `save_to_json()` | Save results as a JSON file | `save_path` | `str` | Output file path | Required |
| | | `indent` | `int` | JSON indentation level | `4` |
| | | `ensure_ascii` | `bool` | Escape non-ASCII characters | `False` |
| `save_to_img()` | Save results as an image | `save_path` | `str` | Output file path | Required |
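
A typical loop over the prediction results, using the methods in the table above (the output directory is a placeholder):

```python
for res in output:
    res.print(format_json=True, indent=4, ensure_ascii=False)  # print to terminal
    res.save_to_img(save_path="./output/")                     # save visualization image
    res.save_to_json(save_path="./output/res.json")            # save results as JSON
```

In addition, each result object provides the following attributes.
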
| Attribute | Description |
|---|---|
| `json` | Get prediction results in JSON format |
| `img` | Get the visualization image as a dictionary |
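
The same results can also be read back programmatically through these two attributes, for example:

```python
for res in output:
    det_dict = res.json  # prediction results as a JSON-serializable dict
    vis_imgs = res.img   # visualization image(s) returned in a dict
```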