- [2.3 Run Prediction Demo](#23-run-prediction-demo)
- [2.4 C++ API Integration](#24-c-api-integration)
- [3. Extended Features](#3-extended-features)
- [3.1 Multilingual Text Recognition](#31-multilingual-text-recognition)
- [3.2 Visualize Text Recognition Results](#32-visualize-text-recognition-results)
## 1. Environment Preparation
- **The source code used in this compilation and runtime section is located in the [PaddleOCR/deploy/cpp_infer](https://github.com/PaddlePaddle/PaddleOCR/tree/main/deploy/cpp_infer) directory.**
- Windows Environment:
- Visual Studio 2022
- CMake 3.29
### 1.1 Compile OpenCV Library
You can either download a pre-compiled package or compile OpenCV from source.
#### 1.1.1 Download the Pre-compiled Package
Download the `.exe` pre-compiled package for Windows from the [OpenCV Official Website](https://opencv.org/releases/). Running it automatically extracts the pre-compiled OpenCV library and related folders.
Taking OpenCV 4.7.0 as an example, download [opencv-4.7.0-windows.exe](https://github.com/opencv/opencv/releases/download/4.7.0/opencv-4.7.0-windows.exe). After running it, an `opencv/` subfolder is generated in the current folder, where `opencv/build` contains the pre-compiled library. This path will later serve as the OpenCV installation path when compiling the general OCR pipeline prediction demo.
#### 1.1.2 Compile from Source Code
First, download the OpenCV source code. Taking OpenCV 4.7.0 as an example, download the [opencv 4.7.0](https://paddle-model-ecology.bj.bcebos.com/paddlex/cpp/libs/opencv-4.7.0.tgz) source code. After extracting it, an `opencv-4.7.0/` folder will be generated in the current folder.
- Step 1: Build Visual Studio Project
Specify the `opencv-4.7.0` source code path in cmake-gui, and set the compilation output directory to `opencv-4.7.0/build`. The default installation path is `opencv-4.7.0/build/install`. This installation path will be used for subsequent demo compilation.
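For reference, the cmake-gui steps above roughly correspond to the following command-line sketch (the generator name and paths are assumptions matching the Visual Studio 2022 environment from Section 1; adjust them to your setup):

```
:: Hypothetical command-line equivalent of the cmake-gui steps above
cmake -S opencv-4.7.0 -B opencv-4.7.0/build -G "Visual Studio 17 2022" -A x64 ^
      -D CMAKE_INSTALL_PREFIX=opencv-4.7.0/build/install
cmake --build opencv-4.7.0/build --config Release --target INSTALL
```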
### 1.2 Obtain the Paddle Inference Prediction Library
#### 1.2.1 Download the Pre-compiled Package
The [Paddle Inference Official Website](https://www.paddlepaddle.org.cn/inference/v3.0/guides/install/download_lib.html#windows) provides Windows prediction libraries. Browse the list on the official website and select the pre-compiled package appropriate for your environment.
After downloading and extracting it, a `paddle_inference/` subfolder will be generated in the current folder. The directory structure is as follows:
```
paddle_inference
├── paddle # Paddle core library and header files
├── third_party # Third-party dependency libraries and header files
└── version.txt # Version and compilation information
```
#### 1.2.2 Compile Prediction Library from Source Code
You can choose to compile the prediction library from source code. Compiling from source allows flexible configuration of various features and dependencies to adapt to different hardware and software environments. For detailed steps, please refer to [Compiling from Source under Windows](https://www.paddlepaddle.org.cn/inference/v3.0/guides/install/compile/source_compile_under_Windows.html).
## 2. Getting Started
### 2.1 Compile Prediction Demo
Before compiling the prediction demo, please ensure that you have compiled the OpenCV library and Paddle Inference prediction library according to Sections 1.1 and 1.2.
The compilation steps are as follows:
- Step 1: Build Visual Studio Project
Specify the `deploy\cpp_infer` source code path in cmake-gui, and set the compilation output directory to `deploy\cpp_infer\build`. The following steps use `D:\PaddleOCR\deploy\cpp_infer` as the example source code path. An error on the first click of Configure is expected; in the compilation options that then appear, add the OpenCV installation path and the Paddle Inference prediction library path.
<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/paddleocr/deployment/cpp/windows_step1.png"/>

- Step 2: Select the Target Platform
Select the target platform as x64 and click Finish.
- Step 3: Compile
1. Change the build configuration from `Debug` to `Release`.
2. Download [dirent.h](https://paddleocr.bj.bcebos.com/deploy/cpp_infer/cpp_files/dirent.h) and copy it to the Visual Studio include folder (e.g., `C:\Program Files (x86)\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include`).
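The `dirent.h` header supplies the POSIX directory-listing API (`opendir`/`readdir`/`closedir`) that the demo code relies on but that MSVC does not ship. A minimal sketch of that API in use (the `list_dir` helper is hypothetical, for illustration only):

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// Collect the entry names in a directory via the POSIX dirent API.
// On Windows, the dirent.h copied into the Visual Studio include
// folder provides the same interface.
std::vector<std::string> list_dir(const std::string& path) {
    std::vector<std::string> names;
    DIR* dir = opendir(path.c_str());
    if (dir == nullptr) return names;  // empty on failure
    for (struct dirent* ent = readdir(dir); ent != nullptr; ent = readdir(dir)) {
        names.emplace_back(ent->d_name);  // includes "." and ".."
    }
    closedir(dir);
    return names;
}
```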
### 2.2 Prepare Models
Refer to the [General OCR Pipeline C++ Deployment - Linux → 2.2 Prepare Models](./OCR.en.md#22-prepare-models) section.
### 2.3 Run the Prediction Demo
Refer to the [General OCR Pipeline C++ Deployment - Linux → 2.3 Run the Prediction Demo](./OCR.en.md#23-run-the-prediction-demo) section.
### 2.4 C++ API Integration
Refer to the [General OCR Pipeline C++ Deployment - Linux → 2.4 C++ API Integration](./OCR.en.md#24-c-api-integration) section.
## 3. Extended Features
### 3.1 Multilingual Text Recognition
Refer to the [General OCR Pipeline C++ Deployment - Linux → 3.1 Multilingual Text Recognition](./OCR.en.md#31-multilingual-text-recognition) section.
### 3.2 Visualize Text Recognition Results
To visualize text recognition results, you need to compile OpenCV with the FreeType module from the `opencv_contrib` repository (version 4.x). Ensure the OpenCV and `opencv_contrib` versions match. Below is an example using `opencv-4.7.0` and `opencv_contrib-4.7.0`:
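A hypothetical cmake configuration sketch for this build (`OPENCV_EXTRA_MODULES_PATH` and `BUILD_opencv_freetype` are standard OpenCV build options; note the FreeType module additionally requires the freetype and harfbuzz dependencies to be resolvable on your system, and the paths below are placeholders):

```
:: Hypothetical configuration sketch; adjust source and build paths to your layout
cmake -S opencv-4.7.0 -B opencv-4.7.0/build -G "Visual Studio 17 2022" -A x64 ^
      -D OPENCV_EXTRA_MODULES_PATH=opencv_contrib-4.7.0/modules ^
      -D BUILD_opencv_freetype=ON ^
      -D CMAKE_INSTALL_PREFIX=opencv-4.7.0/build/install
```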
After completing the above steps, click Configure again in the CMake interface. After ensuring there are no errors, click Generate, and then click Open Project to open Visual Studio. Switch from Debug to Release, right-click on ALL_BUILD and select Build. After the compilation is completed, right-click on INSTALL and select Build.
Note: If you have compiled OpenCV with FreeType included, when compiling the demo for the General OCR Pipeline in Section 2.1 Step 3, you need to check the `USE_FREETYPE` option to enable text rendering functionality. Additionally, when running the demo, you need to provide the path to the corresponding TTF font file using the `--vis_font_dir your_ttf_path` parameter.
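For example, assuming the executable name and `--input` flag used by the Linux guide in Section 2.3 (both are assumptions here, not confirmed by this document), appending the font parameter might look like:

```
:: Hypothetical invocation; executable name and other flags follow Section 2.3
.\build\Release\ppocr.exe ocr --input your_image.jpg --vis_font_dir your_ttf_path
```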
After compiling and running the prediction demo, you can obtain the following visualized text recognition results: