* fix tips

* update
zhang-prog 2025-11-28 18:27:49 +08:00 committed by GitHub
parent 34effa44c2
commit a9ebda5b7f
2 changed files with 15 additions and 11 deletions


@@ -89,16 +89,20 @@ Currently, PaddleOCR-VL offers four inference methods, with varying levels of su
 </tbody>
 </table>
-> [!TIP]
-> 1. When using NVIDIA GPU for inference, ensure that the Compute Capability (CC) and CUDA version meet the requirements:
-> > - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
-> > - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
-> > - SGLang: 8.0 ≤ CC < 12.0, CUDA 12.6
-> > - FastDeploy: 8.0 ≤ CC < 12.0, CUDA 12.6
-> > - Common GPUs with CC ≥ 8 include the RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
-> 2. vLLM compatibility note: Although vLLM can be launched on NVIDIA GPUs with CC 7.x such as T4/V100, timeout or OOM issues may occur, and its use is not recommended.
-> 3. Currently, PaddleOCR-VL does not support ARM architecture CPUs. More hardware support will be expanded based on actual needs in the future, so stay tuned!
-> 4. vLLM, SGLang, and FastDeploy cannot run natively on Windows or macOS. Please use the Docker images we provide.
+TIP:
+1. When using NVIDIA GPU for inference, ensure that the Compute Capability (CC) and CUDA version meet the requirements:
+    - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
+    - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
+    - SGLang: 8.0 ≤ CC < 12.0, CUDA 12.6
+    - FastDeploy: 8.0 ≤ CC < 12.0, CUDA 12.6
+    - Common GPUs with CC ≥ 8 include the RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
+2. vLLM compatibility note: Although vLLM can be launched on NVIDIA GPUs with CC 7.x such as T4/V100, timeout or OOM issues may occur, and its use is not recommended.
+3. Currently, PaddleOCR-VL does not support ARM architecture CPUs. More hardware support will be expanded based on actual needs in the future, so stay tuned!
+4. vLLM, SGLang, and FastDeploy cannot run natively on Windows or macOS. Please use the Docker images we provide.
 Since different hardware requires different dependencies, if your hardware meets the requirements in the table above, please refer to the following table for the corresponding tutorial to configure your environment:
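The CC/CUDA requirement matrix in the tip above can be sketched as a small lookup helper. This is illustrative only and not part of PaddleOCR or this commit; the function name `supported_backends` is invented here, and the table's "CUDA 12.6" entry for SGLang/FastDeploy is read as an exact pin rather than a minimum:

```python
# Illustrative sketch (not part of PaddleOCR): map a GPU's Compute
# Capability (CC) and CUDA version to the backends allowed by the table.
def supported_backends(cc, cuda):
    """cc: float like 8.6; cuda: (major, minor) tuple like (12, 6)."""
    backends = []
    if cc >= 7.0 and cuda >= (11, 8):
        backends.append("paddlepaddle")   # CC >= 7.0, CUDA >= 11.8
    if cc >= 8.0 and cuda >= (12, 6):
        backends.append("vllm")           # CC >= 8.0, CUDA >= 12.6
    # The table states CUDA 12.6 (not ">=") for SGLang and FastDeploy,
    # and caps CC below 12.0.
    if 8.0 <= cc < 12.0 and cuda == (12, 6):
        backends.extend(["sglang", "fastdeploy"])
    return backends
```

For example, an RTX 4090 (CC 8.9) on CUDA 12.6 qualifies for all four backends, while a T4 (CC 7.5) only qualifies for PaddlePaddle, matching the vLLM compatibility note above.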


@@ -228,7 +228,7 @@ plugins:
 markdown_extensions:
 - abbr
 - attr_list
-- github-callouts
+- callouts
 - pymdownx.snippets
 - pymdownx.critic
 - pymdownx.caret
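The extension swap above changes the admonition syntax the docs must use: `github-callouts` renders GitHub-style `> [!TIP]` blockquotes, while `callouts` expects a block starting with `TIP:`, which is why the first hunk rewrites the tip block. A one-off migration of remaining pages could look like the following sketch (a hypothetical helper, not part of this commit; it assumes the simple layouts seen in the hunk above and ignores edge cases like code blocks inside quotes):

```python
import re

def github_to_callouts(text):
    """Convert GitHub-style '> [!TIP]' blockquote callouts into the
    plain 'TIP:' block form used by the 'callouts' extension."""
    out = []
    for line in text.splitlines():
        m = re.match(r"^>\s*\[!(\w+)\]\s*$", line)
        if m:
            out.append(f"{m.group(1)}:")     # '> [!TIP]' -> 'TIP:'
        elif line.startswith("> > "):
            out.append(line[4:])             # unwrap nested quote level
        elif line.startswith("> "):
            out.append(line[2:])             # unwrap single quote level
        elif line == ">":
            out.append("")                   # blank quoted line
        else:
            out.append(line)                 # leave everything else alone
    return "\n".join(out)
```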