diff --git a/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md b/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
index f9db163528..7eef3d0561 100644
--- a/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
+++ b/docs/version3.x/pipeline_usage/PaddleOCR-VL.en.md
@@ -89,16 +89,20 @@ Currently, PaddleOCR-VL offers four inference methods, with varying levels of su
 
-> [!TIP]
-> 1. When using NVIDIA GPU for inference, ensure that the Compute Capability (CC) and CUDA version meet the requirements:
-> > - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
-> > - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
-> > - SGLang: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
-> > - FastDeploy: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
-> > - Common GPUs with CC ≥ 8 include RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
-> 2. vLLM compatibility note: Although vLLM can be launched on NVIDIA GPUs with CC 7.x such as T4/V100, timeout or OOM issues may occur, and its use is not recommended.
-> 3. Currently, PaddleOCR-VL does not support ARM architecture CPUs. More hardware support will be expanded based on actual needs in the future, so stay tuned!
-> 4. vLLM, SGLang, and FastDeploy cannot run natively on Windows or macOS. Please use the Docker images we provide.
+TIP:
+    1. When using NVIDIA GPU for inference, ensure that the Compute Capability (CC) and CUDA version meet the requirements:
+
+        - PaddlePaddle: CC ≥ 7.0, CUDA ≥ 11.8
+        - vLLM: CC ≥ 8.0, CUDA ≥ 12.6
+        - SGLang: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
+        - FastDeploy: 8.0 ≤ CC < 12.0, CUDA ≥ 12.6
+        - Common GPUs with CC ≥ 8 include RTX 30/40/50 series and A10/A100, etc. For more models, refer to [CUDA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
+
+    2. vLLM compatibility note: Although vLLM can be launched on NVIDIA GPUs with CC 7.x such as T4/V100, timeout or OOM issues may occur, and its use is not recommended.
+
+    3. Currently, PaddleOCR-VL does not support ARM architecture CPUs. More hardware support will be expanded based on actual needs in the future, so stay tuned!
+
+    4. vLLM, SGLang, and FastDeploy cannot run natively on Windows or macOS. Please use the Docker images we provide.
 
 Since different hardware requires different dependencies, if your hardware meets the requirements in the table above, please refer to the following table for the corresponding tutorial to configure your environment:
 
diff --git a/mkdocs.yml b/mkdocs.yml
index 397b2055a8..765dea42ca 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -228,7 +228,7 @@ plugins:
 markdown_extensions:
   - abbr
   - attr_list
-  - github-callouts
+  - callouts
   - pymdownx.snippets
   - pymdownx.critic
   - pymdownx.caret