Is HPI (High Performance Inference) in Paddle 3.1.0 worth sticking with CUDA 11.6, and why is TensorRT missing from the official Docker image? #15969
Hello! In Paddle (or PaddleX) version 3.1.0, how much practical benefit does the HPI (High Performance Inference) mode offer when working with PaddleOCR? Is the gain significant enough to justify staying on CUDA 11.6 and giving up support for newer CUDA versions? Also, I noticed that the official Docker image for the latest 3.1.0 release doesn't include TensorRT. Is there a specific reason for that?
Answered by
Bobholamovic
Jul 17, 2025
Hi, please see my answer in this discussion: #15971 .
Answer selected by
maakdan