Does PaddleOCR support TensorRT to speed up model inference? #15009
Replies: 1 comment
-
Yes, PaddleOCR supports inference acceleration with TensorRT, although the end-to-end integration workflow is not fully documented officially. However, users have successfully converted ONNX models (exported from PaddleOCR) to TensorRT engines to speed up inference. Here's how you can proceed:
Option 1: Convert the exported ONNX model into a TensorRT engine with `trtexec`:

```bash
trtexec --onnx=rec.onnx --saveEngine=rec.engine --explicitBatch
```
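If you prefer to do this conversion from Python rather than the `trtexec` CLI, the sketch below builds and serializes an engine with the TensorRT 8.x Python API (8.4+ for `set_memory_pool_limit`). The file names, the 1 GiB workspace limit, and the static-shape assumption are placeholders rather than part of the original answer; models exported with dynamic input shapes additionally need an optimization profile.

```python
# Sketch: ONNX -> serialized TensorRT engine (TensorRT 8.x Python API).
# "rec.onnx"/"rec.engine" and the workspace size are assumptions; adjust them.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, engine_path: str) -> None:
    builder = trt.Builder(TRT_LOGGER)
    # ONNX models must be parsed into an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    # If the model has dynamic input shapes (common for OCR), add an
    # optimization profile to the config here before building.

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("Engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

if __name__ == "__main__":
    build_engine("rec.onnx", "rec.engine")
```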
Option 2: Use the TensorRT backend built into Paddle Inference. For that, you can enable TensorRT in the predictor config used by `deploy/python/predict_system.py`:

```python
config.enable_tensorrt_engine(...)
```

Make sure you have built PaddlePaddle with TensorRT support.
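For reference, a fuller version of that config setup could look like the sketch below, using the public `paddle.inference` API; the model file names, memory sizes, and precision are placeholders rather than values taken from PaddleOCR's scripts (recent PaddleOCR inference tools also expose this through a `--use_tensorrt` command-line flag).

```python
# Sketch: enabling the TensorRT subgraph engine through Paddle Inference.
# Model paths and the numeric settings below are placeholders.
from paddle.inference import Config, PrecisionType, create_predictor

config = Config("inference.pdmodel", "inference.pdiparams")
config.enable_use_gpu(500, 0)  # 500 MB initial GPU memory pool, device id 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,            # 1 GiB workspace for TensorRT
    max_batch_size=1,
    min_subgraph_size=3,               # only offload subgraphs with >= 3 ops
    precision_mode=PrecisionType.Float32,
    use_static=False,                  # True caches the built engine on disk
    use_calib_mode=False,
)
predictor = create_predictor(config)
```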
In summary: you can either build a standalone TensorRT engine from the exported ONNX model (Option 1), or keep the Paddle inference pipeline and enable the TensorRT subgraph engine in the predictor config (Option 2).
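Since you also asked how to use the resulting `.engine` file, here is a rough sketch of running a serialized engine with the TensorRT 8.x Python runtime and pycuda. The engine path, the 1x3x48x320 dummy input, and the single-input assumption are illustrative only; adapt them to your exported model.

```python
# Rough sketch: run a serialized TensorRT engine (TensorRT 8.x + pycuda).
# "rec.engine" and the input shape are assumptions for a recognition model.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("rec.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

inp = np.ascontiguousarray(np.random.rand(1, 3, 48, 320).astype(np.float32))

# Set input shapes first so dynamic output shapes can be resolved.
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        context.set_binding_shape(i, inp.shape)

bindings, device_buffers, host_outputs = [], {}, {}
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        device_buffers[i] = cuda.mem_alloc(inp.nbytes)
        cuda.memcpy_htod(device_buffers[i], inp)
    else:
        shape = tuple(context.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host_outputs[i] = np.empty(shape, dtype=dtype)
        device_buffers[i] = cuda.mem_alloc(host_outputs[i].nbytes)
    bindings.append(int(device_buffers[i]))

context.execute_v2(bindings)
for i, host in host_outputs.items():
    cuda.memcpy_dtoh(host, device_buffers[i])
    print("output", i, host.shape)
```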
Let me know if you need a fuller example of an inference script that uses the TensorRT engine.

Response generated by 🤖 feifei-bot | chatgpt-4o-latest
-
Hi
I have built the code successfully and I'm able to convert the model to ONNX based on the documentation, but I couldn't find anything on how to convert the ONNX model to a TensorRT engine (.engine) so that TensorRT can speed up inference. Is there any documentation I can follow, and which config should I enable to use TensorRT?
Thanks