How to use it in python after TRT Build? #13905
Unanswered · davodogster asked this question in Other Q&A
Replies: 0 comments
Hi, I just did the TensorRT build using:

```shell
./build.sh --cudnn_home /usr/local/cuda/ --cuda_home /usr/local/cuda-11.6 --use_tensorrt --tensorrt_home /home/scion.local/davidsos/Documents/onnxruntime/TensorRT-8.4.1.5
```
The build passed most of the tests but failed the QOrdered tests.
How does Python pick up this build when I run `import onnxruntime`?
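For context, ONNX Runtime's `build.sh` only produces a pip-installable package when `--build_wheel` is passed (the wheel then lands under the build output's `dist` directory). A minimal sketch of how one might verify which `onnxruntime` Python is importing and whether the TensorRT execution provider registered (the provider names are the standard ORT ones; whether they appear depends on your local build):

```python
# Check which onnxruntime install Python sees and which execution
# providers it exposes (a TRT build should list TensorrtExecutionProvider).
import importlib.util

spec = importlib.util.find_spec("onnxruntime")
if spec is not None:
    import onnxruntime as ort
    providers = ort.get_available_providers()
    print(ort.__file__)  # confirms which install is being imported
    print(providers)     # e.g. ["TensorrtExecutionProvider", ...] in a TRT build
else:
    providers = []
    print("onnxruntime not importable; pip install the built wheel first")
```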
I have a PyTorch Lightning model that I want to convert to ONNX and then to TensorRT, for large-scale (1M+ images) accelerated inference with ONNX Runtime's TensorRT and CUDA execution providers. Is there a tutorial for converting a PyTorch Lightning model to ONNX/TensorRT and then running TRT inference with it?
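The conversion path described above can be sketched as follows. This is a minimal, hedged example, not a tutorial: `TinyNet` and the input shape are stand-ins for the real Lightning model (a `LightningModule` is an `nn.Module`, so the same `torch.onnx.export` call applies), and the provider list is filtered against what the local build actually offers so the code degrades gracefully on a CPU-only install:

```python
# Sketch: export an nn.Module to ONNX, then run it with ONNX Runtime,
# preferring the TensorRT execution provider when available.
import os
import tempfile

try:
    import torch
    import torch.nn as nn
    import onnxruntime as ort
    HAVE_DEPS = True
except ImportError:
    HAVE_DEPS = False  # torch / onnxruntime not installed here

if HAVE_DEPS:
    class TinyNet(nn.Module):  # stand-in for the Lightning model
        def forward(self, x):
            return x * 2

    model = TinyNet().eval()
    dummy = torch.randn(1, 3, 8, 8)  # match your model's real input shape
    path = os.path.join(tempfile.mkdtemp(), "model.onnx")
    torch.onnx.export(
        model, dummy, path,
        input_names=["input"], output_names=["output"],
        dynamic_axes={"input": {0: "batch"}},  # variable batch size
    )

    # Prefer TensorRT, fall back to CUDA, then CPU; request only
    # providers this particular build actually offers.
    preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
                 "CPUExecutionProvider"]
    providers = [p for p in preferred if p in ort.get_available_providers()]
    sess = ort.InferenceSession(path, providers=providers)
    out = sess.run(None, {"input": dummy.numpy()})[0]
    print(out.shape)  # same shape as the input
```

For the 1M-image workload, the same session would be reused across batches; session creation (and TensorRT engine building) is the expensive step, while `sess.run` is the per-batch call.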
Thanks, Sam @abudup