OpenCV OpenVINO backend runs model far faster than onnxruntime with OpenVINOExecutionProvider #9775
jbrownkramer asked this question in Other Q&A (unanswered)
I have a YOLOX model (https://github.com/Megvii-BaseDetection/YOLOX) that I converted to ONNX with their ONNX export tool, and then converted to an OpenVINO intermediate representation with the OpenVINO model optimizer. Using the OpenCV front end to load the intermediate representation, I get 7.5 fps on an Intel integrated GPU with 512x512 images. Using onnxruntime with the OpenVINOExecutionProvider directly on the ONNX file (which was a bear to build on Windows), I get 5.5 fps.
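For reference, this is roughly what the OpenCV side looks like (file names and preprocessing are simplified placeholders, not my exact code):

```python
import cv2

# Load the OpenVINO IR (xml + bin) through OpenCV's DNN module.
net = cv2.dnn.readNet("yolox.xml", "yolox.bin")

# Run through the Inference Engine / OpenVINO backend on the iGPU at FP16.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL_FP16)

img = cv2.imread("frame.jpg")
# Preprocessing is simplified here; the real code matches YOLOX's demo preproc.
blob = cv2.dnn.blobFromImage(img, 1.0, (512, 512), swapRB=False)
net.setInput(blob)
out = net.forward()
```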
I have made sure that I am running both at FP16 on the integrated GPU. I have tried converting the ONNX model to FP16. I have tried https://convertmodel.com/ to optimize the ONNX model. I have tried all of the different settings for graph_optimization_level. None of that helped. Any idea what's going on?
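The onnxruntime side is roughly this (again simplified; the exact provider option keys may differ by onnxruntime/OpenVINO EP version, and this is where I have been toggling graph_optimization_level and the FP16 device type):

```python
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
# I have tried ORT_DISABLE_ALL, ORT_ENABLE_BASIC, ORT_ENABLE_EXTENDED, and ORT_ENABLE_ALL here.
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

sess = ort.InferenceSession(
    "yolox.onnx",
    sess_options=so,
    providers=["OpenVINOExecutionProvider"],
    # GPU_FP16 asks the OpenVINO EP to run on the iGPU at half precision.
    provider_options=[{"device_type": "GPU_FP16"}],
)

# Dummy 512x512 input; the real pipeline feeds the same preprocessed frames as above.
x = np.zeros((1, 3, 512, 512), dtype=np.float32)
input_name = sess.get_inputs()[0].name
out = sess.run(None, {input_name: x})
```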