Why is OpenVINO EP faster than CPU EP? #13389
Unanswered
PhilippShemetov asked this question in Other Q&A

Hello, I have a question. Why is the OpenVINO EP (CPU, FP32) faster than the ONNX Runtime CPU EP without GRAPH_OPTIMIZATION? What is going on inside OpenVINO? I only see tensor transformations and fusions in the OpenVINO code, and I can't find the part of the code that is responsible for the optimization.
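For reference, a minimal sketch of the setup being compared, assuming a local `model.onnx`, a single input named `input` of shape 1x3x224x224, and the `onnxruntime-openvino` build installed; the accepted `device_type` string differs between ORT/OpenVINO EP versions:

```python
# Sketch of the comparison described above (model path, input name/shape,
# and device_type are assumptions -- adjust to your model and ORT version).
import time

import numpy as np
import onnxruntime as ort

MODEL = "model.onnx"  # assumption: path to the model being benchmarked
feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}  # assumption

def bench(providers, provider_options=None, disable_graph_opt=False, runs=100):
    so = ort.SessionOptions()
    if disable_graph_opt:
        # Turns off ONNX Runtime's own graph optimizations only; it has no
        # effect on whatever the OpenVINO backend does internally.
        so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
    sess = ort.InferenceSession(
        MODEL, sess_options=so, providers=providers, provider_options=provider_options
    )
    sess.run(None, feed)  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, feed)
    return (time.perf_counter() - start) / runs

cpu_ms = bench(["CPUExecutionProvider"], disable_graph_opt=True) * 1e3
# "CPU_FP32" mirrors the "CPU F32" setup mentioned above; newer versions
# may expect a different device_type value.
ov_ms = bench(
    ["OpenVINOExecutionProvider"], provider_options=[{"device_type": "CPU_FP32"}]
) * 1e3
print(f"CPU EP (graph opt disabled): {cpu_ms:.2f} ms/run, OpenVINO EP: {ov_ms:.2f} ms/run")
```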
Replies: 1 comment
You can read more about the OpenVINO EP here: https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html. When using ONNX Runtime on Intel devices with OpenVINO, the OpenVINO EP is expected to be optimized for that hardware and to perform better than the default CPU provider.
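In case it helps to see where the speedup comes from, a sketch (model path assumed) of how to check which nodes ONNX Runtime hands to the OpenVINO EP versus keeps on the CPU EP; verbose logging prints node placement during session initialization, though the exact log output depends on the ORT version:

```python
# Sketch: inspect provider registration and node placement via verbose logs.
import onnxruntime as ort

so = ort.SessionOptions()
so.log_severity_level = 0  # VERBOSE logging; node placement is printed at init

sess = ort.InferenceSession(
    "model.onnx",  # assumption: your model file
    sess_options=so,
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # execution providers actually registered for this session
```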