Execute different operations of a NN model graph with different hardware accelerators while using Onnxruntime #10661
Unanswered
ashwinjosephk asked this question in Other Q&A
Replies: 0 comments
Hi,
I am trying to use NNAPI via ONNX Runtime for NN model inferencing on an Android device. Based on this YouTube video: https://www.youtube.com/watch?v=Ij5MoUnLQ0E it appears to be possible to specify which hardware accelerators run which operators in the model. Any guidance on how to proceed would be greatly appreciated. I am currently using C++ for inferencing with ONNX Runtime.
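For context, a minimal sketch of registering the NNAPI execution provider through the ONNX Runtime C++ API is below. This assumes the ONNX Runtime Android package (which ships `nnapi_provider_factory.h`); the model path `model.onnx` is a placeholder. ONNX Runtime partitions the graph itself: operators the NNAPI EP supports are assigned to NNAPI (and from there to whatever accelerator the Android NNAPI driver chooses), and the remaining operators fall back to the default CPU provider. Per-operator accelerator assignment beyond this EP-level partitioning is handled by NNAPI, not by the ORT API directly.

```cpp
// Hedged sketch: enabling the NNAPI execution provider in ONNX Runtime (C++).
// Requires the ONNX Runtime Android build; will not compile on desktop.
#include <onnxruntime_cxx_api.h>
#include <nnapi_provider_factory.h>  // part of the ORT Android package

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "nnapi-demo");
  Ort::SessionOptions session_options;

  // Optional NNAPI flags, e.g. prefer fp16 or disable NNAPI's own CPU path
  // so that unsupported ops fall back to ORT's CPU provider instead.
  uint32_t nnapi_flags = 0;
  // nnapi_flags |= NNAPI_FLAG_USE_FP16;
  // nnapi_flags |= NNAPI_FLAG_CPU_DISABLED;

  // Register NNAPI ahead of the default CPU EP; ORT assigns NNAPI-capable
  // subgraphs to NNAPI and leaves the rest on CPU.
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Nnapi(
      session_options, nnapi_flags));

  // "model.onnx" is a placeholder path.
  Ort::Session session(env, "model.onnx", session_options);
  // ... create input tensors and call session.Run(...) as usual ...
  return 0;
}
```

With verbose logging enabled on the `Ort::Env`, ONNX Runtime reports which nodes were assigned to the NNAPI EP, which is a practical way to check how the graph was partitioned.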