NNAPI flag ignored #9177
-
I've been trying to benchmark NNAPI as compared to CPU and some other frameworks, but whenever I try to make it use NNAPI (…) In the Java project I used (…). Is there any way I can debug this? (I've not found any way to output which EP is actually used.) Or perhaps there's some known issue I'm not aware of? It seems like some combination of all these attempts should have worked. I have tested with: (…)
Edit: I should also mention I've tried all NNAPI flag combinations (enable fp16, disable cpu, nchw), which had no effect.
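For reference, the NNAPI EP flags are plain bit flags that get OR-ed into a single mask before being passed to the provider. The sketch below is illustrative only: the constant values are taken from `nnapi_provider_factory.h` in the ONNX Runtime sources, and the session-setup call mentioned in the comment (`OrtSession.SessionOptions.addNnapi`) is the Java-side wrapper; this snippet just demonstrates how the combinations the question describes compose.

```java
// Sketch: how the NNAPI EP flags combine into the bitmask the native API
// expects. Constant values mirror nnapi_provider_factory.h; in the Java API
// you would normally pass these via OrtSession.SessionOptions.addNnapi(...)
// rather than building the mask yourself.
public class NnapiFlags {
    public static final int NNAPI_FLAG_USE_NONE     = 0x000;
    public static final int NNAPI_FLAG_USE_FP16     = 0x001; // "enable fp16"
    public static final int NNAPI_FLAG_USE_NCHW     = 0x002; // "nchw"
    public static final int NNAPI_FLAG_CPU_DISABLED = 0x004; // "disable cpu"

    // OR together any subset of the flags above.
    public static int combine(int... flags) {
        int mask = NNAPI_FLAG_USE_NONE;
        for (int f : flags) mask |= f;
        return mask;
    }

    public static void main(String[] args) {
        int mask = combine(NNAPI_FLAG_USE_FP16, NNAPI_FLAG_CPU_DISABLED);
        System.out.println(mask); // 0x001 | 0x004 = 5
    }
}
```

Note that the flags only tune how the NNAPI EP behaves once it is selected; they cannot force nodes onto NNAPI that the EP does not support, which is consistent with them appearing to have no effect.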
Replies: 3 comments
-
There are some log messages that may be useful. You may need to set the log level to a more verbose level to see them.

Statistics about how many nodes the NNAPI EP supports (INFO log level):
onnxruntime/onnxruntime/core/providers/nnapi/nnapi_builtin/nnapi_execution_provider.cc
Lines 200 to 212 in 430e80e

Information about node EP assignment (VERBOSE log level):
onnxruntime/onnxruntime/core/session/inference_session.cc
Lines 908 to 920 in 430e80e
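Once those VERBOSE node-placement lines are captured (e.g. from logcat), a small tally makes it obvious whether anything actually landed on NNAPI. The line format in this sketch is an assumption modeled on the placement output, where each execution provider name appears in square brackets; adjust the regex to whatever your ORT build actually prints.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: count how many captured log lines mention each execution provider.
// Assumes provider names appear as "[SomethingExecutionProvider]" in the log.
public class EpPlacementTally {
    private static final Pattern PROVIDER =
            Pattern.compile("\\[(\\w+ExecutionProvider)\\]");

    public static Map<String, Integer> tally(List<String> logLines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : logLines) {
            Matcher m = PROVIDER.matcher(line);
            while (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Illustrative lines; real output will differ in detail.
        List<String> sample = new ArrayList<>(Arrays.asList(
                "Provider: [NnapiExecutionProvider]: [Conv_0, Relu_1]",
                "Provider: [CPUExecutionProvider]: [Softmax_5]"));
        System.out.println(tally(sample));
    }
}
```

If every line tallies under `CPUExecutionProvider`, the NNAPI EP was registered but took no nodes, which matches the symptom described in the question.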
-
ResNet50 is supported by our NNAPI execution provider, and can take advantage of the hardware accelerators in the Samsung S20.
-
I am seeing similar behavior: my model's performance is the same regardless of the flags I use; the input is fixed size and NCHW. It would be good to have a general "tips & tricks" guide for optimizing performance on Android/NNAPI.
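One general tip when comparing EP configurations like this: discard the first few runs before timing, since NNAPI compiles the model on first use, and report a median rather than a mean so outliers don't mask a real difference. A minimal harness, with a hypothetical workload standing in for the actual `session.run(...)` call:

```java
import java.util.Arrays;

// Sketch: a minimal latency harness for comparing EP configurations.
// Warmup runs absorb one-time costs (e.g. NNAPI model compilation);
// the median of the remaining runs is reported.
public class LatencyBench {
    public static double medianMillis(Runnable run, int warmup, int iters) {
        for (int i = 0; i < warmup; i++) {
            run.run(); // discard: first-run compilation / cache effects
        }
        double[] ms = new double[iters];
        for (int i = 0; i < iters; i++) {
            long t0 = System.nanoTime();
            run.run();
            ms[i] = (System.nanoTime() - t0) / 1e6;
        }
        Arrays.sort(ms);
        return ms[iters / 2]; // median is robust to scheduler spikes
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for session.run(inputs) on a real model.
        Runnable workload = () -> {
            long s = 0;
            for (int i = 0; i < 100_000; i++) s += i;
        };
        System.out.println("median ms: " + medianMillis(workload, 3, 11));
    }
}
```

If the median is identical across flag combinations, checking the VERBOSE node-placement log is the quickest way to confirm whether NNAPI is handling any nodes at all.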