Replies: 1 comment
-
Let me know if I can do a PR regarding this.
-
Hi Community,
I am currently studying the quantization API of ONNXRuntime, and I got curious when I noticed that `quantize_static` offers no execution provider selection, whereas `CalibratorBase` has a `set_execution_provider` function:

onnxruntime/onnxruntime/python/tools/quantization/calibrate.py (Line 70 in b713855)
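To make it concrete, this is the kind of provider selection I mean. As far as I can tell, `set_execution_provider` ultimately just decides which providers the calibrator's `InferenceSession` runs on; the snippet below is plain `onnxruntime` usage with a placeholder model path, not the calibrator code itself:

```python
import onnxruntime as ort

# A regular InferenceSession with explicit execution providers ("model.onnx" is a
# placeholder path). This providers list is what I would like to be able to hand
# down to the calibration session as well.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # CPU as fallback
)
print(sess.get_providers())  # shows which providers were actually enabled
```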
Wouldn't adding this support to `quantize_static` make quantization for a given execution provider better, since the inferred outputs, from which the calibration ranges are obtained, would become more execution provider/hardware driven? I was thinking of something along the lines of the sketch below.
I am really curious why the ONNXRuntime contributors didn't add this, as it seems quite simple. Please let me know if I am missing something.