MMPose Model Quantization #2424
gusmcarreira asked this question in Q&A (unanswered)
Hi there,
I am trying to speed up inference of my MMPose model, and for that I am using PyTorch's quantization. The code is as follows:
But when I then run inference with the quantized model, its results differ significantly from the original model's. Any help would be appreciated.
Kind regards
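The original snippet did not survive in the post above. For context, here is a minimal sketch of PyTorch post-training dynamic quantization, the most common first attempt for speeding up CPU inference. The model below is a small hypothetical stand-in, not the asker's MMPose network; the same call pattern applies to any `nn.Module` with `Linear` layers.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pose-regression head (17 keypoint scores).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 17))
model.eval()

# Dynamic quantization: Linear weights are converted to int8 ahead of
# time; activations are quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 32)
with torch.no_grad():
    ref = model(x)       # float32 reference output
    out = quantized(x)   # int8-weight output

# The two outputs should be close but not bit-identical; a large gap
# usually points at layers dynamic quantization does not cover (e.g.
# convolutions, which need static quantization with calibration).
drift = float((ref - out).abs().max())
```

Note that dynamic quantization only touches `nn.Linear` (and a few RNN types); convolutional backbones such as those in MMPose need static quantization with a calibration pass, and skipping that step is a common cause of wildly different results.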