Fix the onnx checker to use model path when model size > 2gib (#502)
## What does this PR do?
**Type of change:** Bug Fix
**Overview:** Quantization of the fp32 Whisper large model failed with the following error:
```
ValueError: This protobuf of onnx model is too large (>2GiB). Call check_model with model path instead.
```
The root cause is that the ONNX checker only accepts an in-memory model object when the serialized model is under 2 GiB; larger models must be passed to the checker by file path. This PR changes trt_utils.py to select the checker input based on the model size, as sketched below.
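A minimal sketch of the size-based dispatch described above (the function and variable names here are illustrative, not the exact code in trt_utils.py):

```python
import onnx

TWO_GIB = 2 * 1024**3  # 2 GiB protobuf serialization limit


def check_onnx_model(model: onnx.ModelProto, model_path: str) -> None:
    """Run the ONNX checker, passing the path instead of the proto for large models."""
    if model.ByteSize() > TWO_GIB:
        # Models over 2 GiB cannot be serialized as a single protobuf,
        # so check_model must be given the path to the saved model.
        onnx.checker.check_model(model_path)
    else:
        onnx.checker.check_model(model)
```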
## Testing
Re-ran quantization of the fp32 Whisper large model with the fix applied; it completes with no errors.
Signed-off-by: Hrishith Thadicherla <[email protected]>