Enable quantization as default for XNNPack for previous failing models (pytorch#6242)
Summary:
Pull Request resolved: pytorch#6242
Since master migrated aot_compiler to use to_edge_transform_and_lower in a previous change (pytorch#6026), XNNPACK quantization options can now be enabled by default for the following models:
- Quantized ViT
- Quantized Mobilebert
- Quantized Emformer Predict
- Quantized Emformer Transcribe
Reviewed By: digantdesai
Differential Revision: D64081319
fbshipit-source-id: 4e8ff77af442dfded043c5a5583466afec6beb4e