Replies: 1 comment
- A similar issue on PyTorch.
-
Hi,
I am trying to post-training quantize the UNet:

```python
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.fx.custom_config import PrepareCustomConfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

backend = "qnnpack" if arm else "x86"
qconfig = get_default_qconfig(backend)
qconfig_dict = {"": qconfig}
prepare_custom_config = PrepareCustomConfig()
prepare_custom_config.set_non_traceable_module_names(["ConvTranspose2d"])
prepared_model = prepare_fx(
    model,
    qconfig_dict,
    example_inputs=example_input,
    prepare_custom_config=prepare_custom_config,
)
quantized_model = convert_fx(prepared_model)
```
But I got the following error:

```
AssertionError: Per channel weight observer is not supported yet for ConvTranspose{n}d.
```
With

```python
prepare_custom_config.set_non_traceable_module_names(["ConvTranspose2d"])
```

I tried not to quantize this layer, but the error remains. Does anyone know how I can still exclude it?
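Not an answer from this thread, but one standard way in FX graph mode quantization to leave a module type in float is to map it to a qconfig of `None` via `QConfigMapping.set_object_type`, instead of marking it non-traceable. Note that `set_non_traceable_module_names` expects fully-qualified submodule names (e.g. `"decoder.up1"`), not class names; `set_non_traceable_module_classes` is the variant that takes classes. A minimal sketch, with a tiny `TinyUpBlock` standing in for the real UNet (the model, shapes, and backend choice here are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QConfigMapping, get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

class TinyUpBlock(nn.Module):
    """Hypothetical stand-in for the UNet: one conv followed by a transposed conv."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose2d(8, 3, kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(self.conv(x))

model = TinyUpBlock().eval()
example_input = (torch.randn(1, 3, 16, 16),)

# Global default qconfig ("fbgemm" here; the thread used "x86"/"qnnpack"
# depending on target), but ConvTranspose2d mapped to None -> kept in float.
qconfig_mapping = (
    QConfigMapping()
    .set_global(get_default_qconfig("fbgemm"))
    .set_object_type(nn.ConvTranspose2d, None)
)

prepared = prepare_fx(model, qconfig_mapping, example_inputs=example_input)
prepared(*example_input)  # calibration pass
quantized = convert_fx(prepared)
out = quantized(*example_input)
```

With the `None` mapping, no per-channel weight observer is ever attached to the transposed convolution, so the assertion above should not trigger; the surrounding ops are still quantized under the global qconfig.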