Replies: 1 comment
Hi @hcqylymzc, could you clarify a bit what exactly your use-case is? If you mean you just want to replace the quantizers placed during `QuantizationSimModel` instantiation, you can do so directly:

```python
# 1) Instantiate Quantsim with whatever args you want
sim = QuantizationSimModel(model, dummy_input, ...)

# 2) Replace whatever quantizers you want
sim.model.conv1.param_quantizers["weight"] = Q.affine.QuantizeDequantize(shape, bitwidth, symmetric=True)
sim.model.conv1.output_quantizers[0] = Q.affine.QuantizeDequantize((), bitwidth, symmetric=False)

# 3) Continue with the regular workflow (e.g., compute_encodings and export) as normal
sim.compute_encodings(forward_pass_callback, forward_pass_callback_args)
sim.export(path, filename_prefix, dummy_input)
```
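Conceptually, each `Q.affine.QuantizeDequantize` module performs fake quantization: it rounds a value onto an integer grid defined by a scale (and offset), clamps it to the representable range for the given bitwidth, and maps it back to the real domain. Below is a rough per-value sketch of that affine math in plain Python, as an illustration only, not the AIMET implementation (which operates on tensors and learns its encodings during `compute_encodings`):

```python
def quantize_dequantize(x: float, scale: float, offset: int = 0,
                        bitwidth: int = 8, symmetric: bool = True) -> float:
    """Fake-quantize one value: round onto the integer grid, clamp, map back."""
    if symmetric:
        # Signed integer grid centered at zero, e.g. [-128, 127] for 8 bits
        qmin, qmax = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        # Unsigned grid shifted by an offset, e.g. [0, 255] for 8 bits
        qmin, qmax = 0, 2 ** bitwidth - 1
    q = round(x / scale) + offset      # quantize to the integer grid
    q = max(qmin, min(qmax, q))        # clamp to the representable range
    return (q - offset) * scale        # dequantize back to the real domain

print(quantize_dequantize(0.5, scale=0.1))   # on-grid value round-trips
print(quantize_dequantize(20.0, scale=0.1))  # out-of-range value saturates at qmax
```

The quantize-dequantize round trip is what makes the simulated model differentiable end-to-end, so the same modules can be used both for calibration and for quantization-aware training.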
Hi, can I manually declare a quantizer through `aimet_torch.quantization` and then use it to customize where my network should be quantized, with the same parameters? If so, can you give me an example?

For example, I could use three `Q.affine.QuantizeDequantize` instances to define the quantization nodes for the input, output, and convolution, and insert quantization nodes throughout the model with a series of custom quantizers. But after inserting them, how do I make this work with `QuantizationSimModel`?