Why is the saved model different from the PyTorch model after quantization-aware training #8791
Unanswered
yangyyt asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 2 comments
-
How can I save a QAT model the same way plain PyTorch does?
-
State_dict keys of the quant-aware model:
PyTorch:
layer_0.scale
layer_0.zero_point
layer_0._packed_params.dtype
layer_0._packed_params._packed_params
layer_1.scale
layer_1.zero_point
layer_1._packed_params.dtype
layer_1._packed_params._packed_params
layer_end.scale
layer_end.zero_point
layer_end._packed_params.dtype
layer_end._packed_params._packed_params
quant.scale
quant.zero_point
but PyTorch Lightning:
layer_0.weight
layer_0.bias
layer_0.activation_post_process.fake_quant_enabled
layer_0.activation_post_process.observer_enabled
layer_0.activation_post_process.scale
layer_0.activation_post_process.zero_point
layer_0.activation_post_process.activation_post_process.min_val
layer_0.activation_post_process.activation_post_process.max_val
layer_0.weight_fake_quant.fake_quant_enabled
layer_0.weight_fake_quant.observer_enabled
layer_0.weight_fake_quant.scale
layer_0.weight_fake_quant.zero_point
layer_0.weight_fake_quant.activation_post_process.min_vals
layer_0.weight_fake_quant.activation_post_process.max_vals
layer_0a.activation_post_process.fake_quant_enabled
layer_0a.activation_post_process.observer_enabled
layer_0a.activation_post_process.scale
layer_0a.activation_post_process.zero_point
layer_0a.activation_post_process.activation_post_process.min_val
layer_0a.activation_post_process.activation_post_process.max_val
layer_1.weight
layer_1.bias
layer_1.activation_post_process.fake_quant_enabled
layer_1.activation_post_process.observer_enabled
layer_1.activation_post_process.scale
layer_1.activation_post_process.zero_point
layer_1.activation_post_process.activation_post_process.min_val
layer_1.activation_post_process.activation_post_process.max_val
layer_1.weight_fake_quant.fake_quant_enabled
layer_1.weight_fake_quant.observer_enabled
layer_1.weight_fake_quant.scale
layer_1.weight_fake_quant.zero_point
layer_1.weight_fake_quant.activation_post_process.min_vals
layer_1.weight_fake_quant.activation_post_process.max_vals
layer_1a.activation_post_process.fake_quant_enabled
layer_1a.activation_post_process.observer_enabled
layer_1a.activation_post_process.scale
layer_1a.activation_post_process.zero_point
layer_1a.activation_post_process.activation_post_process.min_val
layer_1a.activation_post_process.activation_post_process.max_val
layer_end.weight
layer_end.bias
layer_end.activation_post_process.fake_quant_enabled
layer_end.activation_post_process.observer_enabled
layer_end.activation_post_process.scale
layer_end.activation_post_process.zero_point
layer_end.activation_post_process.activation_post_process.min_val
layer_end.activation_post_process.activation_post_process.max_val
layer_end.weight_fake_quant.fake_quant_enabled
layer_end.weight_fake_quant.observer_enabled
layer_end.weight_fake_quant.scale
layer_end.weight_fake_quant.zero_point
layer_end.weight_fake_quant.activation_post_process.min_vals
layer_end.weight_fake_quant.activation_post_process.max_vals
quant.activation_post_process.fake_quant_enabled
quant.activation_post_process.observer_enabled
quant.activation_post_process.scale
quant.activation_post_process.zero_point
quant.activation_post_process.activation_post_process.min_val
quant.activation_post_process.activation_post_process.max_val