Remove quantization functions from node quantization configs (#1477)
* remove activation_quantization_fn and activation_quantization_params_fn from NodeActivationQuantizationCfg
* remove weights_quantization_fn and weights_quantization_params_fn from WeightsAttrQuantizationConfig
model_compression_toolkit/core/common/framework_info.py (+4 −4)
@@ -52,19 +52,19 @@ class FrameworkInfo(ABC):
         no_quantization_ops: Layers that should not get quantized (e.g., Reshape, Transpose, etc.)

     Fields:
-        activation_quantizer_mapping (Dict[QuantizationMethod, Callable]): A dictionary mapping from QuantizationMethod to a quantization function.
         kernel_channels_mapping (Dict): Dictionary from a layer to a tuple of its kernel in/out channels indices.
         kernel_ops_attribute_mapping (Dict): Dictionary from a framework operator to its weight attribute to quantize.
         out_channel_axis_mapping (Dict): Dictionary of output channels of the model's layers (for computing statistics per-channel).
         _layer_min_max_mapping (Dict[Any, tuple]): Dictionary from a layer to its min/max output values.
+        activation_quantizer_factory_mapping: A mapping from QuantizationMethod to a factory function that accepts activation bitwidth and a dict of quantization params, and returns the corresponding quantization function.
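A minimal sketch of the factory pattern the new field describes: a mapping from a quantization method to a factory that takes the activation bitwidth and a params dict and returns a concrete quantization function. The enum values, factory signature, and symmetric-quantization logic below are illustrative assumptions for this sketch, not MCT's actual implementation:

```python
from enum import Enum
from typing import Any, Callable, Dict


# Hypothetical stand-in for MCT's QuantizationMethod enum.
class QuantizationMethod(Enum):
    POWER_OF_TWO = 0
    SYMMETRIC = 1


def _symmetric_quantizer_factory(n_bits: int,
                                 params: Dict[str, Any]) -> Callable[[float], float]:
    """Build a symmetric quantizer from a bitwidth and a params dict (assumed keys)."""
    threshold = params["threshold"]
    levels = 2 ** (n_bits - 1)

    def quantize(x: float) -> float:
        # Uniform symmetric quantization: round to the nearest step, then clip.
        step = threshold / levels
        q = round(x / step)
        q = max(-levels, min(levels - 1, q))
        return q * step

    return quantize


# The mapping itself: quantization method -> factory, resolved later into a
# concrete quantization function once bitwidth and params are known.
activation_quantizer_factory_mapping: Dict[QuantizationMethod, Callable] = {
    QuantizationMethod.SYMMETRIC: _symmetric_quantizer_factory,
}

# Resolving a concrete quantizer, e.g. at graph-preparation time:
factory = activation_quantizer_factory_mapping[QuantizationMethod.SYMMETRIC]
quantizer = factory(8, {"threshold": 1.0})
```

Keeping only the factory in the framework info (rather than storing a ready-made quantization function in each node's config, as the removed fields did) means the function is materialized from the node's final bitwidth and params in one place, instead of being carried around inside every config object.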