kernel_ops_attribute_mapping (Dict): Dictionary from a framework operator to its weight attribute to quantize.
out_channel_axis_mapping (Dict): Dictionary from a framework operator to the axis of its output channels (used for computing per-channel statistics).
_layer_min_max_mapping (Dict[Any, tuple]): Dictionary from a layer to its min/max output values.
activation_quantizer_factory_mapping: A mapping from QuantizationMethod to a factory function that accepts activation bitwidth and a dict of quantization params, and returns the corresponding quantization function.
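For orientation, the sketch below shows roughly what such framework-information mappings could look like for Keras. The layer classes, axis values, quantization parameters, and the power-of-two factory are illustrative assumptions, not the dictionaries defined in this PR.

```python
# Hedged sketch: illustrative shapes of the mappings described above,
# not the dictionaries defined in this PR.
import numpy as np
from tensorflow.keras.layers import Conv2D, Dense

# Framework operator -> name of the weight attribute to quantize.
kernel_ops_attribute_mapping = {Conv2D: 'kernel', Dense: 'kernel'}

# Layer type -> axis of its output channels (channels-last assumed),
# used for per-channel statistics.
out_channel_axis_mapping = {Conv2D: -1, Dense: -1}

def power_of_two_factory(activation_n_bits, quantization_params):
    """Builds a symmetric power-of-two quantization function (sketch)."""
    threshold = quantization_params['threshold']
    delta = threshold / (2 ** (activation_n_bits - 1))

    def quantize(x: np.ndarray) -> np.ndarray:
        return np.clip(np.round(x / delta) * delta, -threshold, threshold - delta)

    return quantize

# QuantizationMethod -> factory. A string key is used here for brevity;
# the real mapping would be keyed by QuantizationMethod members.
activation_quantizer_factory_mapping = {'POWER_OF_TWO': power_of_two_factory}
```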
model_compression_toolkit/core/common/statistics_correction/compute_activation_bias_correction_of_graph.py
zero_padding_node: ZeroPadding2D node that may be in the graph before the linear layer.
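As a point of reference, the pattern this parameter refers to is an explicit ZeroPadding2D node feeding the linear layer. A minimal Keras model showing that pattern (input shape, padding, and filter count are arbitrary) could look like:

```python
# Minimal Keras sketch of the pattern described above: an explicit
# ZeroPadding2D node sitting between the input and the linear (Conv2D) layer.
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.ZeroPadding2D(padding=1)(inputs)         # the zero_padding_node
outputs = tf.keras.layers.Conv2D(16, 3, padding='valid')(x)  # the linear layer
model = tf.keras.Model(inputs, outputs)
```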
params_search_quantization_fn: Function to quantize a numpy tensor using a framework (tf/torch) quantization method. Needed so the parameter search can better estimate the expected loss.
Returns:
Graph after applying shift negative correction on selected activations.
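For illustration, a function matching the params_search_quantization_fn description above could wrap a framework-level quantizer as in the sketch below; the exact signature, the symmetric quantization scheme, and the threshold parameter are assumptions for the example, not the toolkit's actual implementation.

```python
# Hedged sketch of a params_search_quantization_fn: quantize a NumPy tensor
# with framework (here: torch) ops so the parameter search can estimate the
# expected quantization loss. Signature and parameters are assumptions.
import numpy as np
import torch

def params_search_quantization_fn(tensor: np.ndarray, n_bits: int, threshold: float) -> np.ndarray:
    t = torch.from_numpy(tensor)
    delta = threshold / (2 ** (n_bits - 1))
    # Round to integer levels, clamp to the signed range, and rescale.
    q = torch.clamp(torch.round(t / delta), -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)
    return (q * delta).numpy()

# Example: estimate the expected loss of a candidate threshold.
x = np.random.randn(1000).astype(np.float32)
mse = np.mean((x - params_search_quantization_fn(x, n_bits=8, threshold=4.0)) ** 2)
```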