Commit e2d8126

Update FAQ.md
1 parent 4933f9c commit e2d8126


FAQ.md

Lines changed: 4 additions & 0 deletions
MCT supports both per-tensor and per-channel quantization, as [defined in TPC](h
In the object that configures the quantizer below:

* model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.AttributeQuantizationConfig()

Set the following parameter:

* weights_per_channel_threshold (bool) – Indicates whether to quantize the weights per-channel or per-tensor.

For more details, please refer to [this page](https://sonysemiconductorsolutions.github.io/mct-model-optimization/api/api_docs/modules/target_platform_capabilities.html#model_compression_toolkit.target_platform_capabilities.schema.mct_current_schema.AttributeQuantizationConfig.weights_per_channel_threshold).
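To see what this flag controls, here is a minimal plain-Python sketch (not MCT code) of the difference between a per-tensor threshold and per-channel thresholds for symmetric quantization: per-tensor uses one max-magnitude value for the whole weight tensor, while per-channel computes one per output channel.

```python
# Toy illustration of per-tensor vs per-channel thresholds.
# This is NOT MCT's implementation -- just a sketch of what
# weights_per_channel_threshold selects between.

def per_tensor_threshold(weights):
    """One threshold shared by the whole tensor: max |w| over all values."""
    return max(abs(w) for row in weights for w in row)

def per_channel_thresholds(weights):
    """One threshold per output channel (here: per row): max |w| per row."""
    return [max(abs(w) for w in row) for row in weights]

# Two channels with very different dynamic ranges.
weights = [
    [0.10, -0.05, 0.08],   # channel 0: small weights
    [4.00, -3.50, 2.75],   # channel 1: large weights
]

print(per_tensor_threshold(weights))    # 4.0 -- channel 0 shares channel 1's range
print(per_channel_thresholds(weights))  # [0.1, 4.0] -- each channel gets its own range
```

With a single per-tensor threshold, the small-magnitude channel is forced onto the large channel's quantization grid; per-channel thresholds avoid that.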
In QAT, the following object is used to set up a weight-learnable quantizer:

* model_compression_toolkit.trainable_infrastructure.TrainableQuantizerWeightsConfig()

Set the following parameter:

* weights_per_channel_threshold (bool) – Whether to quantize the weights per-channel or not (per-tensor).

For more details, please refer to [this page](https://sonysemiconductorsolutions.github.io/mct-model-optimization/api/api_docs/modules/trainable_infrastructure.html#trainablequantizerweightsconfig).
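Why per-channel thresholds usually help can be shown with a short self-contained sketch (again plain Python, not MCT internals): quantize the same toy weights with one shared threshold versus a threshold per channel and compare the mean squared error.

```python
# Sketch: symmetric uniform quantization to n_bits with a given threshold.
# scale = threshold / 2^(n_bits - 1); values are rounded and clamped to the grid.

def quantize(w, threshold, n_bits=8):
    scale = threshold / (2 ** (n_bits - 1))
    q = round(w / scale)
    q = max(-(2 ** (n_bits - 1)), min(2 ** (n_bits - 1) - 1, q))
    return q * scale

weights = [
    [0.10, -0.05, 0.08],   # small-range channel
    [4.00, -3.50, 2.75],   # large-range channel
]

def mse(weights, thresholds):
    errs = [(w - quantize(w, t)) ** 2
            for row, t in zip(weights, thresholds)
            for w in row]
    return sum(errs) / len(errs)

per_tensor = mse(weights, [4.0, 4.0])   # one threshold shared by both channels
per_channel = mse(weights, [0.1, 4.0])  # a threshold per channel

print(per_tensor, per_channel)  # per-channel error is smaller on this example
```

On these weights the small-range channel dominates the per-tensor error, because it is quantized on a grid sized for the large channel; the per-channel configuration removes that mismatch, which is what enabling weights_per_channel_threshold buys in practice.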
