
Commit 2d76c7f

Kaihui-intel committed: align format

Signed-off-by: Kaihui-intel <kaihui.tang@intel.com>

1 parent ba1ebc7

File tree

1 file changed: +2 −2 lines


docs/source/3x/PT_WeightOnlyQuant.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -178,8 +178,8 @@ model = convert(model, config) # after this step, the model is ready for W4A8 i
 | not_use_best_mse (bool) | Whether to use mean squared error | False |
 | dynamic_max_gap (int) | The dynamic maximum gap | -1 |
 | scale_dtype (str) | The data type of quantization scale to be used, different kernels have different choices | "float16" |
-| scheme (str) | A preset scheme that defines the quantization configurations. | "W4A16" |
-| layer_config (dict) | Layer-wise quantization config | None |
+| scheme (str) | A preset scheme that defines the quantization configurations. | "W4A16" |
+| layer_config (dict) | Layer-wise quantization config | None |

 ``` python
 # Quantization code
````
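For context on the two realigned table rows: a preset `scheme` string such as `"W4A16"` conventionally denotes 4-bit weights with 16-bit activations, and `layer_config` carries per-layer overrides on top of that preset. The sketch below only illustrates that reading; the `parse_scheme` helper and the `"lm_head"` override are hypothetical and are not part of the library's API.

```python
import re

def parse_scheme(scheme: str) -> dict:
    """Illustrative parse of a preset like "W4A16" into bit-widths.

    This is an assumption about what the string encodes (weight bits /
    activation bits), not the actual library implementation.
    """
    m = re.fullmatch(r"W(\d+)A(\d+)", scheme)
    if m is None:
        raise ValueError(f"unrecognized scheme: {scheme!r}")
    return {"weight_bits": int(m.group(1)), "act_bits": int(m.group(2))}

# A layer-wise config can then refine the preset for sensitive layers,
# e.g. keeping an output head in higher precision (layer name hypothetical):
layer_config = {"lm_head": {"weight_bits": 8}}

base = parse_scheme("W4A16")                 # {'weight_bits': 4, 'act_bits': 16}
head = {**base, **layer_config["lm_head"]}   # {'weight_bits': 8, 'act_bits': 16}
```

The override pattern mirrors the table's defaults: `scheme` sets the global policy, while `layer_config` (default `None`) selectively replaces fields per layer.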

0 commit comments
