Commit 78ea7c7

The following changes have been made:
- Text revisions
- File name corrections
- Parameter setting modifications
1 parent ce0694f commit 78ea7c7

File tree

1 file changed: +6, -6 lines changed


tutorials/notebooks/task_notebooks/pytorch/example_posenet_pytorch_post_training_quantization.ipynb renamed to tutorials/notebooks/task_notebooks/pytorch/example_posenet_pytorch_mixed_precision_ptq.ipynb

Lines changed: 6 additions & 6 deletions
@@ -9,10 +9,10 @@
   }
  },
  "source": [
-  "# PoseNet Post-Training Quantization in PyTorch using the Model Compression Toolkit(MCT)\n",
+  "# PoseNet Mixed-Precision Post-Training Quantization in PyTorch using the Model Compression Toolkit (MCT)\n",
   "\n",
   "## Overview\n",
-  "This quick-start guide explains how to use the **Model Compression Toolkit (MCT)** to quantize a PoseNet model. We will load a pre-trained model and quantize it using the MCT with **Post-Training Quantization (PTQ)**. \n",
+  "This quick-start guide explains how to use the **Model Compression Toolkit (MCT)** to quantize a PoseNet model. We will load a pre-trained model and quantize it using the MCT with **Mixed-Precision Post-Training Quantization (PTQ)**.\n",
   "\n",
   "## Summary\n",
   "In this tutorial, we will cover:\n",
@@ -181,7 +181,7 @@
   "- CALIB_ITER \n",
   "  This parameter allows you to set how many samples to use when generating representative data for quantization.\n",
   "- WEIGHTS_COMPRESSION_RATIO \n",
-  "  This parameter allows you to set the quantized ratio from the size of the 8-bit model's weights."
+  "  This parameter allows you to set the quantization ratio based on the weight size of the 8-bit model when using mixed-precision quantization."
  ]
 },
 {
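As an aside on what that ratio means in practice, here is a minimal illustrative sketch (the function name and numbers are hypothetical, not from the notebook): an 8-bit quantized model stores one byte per weight, and the compression ratio scales that baseline into a mixed-precision memory budget.

```python
# Illustrative sketch (not part of the notebook): how a weights compression
# ratio maps the 8-bit baseline weight size to a mixed-precision memory budget.
def target_weights_memory(num_weights: int, compression_ratio: float) -> float:
    """Return the target weights memory in bytes.

    An 8-bit quantized model stores 1 byte per weight, so the baseline
    size equals the weight count; the ratio scales that baseline down.
    """
    eight_bit_bytes = num_weights * 1  # 8 bits = 1 byte per weight
    return eight_bit_bytes * compression_ratio

# Example: a 10M-weight model with a 0.75 ratio targets 7.5 MB of weights.
print(target_weights_memory(10_000_000, 0.75))  # 7500000.0
```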
@@ -296,7 +296,7 @@
   "id": "b64c60b0",
   "metadata": {},
   "source": [
-  "In this class, we process the downloaded COCO's dataset for evaluation during quantization and for use in calibration."
+  "In this class, we process the downloaded COCO dataset for calibration during quantization and for use in evaluation."
  ]
 },
 {
@@ -485,13 +485,13 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 153,
+  "execution_count": null,
   "id": "f25783c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "configuration = mct.core.CoreConfig(\n",
-   "    mixed_precision_config=mct.core.MixedPrecisionQuantizationConfig(num_of_images=32))"
+   "    mixed_precision_config=mct.core.MixedPrecisionQuantizationConfig(num_of_images=CALIB_ITER))"
  ]
 },
 {
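For context, a hedged sketch of how this configuration is typically consumed downstream (an assumption, not the notebook's code: API names follow recent releases of `model_compression_toolkit`, and `ResourceUtilization` in particular replaced the older `KPI` class, so exact signatures may differ in your MCT version; `model`, `representative_data_gen`, `CALIB_ITER`, and `WEIGHTS_COMPRESSION_RATIO` are assumed to come from earlier notebook cells):

```python
import model_compression_toolkit as mct

# Mixed-precision search uses `num_of_images` calibration samples;
# the renamed notebook ties this to CALIB_ITER instead of hard-coding 32.
configuration = mct.core.CoreConfig(
    mixed_precision_config=mct.core.MixedPrecisionQuantizationConfig(
        num_of_images=CALIB_ITER))

# Mixed precision needs a memory budget: a fraction of the 8-bit weight
# size, following the WEIGHTS_COMPRESSION_RATIO description above.
eight_bit_weights_memory = sum(p.numel() for p in model.parameters())
resource_utilization = mct.core.ResourceUtilization(
    weights_memory=eight_bit_weights_memory * WEIGHTS_COMPRESSION_RATIO)

quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    model,
    representative_data_gen,
    target_resource_utilization=resource_utilization,
    core_config=configuration)
```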
