Commit 637e165 (1 parent: 79a7b16)

add ACT mobilenetv1 demo (PaddlePaddle#1109)
File tree: 4 files changed, +149 −1 lines

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
# Auto Compression Example for Image Classification Models

Contents:

- [1. Introduction](#1-introduction)
- [2. Benchmark](#2-benchmark)
- [3. Auto Compression Workflow](#3-auto-compression-workflow)
  - [3.1 Environment Setup](#31-environment-setup)
  - [3.2 Dataset Preparation](#32-dataset-preparation)
  - [3.3 Inference Model Preparation](#33-inference-model-preparation)
  - [3.4 Compress and Export the Model](#34-compress-and-export-the-model)
- [4. Inference Deployment](#4-inference-deployment)
- [5. FAQ](#5-faq)
## 1. Introduction

This example uses the image classification model MobileNetV1 to show how to run auto compression on a PaddleClas inference model. The compression strategies used in this example are quantization-aware training and distillation.
## 2. Benchmark

- MobileNetV1

| Model | Strategy | Top-1 Acc | Latency (ms), threads=4 |
|:------:|:------:|:------:|:------:|
| MobileNetV1 | Baseline | 70.90 | 39.041 |
| MobileNetV1 | Quantization + distillation | 70.49 | 29.238 |

- Test environment: `SDM710 2*A75(2.2GHz) 6*A55(1.7GHz)`
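For reference, the table implies that quantization plus distillation trades a 0.41-point Top-1 drop for roughly a 1.34x latency speedup; the arithmetic, computed directly from the numbers above:

```python
# Speedup and accuracy delta implied by the benchmark table above.
base_ms, quant_ms = 39.041, 29.238
speedup = base_ms / quant_ms
acc_drop = 70.90 - 70.49
print(round(speedup, 2), round(acc_drop, 2))  # 1.34 0.41
```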
## 3. Auto Compression Workflow

#### 3.1 Environment Setup

- python >= 3.6
- PaddlePaddle >= 2.2 (can be installed from the [Paddle website](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/linux-pip.html))
- PaddleSlim >= 2.3, or a suitable develop build
Install paddlepaddle:

```shell
# CPU
pip install paddlepaddle
# GPU
pip install paddlepaddle-gpu
```
Install paddleslim:

```shell
pip install paddleslim
```
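To confirm the installed versions meet the minimums above, dotted version strings can be compared numerically (a minimal sketch, not part of the demo; `meets_minimum` is a hypothetical helper name):

```python
# Compare dotted version strings numerically, e.g. to check
# paddle >= 2.2 and paddleslim >= 2.3 after installation.
def meets_minimum(version, minimum):
    as_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("2.3.0", "2.2"))  # True
print(meets_minimum("2.1.0", "2.2"))  # False
```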
#### 3.2 Dataset Preparation

This example runs auto compression on the ImageNet1k dataset by default. If your dataset is not in ImageNet1k format, see the [PaddleClas data preparation docs](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/data_preparation/classification_dataset.md).
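For reference, ImageNet1k-style label lists in PaddleClas are plain-text files with one `image_path label` pair per line; a minimal parser sketch (the sample path below is illustrative):

```python
# Parse a PaddleClas-style label list: each line is "<image_path> <label>".
def parse_label_list(lines):
    samples = []
    for line in lines:
        path, label = line.strip().rsplit(" ", 1)
        samples.append((path, int(label)))
    return samples

print(parse_label_list(["val/ILSVRC2012_val_00000001.JPEG 65"]))
# [('val/ILSVRC2012_val_00000001.JPEG', 65)]
```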
#### 3.3 Inference Model Preparation

An inference model consists of two files, `model.pdmodel` and `model.pdiparams`: the file with the `pdmodel` suffix is the model graph, and the file with the `pdiparams` suffix holds the weights.

Note: files named `__model__` and `__params__` correspond to the `model.pdmodel` and `model.pdiparams` files, respectively.

You can download inference models directly from the [PaddleClas pretrained model zoo](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/algorithm_introduction/ImageNet_models.md); the example below fetches MobileNetV1:
```shell
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/MobileNetV1_infer.tar
tar -zxvf MobileNetV1_infer.tar
```

You can also export an inference model yourself by following the [PaddleClas export docs](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/zh_CN/inference_deployment/export_model.md).
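After extraction, the directory should contain a matching model/params pair (`inference.pdmodel` / `inference.pdiparams` in this demo); a small stdlib check, where `find_inference_pair` is a hypothetical helper:

```python
import os

def find_inference_pair(model_dir):
    """Return the (.pdmodel, .pdiparams) file pair in model_dir, or None if missing."""
    files = set(os.listdir(model_dir))
    for name in files:
        if name.endswith(".pdmodel"):
            params = name[: -len(".pdmodel")] + ".pdiparams"
            if params in files:
                return os.path.join(model_dir, name), os.path.join(model_dir, params)
    return None
```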
#### 3.4 Compress and Export the Model

The distillation + quantization auto compression demo is launched via the run.py script, which uses the `paddleslim.auto_compression.AutoCompression` interface to apply quantization-aware training and distillation to the model. Set the model path, dataset path, and the distillation, quantization, and training parameters in the config file; once configured, start auto compression:
```shell
# single-GPU launch
python run.py \
--model_dir='MobileNetV1_infer' \
--model_filename='inference.pdmodel' \
--params_filename='inference.pdiparams' \
--save_dir='./save_quant_mobilev1/' \
--batch_size=128 \
--config_path='./configs/mobilev1.yaml' \
--data_dir='ILSVRC2012'

# multi-GPU launch
python -m paddle.distributed.launch run.py \
--model_dir='MobileNetV1_infer' \
--model_filename='inference.pdmodel' \
--params_filename='inference.pdiparams' \
--save_dir='./save_quant_mobilev1/' \
--batch_size=128 \
--config_path='./configs/mobilev1.yaml' \
--data_dir='ILSVRC2012'
```
## 4. Inference Deployment

- [Paddle Inference Python deployment](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/docs/deployment/inference/python_inference.md)
- [Paddle Inference C++ deployment](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/docs/deployment/inference/cpp_inference.md)
- [Paddle Lite deployment](https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/docs/deployment/lite/lite.md)

## 5. FAQ
Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
```yaml
Distillation:
  distill_lambda: 1.0
  distill_loss: l2_loss
  distill_node_pair:
  - teacher_softmax_0.tmp_0
  - softmax_0.tmp_0
  merge_feed: true
  teacher_model_dir: MobileNetV1_infer
  teacher_model_filename: inference.pdmodel
  teacher_params_filename: inference.pdiparams
Quantization:
  activation_bits: 8
  is_full_quantize: false
  activation_quantize_type: range_abs_max
  weight_quantize_type: abs_max
  not_quant_pattern:
  - skip_quant
  quantize_op_types:
  - conv2d
  - depthwise_conv2d
  weight_bits: 8
TrainConfig:
  epochs: 1
  eval_iter: 500
  learning_rate: 0.004
  optimizer: Momentum
  optim_args:
    weight_decay: 0.00003
  origin_metric: 0.70898
```
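With `weight_bits: 8` and `activation_bits: 8` in the config above, quantized ops store INT8 values instead of FP32, roughly a 4x reduction in weight storage for those ops (a back-of-the-envelope sketch; it ignores quantization scales and any layers left unquantized, such as those matching `not_quant_pattern`):

```python
# Rough weight-storage ratio for ops quantized from FP32 to INT8.
fp32_bits = 32
weight_bits = 8  # matches weight_bits in the config above
print(fp32_bits / weight_bits)  # 4.0
```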

demo/auto_compression/image_classification/run.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
```diff
 import os
 import sys
-sys.path[0] = os.path.join(os.path.dirname("__file__"), os.path.pardir)
+sys.path[0] = os.path.join(os.path.dirname("__file__"), os.path.pardir, os.path.pardir)
 import argparse
 import functools
 from functools import partial
```
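One subtlety in the changed line: `os.path.dirname("__file__")` is applied to the literal string `"__file__"` rather than the `__file__` variable, so it returns `''`, and the whole expression reduces to the grandparent-relative path (an observation about the existing code, easy to verify):

```python
import os

# dirname of the literal string "__file__" is '' (no directory part),
# so the join collapses to the relative path '../..' (on POSIX).
prefix = os.path.join(os.path.dirname("__file__"), os.path.pardir, os.path.pardir)
print(prefix == os.path.join(os.path.pardir, os.path.pardir))  # True
```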
Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@

```shell
# single-GPU launch
python run.py \
--model_dir='MobileNetV1_infer' \
--model_filename='inference.pdmodel' \
--params_filename='inference.pdiparams' \
--save_dir='./save_quant_mobilev1/' \
--batch_size=128 \
--config_path='./configs/mobilev1.yaml' \
--data_dir='ILSVRC2012'

# multi-GPU launch
# python -m paddle.distributed.launch run.py \
# --model_dir='MobileNetV1_infer' \
# --model_filename='inference.pdmodel' \
# --params_filename='inference.pdiparams' \
# --save_dir='./save_quant_mobilev1/' \
# --batch_size=128 \
# --config_path='./configs/mobilev1.yaml' \
# --data_dir='/workspace/dataset/ILSVRC2012/'
```