7 files changed: +17 −11 lines
paddleslim/auto_compression

@@ -54,9 +54,9 @@ python tools/export_model.py \
 ```
 cd PaddleSlim/demo/auto-compression/
 ```
-Use the [eval.py](../quant_post/eval.py) script to get the classification accuracy of the model:
+Use the [eval.py](../quant/quant_post/eval.py) script to get the classification accuracy of the model:
 ```
-python ../quant_post/eval.py --model_path infermodel_mobilenetv2 --model_name inference.pdmodel --params_name inference.pdiparams
+python ../quant/quant_post/eval.py --model_path infermodel_mobilenetv2 --model_name inference.pdmodel --params_name inference.pdiparams
 ```
 The accuracy output is:
 ```
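The accuracy that eval.py reports is top-1 classification accuracy over the validation set. As a minimal sketch of that metric in plain Python (a hypothetical helper, not the actual eval.py, which runs a Paddle inference model over imagenet_reader batches):

```python
def top1_accuracy(logits, labels):
    """Fraction of samples whose argmax prediction matches the label.

    `logits` is a list of per-class scores per sample, `labels` the
    integer class ids -- a stand-in for what an evaluation script
    aggregates over the validation set.
    """
    correct = 0
    for row, label in zip(logits, labels):
        pred = max(range(len(row)), key=row.__getitem__)  # argmax
        if pred == label:
            correct += 1
    return correct / len(labels)

logits = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
labels = [1, 0, 0]
print(top1_accuracy(logits, labels))  # 2 of 3 correct
```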
@@ -41,7 +41,7 @@ Distillation:
       - teacher_linear_1.tmp_0
       - linear_1.tmp_0
   merge_feed: true
-  teacher_model_dir: ./MobileNetV2_ssld_infer
+  teacher_model_dir: ./infermodel_mobilenetv2
   teacher_model_filename: inference.pdmodel
   teacher_params_filename: inference.pdiparams
 Quantization:
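Once parsed, the Distillation block above is just a mapping whose directory and filename keys get joined before the teacher model is loaded. A small sketch of that join, using the values from this diff (the `teacher_model_paths` helper is hypothetical, not PaddleSlim API):

```python
import os

# Parsed form of the Distillation keys changed in this diff.
distillation_cfg = {
    "merge_feed": True,
    "teacher_model_dir": "./infermodel_mobilenetv2",
    "teacher_model_filename": "inference.pdmodel",
    "teacher_params_filename": "inference.pdiparams",
}

def teacher_model_paths(cfg):
    """Join dir + filenames the way a loader would before opening them."""
    model = os.path.join(cfg["teacher_model_dir"], cfg["teacher_model_filename"])
    params = os.path.join(cfg["teacher_model_dir"], cfg["teacher_params_filename"])
    return model, params

print(teacher_model_paths(distillation_cfg))
```

The point of the change itself: `teacher_model_dir` now points at the locally exported `infermodel_mobilenetv2` directory rather than a separately downloaded `MobileNetV2_ssld_infer` copy.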
@@ -29,7 +29,7 @@ Distillation:
       - teacher_linear_147.tmp_1
       - linear_147.tmp_1
   merge_feed: true
-  teacher_model_dir: ../auto-compression_origin/static_bert_models
+  teacher_model_dir: static_bert_models
   teacher_model_filename: bert.pdmodel
   teacher_params_filename: bert.pdiparams
 Prune:
@@ -1,7 +1,9 @@
-python3.7 demo_glue.py --config_path ./configs/NLP/bert_qat_dis.yaml --task 'sst-2' \
-    --model_dir='../auto-compression_origin/static_bert_models/' \
+python3.7 demo_glue.py \
+    --model_dir='./static_bert_models/' \
     --model_filename='bert.pdmodel' \
     --params_filename='bert.pdiparams' \
     --save_dir='./save_asp_bert/' \
-    --devices='gpu' \
+    --devices='cpu' \
     --batch_size=32 \
+    --task='sst-2' \
+    --config_path='./configs/NLP/bert_asp_dis.yaml'
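The flags in this run script map onto command-line arguments parsed by demo_glue.py. The real script builds its parser through `utility.add_arguments`, so the plain-argparse sketch below is an assumption about shape, not the actual code:

```python
import argparse

# Hypothetical mirror of the flags the run script passes to demo_glue.py.
parser = argparse.ArgumentParser(description="auto-compression GLUE demo flags")
parser.add_argument("--model_dir", type=str)
parser.add_argument("--model_filename", type=str)
parser.add_argument("--params_filename", type=str)
parser.add_argument("--save_dir", type=str)
parser.add_argument("--devices", type=str, default="gpu")
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--task", type=str)
parser.add_argument("--config_path", type=str)

args = parser.parse_args([
    "--model_dir=./static_bert_models/",
    "--devices=cpu",
    "--batch_size=32",
    "--task=sst-2",
    "--config_path=./configs/NLP/bert_asp_dis.yaml",
])
print(args.devices, args.batch_size, args.task)
```

Because every value is a named flag, moving `--task` and `--config_path` to the end of the script (as this diff does) changes nothing about how they parse; the substantive changes are the local model path, the `cpu` device, and the ASP config.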
@@ -1,7 +1,8 @@
-python3.7 demo_imagenet.py --config_path ./configs/CV/mbv2_ptq_hpo.yaml \
-    --model_dir='../auto-compression_origin/MobileNetV2_ssld_infer/' \
+python3.7 demo_imagenet.py \
+    --model_dir='infermodel_mobilenetv2' \
     --model_filename='inference.pdmodel' \
     --params_filename='./inference.pdiparams' \
     --save_dir='./save_qat_mbv2/' \
-    --devices='gpu' \
-    --batch_size=64 \
+    --devices='cpu' \
+    --batch_size=2 \
+    --config_path='./configs/CV/mbv2_ptq_hpo.yaml'
@@ -21,6 +21,8 @@
 import paddle
 sys.path[0] = os.path.join(
     os.path.dirname("__file__"), os.path.pardir, os.path.pardir)
+sys.path[1] = os.path.join(
+    os.path.dirname("__file__"), os.path.pardir)
 import imagenet_reader as reader
 from utility import add_arguments, print_arguments
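The added `sys.path[1]` entry makes the demo's parent directory importable (for `utility`) alongside the grandparent already added for `imagenet_reader`. Note that `os.path.dirname("__file__")` takes the dirname of the literal string `"__file__"`, which is `""`, so both entries resolve relative to the current working directory. A standalone sketch of what the two joins evaluate to, using a hypothetical demo location:

```python
import os

# Hypothetical working directory of the demo script.
demo_dir = "PaddleSlim/demo/auto-compression"

# Mirrors sys.path[0]: two levels up from the demo directory.
grandparent = os.path.normpath(os.path.join(demo_dir, os.pardir, os.pardir))
# Mirrors the added sys.path[1]: one level up, where utility.py lives.
parent = os.path.normpath(os.path.join(demo_dir, os.pardir))

print(grandparent)
print(parent)
```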
@@ -276,6 +276,7 @@ def compress(self):

         ### used to check whether the dataloader is right
         if self.eval_function is not None and self.train_config.origin_metric is not None:
+            _logger.info("start to test metric before compress")
             metric = self.eval_function(self._exe, inference_program,
                                         feed_target_names, fetch_targets)
             _logger.info("metric of compressed model is: {}".format(metric))
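The added log line announces the baseline evaluation that `compress()` runs as a dataloader sanity check. The logging-around-a-user-callback pattern can be sketched independently of Paddle; the helper below is hypothetical (its no-argument `eval_function` and the comparison against `origin_metric` are assumptions, since the diff does not show what compress() does with the metric):

```python
import logging

logging.basicConfig(level=logging.INFO)
_logger = logging.getLogger("compressor")

def run_origin_metric_check(eval_function, origin_metric, tolerance=1e-3):
    """Evaluate the uncompressed model and compare against the metric
    declared in the config, as a dataloader sanity check."""
    if eval_function is None or origin_metric is None:
        return None  # mirrors the guard in compress()
    _logger.info("start to test metric before compress")
    metric = eval_function()
    _logger.info("metric of model is: {}".format(metric))
    if abs(metric - origin_metric) > tolerance:
        raise ValueError("dataloader may be wrong: got {}, expected {}".format(
            metric, origin_metric))
    return metric

print(run_origin_metric_check(lambda: 0.765, 0.765))
```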