Regarding your question about how to fine-tune the PP-OCRv3 detection distillation (det distillation) model on a custom dataset, here are detailed suggestions and notes.

1. Overview of the two distillation approaches

In PaddleOCR, PP-OCRv3 detection distillation mainly supports two methods:
- CML (Collaborative Mutual Learning): two student models learn from each other while a frozen, stronger teacher supervises both (config file: ch_PP-OCRv3_det_cml.yml).
- DML (Deep Mutual Learning): two models of the same size learn from each other, with no separate teacher (config file: ch_PP-OCRv3_det_dml.yml).
You are using the first approach, CML distillation, with a configuration modeled on ch_PP-OCRv3_det_cml.yml; its structure is correct.

2. Do you need to train a teacher model?

If you want to run distillation training, you can choose either of the following:
- start directly from the official pretrained teacher, or
- first fine-tune a teacher on your custom dataset and then use it in the distillation.
Your teacher is configured with freeze_params: true under Teacher:, which means the Teacher does not participate in parameter updates, so you can keep that model fixed (a conceptual sketch of what freezing means follows below). The precondition is that its detection ability is stronger than the Student's; otherwise it cannot guide the student's learning. In summary:
✅ If you already have a high-accuracy Teacher that fits the task (e.g., one trained on your custom dataset), it is strongly recommended to use it as the teacher.
🎯 Otherwise, start training with the official teacher and replace it with a custom Teacher later.
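As a side note on freeze_params, here is a minimal conceptual sketch, in plain Paddle, of what freezing a sub-model's parameters amounts to; it is illustrative only, not PaddleOCR's actual DistillationModel code:

import paddle

# Stand-in module for the Teacher sub-model (hypothetical; any Layer works).
teacher = paddle.nn.Linear(8, 8)
for param in teacher.parameters():
    # Marking a parameter as non-trainable excludes it from gradient
    # updates, which is the practical effect of freeze_params: true.
    param.trainable = False

print(all(p.stop_gradient for p in teacher.parameters()))  # prints True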
3. Distillation training flow and problem analysis

You mentioned that "best_model stops updating after 3 epochs of distillation training". Possible causes include:
- the learning rate in your config (0.001 with a Cosine schedule) is the from-scratch setting; for fine-tuning it is usually too large, so the metric oscillates rather than improves;
- eval_batch_step is left empty in your config, so double-check that evaluation actually runs at the cadence you expect;
- the Teacher is not stronger than the Students on your data, so the distillation losses do not provide a useful signal.

4. How to fine-tune

You have two options:

1) Continue distillation training (recommended): keep the CML config, point Global.pretrained_model at the full ch_PP-OCRv3_det_distill_train weights (which contain Teacher, Student, and Student2 parameters, unlike the student-only file you currently load), and lower the learning rate. You can check what a checkpoint actually contains with the sketch after this list.
2) Fine-tune only the student model (no teacher and no distillation structure): switch to a plain single-model config such as ch_PP-OCRv3_det_student.yml, set Global.pretrained_model to the extracted student weights, and train it as an ordinary DB detector. The extraction code is sketched at the end of this reply.
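A minimal sketch for inspecting which sub-models a distillation checkpoint contains before CML fine-tuning; the file path is an assumption, so point it at your own download:

import paddle

# Distillation checkpoints prefix every key with the sub-model name,
# e.g. "Student.backbone...", "Teacher.head...".
all_params = paddle.load("ch_PP-OCRv3_det_distill_train/best_accuracy.pdparams")
prefixes = sorted({key.split(".")[0] for key in all_params})
print(prefixes)  # for CML weights, expect ['Student', 'Student2', 'Teacher']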
5. Summary of recommendations

✅ Suggested steps:
1. If you do not strictly need distillation, fine-tune the student alone with ch_PP-OCRv3_det_student.yml; it is simpler and usually sufficient.
2. For CML distillation, load the full ch_PP-OCRv3_det_distill_train weights via Global.pretrained_model and reduce the learning rate (e.g., by a factor of 10).
3. If the official Teacher underperforms on your data, fine-tune the Teacher on the custom dataset first, then plug it back into the CML config.
4. Watch the eval hmean during training; if best_model stalls, revisit eval_batch_step and the learning rate before anything else.
The reference docs also include code for extracting the student model; if you need to export the student model for deployment or standalone training, follow that example (a sketch is reproduced below). Hopefully this information helps you complete the PP-OCRv3 detection distillation fine-tuning. If needed, I can also help you check the logs for other signals such as non-converging loss or stalled evaluation.
Response generated by 🤖 feifei-bot | chatgpt-4o-latest
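Following up on the extraction code mentioned above, here is a minimal sketch based on the snippet in the PaddleOCR docs; the file names are assumptions, adjust them to your paths:

import paddle

# Load the full distillation checkpoint (Teacher + Student + Student2).
all_params = paddle.load("ch_PP-OCRv3_det_distill_train/best_accuracy.pdparams")
# Keep only the Student sub-model's weights and strip the "Student." prefix,
# so the result loads into a plain single-model (non-distillation) config.
s_params = {
    key[len("Student."):]: all_params[key]
    for key in all_params
    if key.startswith("Student.")
}
paddle.save(s_params, "ch_PP-OCRv3_det_distill_train/student.pdparams")

The saved student.pdparams can then be used as Global.pretrained_model together with ch_PP-OCRv3_det_student.yml.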
My config file currently looks like this:
Global:
debug: false
use_gpu: true
epoch_num: 500
log_smooth_window: 20
print_batch_step: 10
save_model_dir: ./output/ch_PP-OCR_v3_det/
save_epoch_step: 100
eval_batch_step:
cal_metric_during_train: false
pretrained_model: /data/boatplate/ch_PP-OCRv3/premodel/ch_PP-OCRv3_det_distill_train/student
checkpoints: null
save_inference_dir: null
use_visualdl: false
infer_img: doc/imgs_en/img_10.jpg
save_res_path: ./checkpoints/det_db/predicts_db.txt
distributed: true
d2s_train_image_shape: [3, -1, -1]
amp_dtype: bfloat16
Architecture:
name: DistillationModel
algorithm: Distillation
model_type: det
Models:
Student:
pretrained:
model_type: det
algorithm: DB
Transform: null
Backbone:
name: MobileNetV3
scale: 0.5
model_name: large
disable_se: true
Neck:
name: RSEFPN
out_channels: 96
shortcut: True
Head:
name: DBHead
k: 50
Student2:
pretrained:
model_type: det
algorithm: DB
Transform: null
Backbone:
name: MobileNetV3
scale: 0.5
model_name: large
disable_se: true
Neck:
name: RSEFPN
out_channels: 96
shortcut: True
Head:
name: DBHead
k: 50
Teacher:
freeze_params: true
return_all_feats: false
model_type: det
algorithm: DB
Backbone:
name: ResNet_vd
in_channels: 3
layers: 50
Neck:
name: LKPAN
out_channels: 256
Head:
name: DBHead
kernel_list: [7,2,2]
k: 50
Loss:
  name: CombinedLoss
  loss_config_list:
  - DistillationDilaDBLoss:
      weight: 1.0
      model_name_pairs:
      - ["Student", "Teacher"]
      - ["Student2", "Teacher"]
      key: maps
      balance_loss: true
      main_loss_type: DiceLoss
      alpha: 5
      beta: 10
      ohem_ratio: 3
  - DistillationDMLLoss:
      model_name_pairs:
      - ["Student", "Student2"]
      maps_name: "thrink_maps"
      weight: 1.0
      model_name_pairs: ["Student", "Student2"]
      key: maps
  - DistillationDBLoss:
      weight: 1.0
      model_name_list: ["Student", "Student2"]
      balance_loss: true
      main_loss_type: DiceLoss
      alpha: 5
      beta: 10
      ohem_ratio: 3
Optimizer:
name: Adam
beta1: 0.9
beta2: 0.999
lr:
name: Cosine
learning_rate: 0.001
warmup_epoch: 2
regularizer:
name: L2
factor: 5.0e-05
PostProcess:
name: DistillationDBPostProcess
model_name: ["Student"]
key: head_out
thresh: 0.3
box_thresh: 0.6
max_candidates: 1000
unclip_ratio: 1.5
Metric:
name: DistillationMetric
base_metric_name: DetMetric
main_indicator: hmean
key: "Student"
Train:
dataset:
name: SimpleDataSet
data_dir: ./train_data/icdar2015/text_localization/
label_file_list:
- ./train_data/icdar2015/text_localization/train_icdar2015_label.txt
ratio_list: [1.0]
transforms:
- DecodeImage:
img_mode: BGR
channel_first: false
- DetLabelEncode: null
- CopyPaste:
- IaaAugment:
augmenter_args:
- type: Fliplr
args:
p: 0.5
- type: Affine
args:
rotate:
- -10
- 10
- type: Resize
args:
size:
- 0.5
- 3
- EastRandomCropData:
size:
- 960
- 960
max_tries: 50
keep_ratio: true
- MakeBorderMap:
shrink_ratio: 0.4
thresh_min: 0.3
thresh_max: 0.7
- MakeShrinkMap:
shrink_ratio: 0.4
min_text_size: 8
- NormalizeImage:
scale: 1./255.
mean:
- 0.485
- 0.456
- 0.406
std:
- 0.229
- 0.224
- 0.225
order: hwc
- ToCHWImage: null
- KeepKeys:
keep_keys:
- image
- threshold_map
- threshold_mask
- shrink_map
- shrink_mask
loader:
shuffle: true
drop_last: false
batch_size_per_card: 8
num_workers: 4
Eval:
dataset:
name: SimpleDataSet
data_dir: ./train_data/icdar2015/text_localization/
label_file_list:
- ./train_data/icdar2015/text_localization/test_icdar2015_label.txt
transforms:
- DecodeImage: # load image
img_mode: BGR
channel_first: False
- DetLabelEncode: # Class handling label
- DetResizeForTest:
- NormalizeImage:
scale: 1./255.
mean: [0.485, 0.456, 0.406]
std: [0.229, 0.224, 0.225]
order: 'hwc'
- ToCHWImage:
- KeepKeys:
keep_keys: ['image', 'shape', 'polys', 'ignore_tags']
loader:
shuffle: False
drop_last: False
batch_size_per_card: 1 # must be 1
num_workers: 2
Distillation training stops updating best_model after 3 epochs.
If I want to do distillation, how should I proceed? Should I first train a teacher model on my custom dataset and then put that teacher into the CML config as the pretrained model to distill the student models? Or is it fine to distill and fine-tune directly on the custom dataset? If so, where did my setup go wrong?