How to use a model stored in IR format
#15150
Replies: 2 comments
-
To use IR mode, is it necessary to bypass the old wrapper and call the Paddle Inference API directly?
-
We are currently upgrading to PaddleOCR 3.0; after its release, IR models will be fully supported.
-
From the official documentation:
If your model was trained on your own dataset with a different dictionary file, make sure to set `character_dict_path` in the configuration file to the path of that dictionary file.
After a successful conversion, the model save directory contains three files:
inference/en_PP-OCRv3_mobile_rec/
├── inference.pdiparams # The parameter file of recognition inference model
├── inference.pdiparams.info # The parameter information of recognition inference model, which can be ignored
└── inference.pdmodel # The program file of recognition model
Note: If you need to store the model in the new IR mode (i.e., `.json` format), use the following command to switch to the new IR mode:
export FLAGS_enable_pir_api=1
python3 tools/export_model.py -c configs/rec/PP-OCRv3/en_PP-OCRv3_mobile_rec.yml -o Global.pretrained_model=./pretrain_models/en_PP-OCRv3_rec_train/best_accuracy Global.save_inference_dir=./inference/en_PP-OCRv3_mobile_rec/
After success, the directory will contain two files:
inference/en_PP-OCRv3_mobile_rec/
├── inference.pdiparams # Model parameter file for the inference model
└── inference.json # Program file for the inference model
I followed the IR-mode export and obtained the corresponding model, but how do I run inference with it?