Based on your description and the context provided, here is a detailed analysis of, and suggestions for, the degraded results when running .nb models with Paddle Lite:

Problem analysis

Solution

Related issues

Summary

Response generated by feifei-bot | chatgpt-4o-latest
-
🔎 Search before asking
🐛 Bug (description)
I added a --use_lite branch to the repo's predict_system.py and used Paddle Lite to run the converted .nb models. The inference quality is poor, and I have narrowed it down to the rec model, whose results degrade noticeably. I would like to know whether the pre- and post-processing in predict_system.py is incompatible with Paddle Lite inference on .nb models. Thanks.
Normal result:

v4 .nb model inference result (degraded):

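Since the degradation is concentrated in the rec model, one common culprit is a preprocessing mismatch rather than the .nb conversion itself: the PP-OCRv4 rec model expects a 48-pixel-high, aspect-ratio-preserving, right-zero-padded input normalized to [-1, 1], whereas older pipelines resize to a 32-pixel height. Below is a minimal numpy sketch of that transform, useful for comparing against the tensor a --use_lite branch actually feeds the predictor. The function name and the nearest-neighbour resize stand-in (used instead of cv2.resize to stay dependency-free) are illustrative, not taken from the repo:

```python
import numpy as np

def rec_preprocess(img, target_h=48, target_w=320):
    """Resize-with-padding + normalization in the style of PP-OCR rec preprocessing.

    img: HWC uint8 BGR image as a numpy array.
    Returns a 1x3x48x320 float32 tensor with values in [-1, 1].
    """
    h, w = img.shape[:2]
    # Keep the aspect ratio: scale the width by target_h / h, capped at target_w.
    resized_w = min(target_w, int(np.ceil(target_h * w / h)))
    # Nearest-neighbour resize via index sampling (stand-in for cv2.resize).
    ys = (np.arange(target_h) * h / target_h).astype(int)
    xs = (np.arange(resized_w) * w / resized_w).astype(int)
    resized = img[ys][:, xs].astype(np.float32)
    # Normalize to [-1, 1]: (x / 255 - 0.5) / 0.5
    resized = (resized / 255.0 - 0.5) / 0.5
    # HWC -> CHW, then zero-pad the width up to target_w.
    chw = resized.transpose(2, 0, 1)
    padded = np.zeros((3, target_h, target_w), dtype=np.float32)
    padded[:, :, :resized_w] = chw
    # Add the batch dimension expected by the predictor.
    return padded[np.newaxis, :]
```

If the tensor produced by the --use_lite branch differs from this (wrong height, [0, 1] instead of [-1, 1], or HWC instead of CHW), the rec model will degrade exactly as described; verify against the repo's own predict_rec.py preprocessing.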
🏃‍♂️ Environment
x86 GPU
paddle lite: 2.14-rc
🌰 Minimal Reproducible Example
Conversion script:

```shell
paddle_lite_opt --model_dir=./models/v4_ori/ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer --optimize_out=./models/v4_nb/ch_PP-OCRv3_det_opt --valid_targets=x86 --optimize_out_type=naive_buffer
paddle_lite_opt --model_dir=./models/v4_ori/ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer --optimize_out=./models/v4_nb/ch_PP-OCRv3_rec_opt --valid_targets=x86 --optimize_out_type=naive_buffer
paddle_lite_opt --model_dir=./models/v4_ori/ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer --optimize_out=./models/v4_nb/ch_ppocr_mobile_v2.0_cls_opt --valid_targets=x86 --optimize_out_type=naive_buffer
```
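To localize where the quality is lost, it helps to feed the exact same preprocessed tensor to the original Paddle inference model and to the converted .nb model, then compare the raw output tensors before any post-processing: a near-1 cosine similarity points at the pre/post-processing, a low one at the opt conversion. The predictor calls themselves are omitted here; this is only a sketch of the comparison helper (the name cosine_sim is illustrative):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened output tensors.

    a, b: numpy arrays of the same total size, e.g. the raw logits
    from the original model and from the .nb model on one input.
    Returns a float in [-1, 1]; values close to 1 mean the two
    backends computed essentially the same result.
    """
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0
```

Running this per-stage (det, cls, rec) on identical inputs should show whether only the rec outputs diverge, which would match the symptom reported above.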