Training with Paddle Slim for mobile, but the results are poor #14350
Replies: 4 comments 3 replies
I think we can first confirm whether the trained model infers correctly on the validation set, and then check whether the converted model is correct.
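To make that first check concrete, here is a minimal sketch of exact-match accuracy over validation predictions. The function name and input format are illustrative, not part of PaddleOCR's API:

```python
# Hedged sketch: exact-match accuracy over (prediction, label) pairs
# collected from the validation set. All names here are illustrative.

def exact_match_accuracy(pairs):
    """pairs: iterable of (predicted_text, ground_truth_text) tuples."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    # A sample counts as correct only if the full string matches.
    correct = sum(1 for pred, gt in pairs if pred == gt)
    return correct / len(pairs)
```

If accuracy is high here but drops after conversion, the problem is in the export/conversion step rather than in training.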
-
It seems your issue is related to the conversion and deployment of your PaddleOCR model using Paddle Lite. Based on the details provided, here are some suggested steps to improve your deployment and resolve the issue of only one character being recognized:

1. Verify Your Model Conversion Process
2. Check the Dictionary File
3. Update the Model's Input Settings
4. Test with Pretrained Models
5. Debugging the Android Demo
6. Validate Your Model's Output
7. Check Paddle Lite Version Compatibility
8. Quantization Considerations
9. Further Resources
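Steps 2 and 6 above can be made concrete with a small sanity check on the dictionary file. This is a hedged sketch, not PaddleOCR's own code: it assumes a CTC-style recognition head whose output size is `len(dictionary) + 1` (one extra class for the CTC blank), plus one more entry when `use_space_char` is enabled; verify these assumptions against your own config:

```python
# Hedged sketch: sanity-check a PaddleOCR-style character dictionary file
# against the model's output-layer size. Assumption: CTC head, so
# num_classes == len(chars) + 1 (blank); space appended if enabled.

def check_dict(dict_path, num_classes, use_space_char=True):
    with open(dict_path, encoding="utf-8") as f:
        chars = [line.rstrip("\n") for line in f if line.rstrip("\n")]
    if use_space_char:
        chars.append(" ")  # PaddleOCR appends a space char when enabled
    issues = []
    if len(set(chars)) != len(chars):
        issues.append("duplicate characters in dictionary")
    expected = len(chars) + 1  # +1 for the CTC blank class
    if expected != num_classes:
        issues.append(
            f"dict implies {expected} classes, model has {num_classes}"
        )
    return issues
```

A Vietnamese dictionary must also contain every accented character your labels use; any character missing from the file cannot be emitted by the decoder.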
By following these steps, you should be able to identify and resolve the issue with your PaddleOCR mobile deployment. If the issue persists, consider sharing more details about your training and deployment environment.

Response generated by 🤖 feifei-bot | chatgpt-4o-latest
-
This is my conversion command with 2.14rc; there are no problems with it:

```shell
paddle_lite_opt --model_file=./latest/inference.pdmodel \
                --param_file=./lite/latest/inference.pdiparams \
                --optimize_out=./latest \
                --valid_targets=arm \
                --optimize_out_type=naive_buffer
```
-
I have trained a model with Paddle Slim on 3 million images for an OCR task with Vietnamese text. The accuracy is around 90% after 60 epochs, but when I use paddle_lite_opt to convert it to an *.nb file and test it in the android_demo (Paddle Lite version 2.14rc), it only shows one character for every image. Please give me instructions to improve it. Many thanks!
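For reference on why a shape or dictionary mismatch can surface as exactly one character: a recognition head emits a (T, C) matrix of per-timestep class scores that greedy CTC decoding then collapses. If the demo resizes the input incorrectly, T can shrink to a handful of timesteps, and the collapse rule leaves a single character. The Android demo's actual decoder is C++; this Python sketch only illustrates the decoding rule, and class 0 as the blank is an assumption:

```python
# Hedged sketch: greedy CTC decoding of a recognizer's raw output.
# Assumptions: logits has shape (T, C) with class 0 as the CTC blank,
# and charset holds the C-1 non-blank characters in dictionary order.

def ctc_greedy_decode(logits, charset, blank=0):
    # Pick the argmax class at each timestep.
    best = [max(range(len(row)), key=row.__getitem__) for row in logits]
    out, prev = [], blank
    for idx in best:
        # Collapse repeated indices and drop blanks (standard CTC rule).
        if idx != prev and idx != blank:
            out.append(charset[idx - 1])
        prev = idx
    return "".join(out)
```

With only one or two timesteps in `logits`, the output can never be longer than one or two characters, which matches the symptom described above.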
