CPU inference failing #13039
Replies: 3 comments
-
I recently followed the official tutorial to run quick inference on Ubuntu and hit the same problem; it reproduced on two different machines.
-
I am having the same issue on Arch Linux.
And this is the line causing the problem:
I confirmed that by keeping only that line in my code, and I still got the error. I hope someone can help, since everything was working and then it suddenly stopped working!
-
This is caused by PaddlePaddle. Please check whether PaddlePaddle is installed successfully:

>>> import paddle
>>> paddle.utils.run_check()
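If that check fails, the environment rather than PaddleOCR is usually the problem. Below is a minimal diagnostic sketch using standard PaddlePaddle APIs; the exact output strings vary by version:

```python
# Minimal diagnostic for the PaddlePaddle install; these are standard
# paddle APIs, but the printed messages differ between versions.
import paddle

print(paddle.__version__)                     # e.g. "2.6.1"
print(paddle.device.is_compiled_with_cuda())  # False for the CPU-only wheel

# Force CPU execution before any inference code runs.
paddle.device.set_device("cpu")

# On success this prints "PaddlePaddle is installed successfully!".
paddle.utils.run_check()
```

If `run_check()` fails on a CPU-only machine, reinstalling the CPU wheel (`python -m pip install paddlepaddle`, not `paddlepaddle-gpu`) is the usual first step.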
-
Hello,
I used the latin PP-OCRv3 recognition model, fine-tuned it on my data, exported the model, and ran inference. All of that worked fine. Then I tried to use the model and run the inference on the CPU. This is everything I ran:
This works on the CPU as well. Can you help me figure out why my fine-tuned model is not working? Is there a specific way to export it so that it can also be run on the CPU?
I exported it like this:
!python3 tools/export_model.py -c /content/PaddleOCR/latin_PP-OCRv3_rec.yml -o Global.pretrained_model=/content/PaddleOCR/output/v3_latin_mobile/latest Global.save_inference_dir=/content/inference_new
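For reference, one way to exercise an exported recognition model on the CPU is through the paddleocr package's Python API. This is only a sketch, not a confirmed fix: it assumes a paddleocr 2.x release that still accepts the `use_gpu` flag (newer releases use `device='cpu'` instead), and the image path is a placeholder:

```python
from paddleocr import PaddleOCR

# Recognition-only pipeline pointed at the exported fine-tuned model.
# The dict path assumes the stock latin dictionary inside a PaddleOCR
# checkout; the image path is a placeholder for a cropped word image.
ocr = PaddleOCR(
    rec_model_dir="/content/inference_new",
    rec_char_dict_path="ppocr/utils/dict/latin_dict.txt",
    use_angle_cls=False,
    use_gpu=False,  # force CPU inference
)

# det=False skips detection and runs only the recognizer.
result = ocr.ocr("/content/word_crop.png", det=False, cls=False)
print(result)
```

If the stock model works this way but the fine-tuned one fails, comparing the two inference directories (model files plus the config used at export time) is a reasonable next step.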