Can I run paddle with models downloaded locally? #15716
Unanswered · MarekHerty asked this question in Q&A
Replies: 2 comments
-
Hello, there is no issue with the model you downloaded; the JSON format is the default new model format for Paddle 3.0.
-
Thank you for raising this question. We will soon host all PaddleOCR 3.0 models on Hugging Face, providing multiple download sources. By default, the models will be downloaded from Hugging Face.
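As a minimal sketch of selecting a download source: PaddleX-based PaddleOCR 3.x reads a model-source setting from the environment. The variable name `PADDLE_PDX_MODEL_SOURCE` and the value `huggingface` below are assumptions based on PaddleX conventions; confirm both against the documentation of your installed release.

```python
import os

# Hypothetical: tell the PaddleX model downloader which hosting service
# to pull models from (e.g. Hugging Face instead of the default BOS
# mirror). Must be set before the pipeline is constructed.
os.environ["PADDLE_PDX_MODEL_SOURCE"] = "huggingface"
```

On a machine with no internet access, no source will work; in that case, point the pipeline at locally extracted model directories instead.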
-
Hi,
I got really excited when the new version came out. However, when I downloaded the model inference files, it does not seem to work: it wants to download the model, which is not possible because the server I am using has no internet access.
=> RuntimeError: Download from https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_det_infer.tar failed. Retry limit reached
The files that get downloaded in model.tar are:
inference.json
inference.pdiparams
inference.yml
Any idea what I am doing wrong, or what steps to take to fix this? In addition, as I understand it, the only models that support Latin are the v3 models. (However, I can't seem to find the recognition models; have they been retired?)
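For an offline server, one approach is to extract the model archives locally and pass their directories to PaddleOCR explicitly, so it never attempts a download. A minimal sketch follows; the keyword names (`det_model_dir`, `rec_model_dir`, `cls_model_dir`) match the PaddleOCR 2.x API, and the directory names under `/opt/models` are hypothetical examples. Newer 3.x releases may use different parameter names, so check the docs of your installed version.

```python
import os

def local_model_kwargs(base_dir):
    """Build keyword arguments that point PaddleOCR at locally
    extracted model directories instead of letting it download them.

    The directory names below are example paths; replace them with
    wherever you extracted the downloaded model archives."""
    kwargs = {
        "det_model_dir": os.path.join(base_dir, "ch_PP-OCRv4_det_infer"),
        "rec_model_dir": os.path.join(base_dir, "ch_PP-OCRv4_rec_infer"),
        "cls_model_dir": os.path.join(base_dir, "ch_ppocr_mobile_v2.0_cls_infer"),
    }
    # Fail early with a clear message if a directory is missing,
    # instead of triggering a download attempt on an offline machine.
    missing = [p for p in kwargs.values() if not os.path.isdir(p)]
    if missing:
        raise FileNotFoundError(f"Missing model directories: {missing}")
    return kwargs

# Usage (hypothetical paths):
# from paddleocr import PaddleOCR
# ocr = PaddleOCR(use_angle_cls=True, **local_model_kwargs("/opt/models"))
# result = ocr.ocr("page.png")
```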