-
Hello, if you want to use an fp16 model, you can simply set the precision parameter to "fp16". For details, please refer to the method and parameter descriptions in the documentation: https://paddlepaddle.github.io/PaddleOCR/main/en/version3.x/module_usage/text_detection.html#3-quick-start

from paddleocr import TextDetection

model = TextDetection(model_name="PP-OCRv5_server_det", precision="fp16")
output = model.predict("general_ocr_001.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
-
Thank you so much.
-
Hi,
I want to convert the PP-OCRv5_server_det model from fp32 to fp16.
I have already tried this method: convert the model to ONNX, then use float16 from onnxconverter_common to convert it to fp16, but the inference results are very bad.
In PyTorch it is very simple to convert a model to fp16 with model.half(), but with Paddle I do not know how to do it.
Does Paddle have any method similar to model.half() in PyTorch?
Please instruct me.
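For reference, the ONNX conversion step I mentioned looks roughly like this (a minimal sketch; the file names are placeholders, and it assumes the ONNX model was first exported from the Paddle model, e.g. with paddle2onnx):

import onnx
from onnxconverter_common import float16

# Load the ONNX model exported from the Paddle detection model (placeholder path).
model_fp32 = onnx.load("PP-OCRv5_server_det.onnx")
# Convert fp32 tensors to fp16; keep_io_types=True keeps model inputs/outputs in fp32.
model_fp16 = float16.convert_float_to_float16(model_fp32, keep_io_types=True)
onnx.save(model_fp16, "PP-OCRv5_server_det_fp16.onnx")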