Inference with onnxruntime, aligned to the official pipeline, is much faster than inference with the PaddlePaddle framework (about 5-6x) #13195
-
Problem Description: I reimplemented the official inference pipeline on onnxruntime, and it runs much faster than inference through the PaddlePaddle framework (about 5-6x). Is this a problem with the framework?
Reproduction Code: https://github.com/jingsongliujing/OnnxOCR.git
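A minimal sketch of how such a comparison might be timed, assuming the same detection model is available both as a Paddle inference model and as an ONNX export (e.g. via paddle2onnx); the file paths, input shape, and iteration counts below are placeholders, not taken from the linked repository:

```python
# Hypothetical benchmark sketch, not from the thread: time the same detection
# model under Paddle Inference and ONNX Runtime. File names, input shape, and
# iteration counts are placeholders.
import time

import numpy as np
import onnxruntime as ort
import paddle.inference as paddle_infer

def bench_ms(fn, warmup=5, iters=50):
    """Average wall-clock time of fn in milliseconds, after warmup runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000

# Dummy NCHW input in a typical PP-OCR detection shape.
x = np.random.rand(1, 3, 960, 960).astype(np.float32)

# Paddle Inference side (assumes an exported inference model).
config = paddle_infer.Config("det/inference.pdmodel", "det/inference.pdiparams")
predictor = paddle_infer.create_predictor(config)
in_handle = predictor.get_input_handle(predictor.get_input_names()[0])

def run_paddle():
    in_handle.copy_from_cpu(x)
    predictor.run()

# ONNX Runtime side (same model converted beforehand, e.g. with paddle2onnx).
sess = ort.InferenceSession("det/model.onnx", providers=["CPUExecutionProvider"])
in_name = sess.get_inputs()[0].name

print("paddle inference: %.2f ms" % bench_ms(run_paddle))
print("onnxruntime:      %.2f ms" % bench_ms(lambda: sess.run(None, {in_name: x})))
```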
Replies: 9 comments 3 replies
-
Is it faster than Paddle Inference?
-
Yes, much faster.
-
Especially the v4 model.
-
It's not obvious with the other models; the gap is mainly noticeable with the v4 mobile model.
-
Use
-
I haven't tried it, but the official framework loads a lot of configuration on its own, so its inference is presumably somewhat slower than a pipeline you strip out yourself.
-
So far I've only chained the three models together (detection, classification, recognition); I haven't considered tables, information extraction, and so on.
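For reference, a rough sketch of that kind of three-stage chain over ONNX Runtime sessions; preprocess_det/preprocess_cls/preprocess_rec, boxes_from_map, crop_boxes, and ctc_decode are hypothetical placeholders for the model-specific pre/post-processing, not functions from the linked repository:

```python
# Sketch of a det -> cls -> rec chain over three ONNX Runtime sessions.
# preprocess_det/cls/rec, boxes_from_map, crop_boxes, and ctc_decode are
# hypothetical placeholders for model-specific pre/post-processing.
import numpy as np
import onnxruntime as ort

providers = ["CPUExecutionProvider"]
det = ort.InferenceSession("det.onnx", providers=providers)
cls = ort.InferenceSession("cls.onnx", providers=providers)
rec = ort.InferenceSession("rec.onnx", providers=providers)

def run(sess, x):
    """Run a single-input session and return its first output."""
    return sess.run(None, {sess.get_inputs()[0].name: x})[0]

def ocr(image):
    # 1. Detection: probability map -> text box polygons.
    boxes = boxes_from_map(run(det, preprocess_det(image)))
    lines = []
    for crop in crop_boxes(image, boxes):
        # 2. Direction classification: rotate crops predicted as upside-down.
        if run(cls, preprocess_cls(crop)).argmax() == 1:
            crop = np.rot90(crop, 2)
        # 3. Recognition: CTC-decode the character sequence.
        lines.append(ctc_decode(run(rec, preprocess_rec(crop))))
    return lines
```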
-
That sounds like exactly what we do in RapidOCR.
-
ONNX with C++ on GPU, v4 server model: recognizing a single text line of 10 characters takes about 7 ms on an RTX 3060.
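For anyone reproducing a number like that, a hedged sketch of the measurement in Python (the commenter used the C++ API); the model path, input shape, and iteration counts are placeholders. Warmup runs matter, since the first calls include CUDA initialization:

```python
# Hypothetical single-line recognition latency check on GPU; path and input
# shape are placeholders, not the commenter's setup.
import time

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("rec.onnx", providers=["CUDAExecutionProvider"])
name = sess.get_inputs()[0].name
# Typical PP-OCRv4 rec input: height 48, width varying with the text line.
x = np.random.rand(1, 3, 48, 320).astype(np.float32)

for _ in range(10):  # warmup: CUDA context setup and kernel selection
    sess.run(None, {name: x})

n = 100
start = time.perf_counter()
for _ in range(n):
    sess.run(None, {name: x})
print("%.2f ms / line" % ((time.perf_counter() - start) / n * 1000))
```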
-
Yes: the v4 model is already integrated in v1.3.22. What you're seeing is probably just the directory name, which still says v3.