predict_kie_token_ser script is slow #12406
Unanswered · danteblink asked this question in Q&A · Replies: 0 comments
Hi,
I have deployed the SER model in a Docker container, and each prediction takes 8 seconds or more. I'm using the GPU option (Tesla T4). Is this normal behavior?
Is there any way to improve the model's performance? So far, I have downloaded all the models and stored them locally in the container so they don't have to be downloaded when the script runs. That reduced inference time from 30 seconds to 8–10 seconds, but I need help achieving better times.
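A common follow-up to the local-model fix above is to also avoid re-initializing the predictor on every request: when the script is launched per prediction, much of the 8–10 seconds is model loading and graph building, not inference. The sketch below illustrates the "load once, serve many" pattern with a hypothetical `SerPredictor` stand-in (the class name and its simulated timings are assumptions for illustration, not PaddleOCR's actual API).

```python
# Sketch: keep the SER predictor in a long-lived process so model
# loading happens once at container startup, not per request.
# SerPredictor is a hypothetical stand-in; the sleep simulates the
# one-time cost of loading weights and building the inference graph.
import time


class SerPredictor:
    """Stand-in for a SER model wrapper; construction is the slow step."""

    def __init__(self):
        time.sleep(0.2)  # simulate slow weight loading / warm-up

    def predict(self, image_path):
        # Simulate a fast steady-state inference call.
        return {"entities": [], "image": image_path}


# Load ONCE when the container (or server process) starts.
_PREDICTOR = SerPredictor()


def handle_request(image_path):
    """Reuse the already-loaded model; no per-call initialization cost."""
    return _PREDICTOR.predict(image_path)


if __name__ == "__main__":
    # Cold path: constructing a new predictor per call pays the load cost.
    t0 = time.perf_counter()
    SerPredictor().predict("doc.png")
    cold = time.perf_counter() - t0

    # Warm path: the shared predictor skips loading entirely.
    t0 = time.perf_counter()
    handle_request("doc.png")
    warm = time.perf_counter() - t0
    print(warm < cold)
```

In practice this means wrapping the predictor in a small HTTP service (e.g. FastAPI or Flask) inside the container instead of invoking the script per image; a warm-up call at startup also lets the first real request hit steady-state latency.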