
If your use case involves a different char_dict or language, or if the accuracy of the pretrained inference model is not good enough for you, it may be worth fine-tuning the pretrained model. With the original PP-OCRv3 small model config (48x320 input images), training consumes about 19 GB of GPU memory at batch_size=128, and memory consumption increases or decreases roughly linearly with the batch size. If no GPU is available, you could try online GPU resources from Baidu or Google Colab.
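Taking the reported figure (about 19 GB at batch_size=128) and the stated linear scaling at face value, you can roughly estimate the memory needed for a smaller batch before launching training. This is a back-of-the-envelope sketch, not a PaddleOCR API; in practice there is also a fixed overhead for model weights and optimizer state, so small batches will use proportionally more than this estimate.

```python
# Rough GPU-memory estimate for fine-tuning the PP-OCRv3 small recognition
# model (48x320 inputs), based on the ~19 GB @ batch_size=128 figure above
# and assuming (as stated) that consumption scales linearly with batch size.

BASELINE_BATCH_SIZE = 128
BASELINE_MEMORY_GB = 19.0  # reported figure for the original config


def estimate_gpu_memory_gb(batch_size: int) -> float:
    """Linearly scale the reported memory figure to a new batch size."""
    return BASELINE_MEMORY_GB * batch_size / BASELINE_BATCH_SIZE


if __name__ == "__main__":
    for bs in (32, 64, 128):
        print(f"batch_size={bs:>3} -> ~{estimate_gpu_memory_gb(bs):.1f} GB")
```

For example, if your card has 12 GB, this estimate suggests dropping to batch_size=64 (about 9.5 GB) and leaving some headroom.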
