Replies: 1 comment
-
Hello, it seems the issue is that your images are very large while the text boxes are very small. For example, the box [[125, 79], [546, 79], [546, 93], [125, 93]] is only 14 px tall, while the original image is 3455 px tall. If you scale the image down to 960 px, the scaled bounding box height is only a few pixels (roughly 4-5), which makes it difficult for the model to converge. You can try a larger resize target, such as Resize with target_size: [2000, 2000].
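The arithmetic behind that advice can be checked with a quick sketch (the helper name below is ours, not part of PaddleOCR):

```python
# Sketch: why a thin text box nearly vanishes after aggressive downscaling.
# Numbers come from the thread: box height 14 px, original image height 3455 px.

def scaled_box_height(box_height: float, orig_height: float, target_height: float) -> float:
    """Approximate box height after the whole image is resized to target_height."""
    scale = target_height / orig_height
    return box_height * scale

# Resizing the 3455-px-tall image to 960 px shrinks the 14-px box to ~4 px,
# close to the reply's estimate of about 5 px:
h_at_960 = scaled_box_height(14, 3455, 960)
# A larger target such as 2000 px keeps it around 8 px:
h_at_2000 = scaled_box_height(14, 3455, 2000)
```

A box only a few pixels tall leaves the detection head almost no signal to learn from, which is why the larger resize target is suggested.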
-
Hi All,
I am attempting to fine-tune the PPOCRv5_server_det model. My input images are 21184x3455. I have tried removing some of the augmentations present in the default config, but it seems the bounding boxes are not being scaled along with the images, as I am getting errors. With EastRandomCropData this is fixed, but my loss flatlines at 1 even after many epochs. May I verify that I am setting up training correctly?
This is what I am doing (label format):
<image_file> [{"transcription": "xxxxx", "points": [[125, 79], [546, 79], [546, 93], [125, 93]]}, {"t
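For illustration, here is a minimal sketch of reading one line in that style (an image path, a tab, then a JSON list of boxes); the file name and variable names are hypothetical, and this is not PaddleOCR's own loader:

```python
import json

# One det-style label line: "<image path>\t<JSON list of boxes>", using the
# example box from the thread.
line = ('img.jpg\t[{"transcription": "xxxxx", '
        '"points": [[125, 79], [546, 79], [546, 93], [125, 93]]}]')

path, anno = line.split('\t', 1)
boxes = json.loads(anno)

for box in boxes:
    xs = [x for x, _ in box["points"]]
    ys = [y for _, y in box["points"]]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    # This example box is 421 px wide but only 14 px tall, so it shrinks
    # quickly under any global resize of the 21184x3455 image.
```

Measuring the boxes this way makes it easy to spot annotations that will become sub-pixel after resizing.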