[PPOCR det] Precision and Recall quickly achieve 1.0 #14791
Replies: 1 comment
Your observation that Precision and Recall reach 1.0 unusually fast during fine-tuning suggests potential issues with the training configuration, the dataset, or the evaluation methodology. Here are possible explanations and solutions.

Possible Reasons:
- Overfitting: with only 600 training images the model can memorize the data quickly, and an evaluation set of 150 similar images will not expose that.
- Evaluation bias: a small or too-easy evaluation set lets the IoU-based matching saturate, so every ground-truth box finds a match and every prediction counts as correct.
- Improper post-processing or label-generation parameters (e.g. shrink_ratio, the binarization and box thresholds), which can make almost any output pass the matching criterion.

Recommendations & Steps to Validate:
- Visualize a handful of predictions on the evaluation images instead of relying only on the aggregate metrics.
- Recompute Precision/Recall with an independent IoU-based script (see the sketch right after this list) to confirm the reported numbers.
- Enlarge and diversify the evaluation set, and make sure it does not overlap with the training data.
- Re-check the dataset labels, the post-processing thresholds, and the augmentation settings before trusting the "best model" selection.
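As a sanity check for the second recommendation, here is a minimal, self-contained sketch of an IoU-based Precision/Recall computation. It is not PaddleOCR's own evaluator (which uses a comparable IoU-based matching, but whose exact implementation may differ); it only assumes `shapely` is installed and that you can export ground-truth and predicted boxes for a few images as point lists. If this script also reports 1.0 on your evaluation images, the issue is more likely in the data than in the metric itself.

```python
# Hypothetical sanity check: recompute detection Precision/Recall with a plain
# IoU >= 0.5 one-to-one matching, independent of the training-time evaluator.
# Assumes `shapely` is installed; gt_polys/pred_polys are lists of point lists.
from shapely.geometry import Polygon

def polygon_iou(a, b):
    pa, pb = Polygon(a), Polygon(b)
    if not pa.is_valid or not pb.is_valid:
        return 0.0
    inter = pa.intersection(pb).area
    union = pa.union(pb).area
    return inter / union if union > 0 else 0.0

def precision_recall(gt_polys, pred_polys, iou_thresh=0.5):
    matched_gt = set()
    tp = 0
    for pred in pred_polys:
        # Greedy matching: each prediction may claim at most one unmatched GT box.
        best_iou, best_idx = 0.0, None
        for i, gt in enumerate(gt_polys):
            if i in matched_gt:
                continue
            iou = polygon_iou(pred, gt)
            if iou > best_iou:
                best_iou, best_idx = iou, i
        if best_idx is not None and best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_idx)
    precision = tp / len(pred_polys) if pred_polys else 0.0
    recall = tp / len(gt_polys) if gt_polys else 0.0
    return precision, recall

# Toy example with two ground-truth boxes and one prediction:
gt = [[(0, 0), (10, 0), (10, 5), (0, 5)], [(20, 0), (30, 0), (30, 5), (20, 5)]]
pred = [[(1, 0), (11, 0), (11, 5), (1, 5)]]
print(precision_recall(gt, pred))  # -> (1.0, 0.5)
```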
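Regarding the post-processing and label-generation parameters mentioned above, the snippet below mirrors the config entries usually worth re-checking for a DB-style detector. It is written as an annotated Python dict purely for illustration; the key names follow the stock PP-OCR det DB YAML configs and the values are typical defaults, so compare them against your actual config file rather than copying them.

```python
# Illustrative mirror of DB detection config entries worth re-checking.
# Key names follow the stock PP-OCR det DB YAML configs; the values are the
# usual defaults, not a recommendation for any specific dataset.
db_knobs = {
    # Training-label generation -- this is where shrink_ratio is used.
    "MakeShrinkMap": {
        "shrink_ratio": 0.4,   # smaller value -> tighter positive region
        "min_text_size": 8,    # boxes below this size are ignored
    },
    "MakeBorderMap": {
        "shrink_ratio": 0.4,
        "thresh_min": 0.3,
        "thresh_max": 0.7,
    },
    # Post-processing -- these determine the boxes the evaluator actually scores.
    "DBPostProcess": {
        "thresh": 0.3,         # binarization threshold on the probability map
        "box_thresh": 0.6,     # boxes scoring below this are discarded
        "max_candidates": 1000,
        "unclip_ratio": 1.5,   # how far detected regions are expanded back out
    },
}
```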
Conclusion:
Your high Precision/Recall scores likely result from overfitting, evaluation bias, or improper post-processing parameters. Adjust the dataset, the parameter settings, and the augmentation techniques to obtain a more reliable evaluation.

Response generated by 🤖 feifei-bot | chatgpt-4o-latest

---
I am trying to fine-tune a text detection model. Surprisingly, the model reaches 1.0 for both Precision and Recall after only a few epochs. This happens when I use the default config value shrink_ratio = 0.45. The loss also does not decrease much (from 1.6 to 1.4). From that point on, every evaluation reports Precision and Recall = 1.0, and the model from each subsequent epoch is selected as the best model. When I change shrink_ratio to 0.3, Precision and Recall drop to about 0.91. My question is: can the method used to measure Precision and Recall be trusted? I use 600 images for training and 150 images for evaluation, as recommended in the documentation.
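For reference, shrink_ratio controls how far each ground-truth polygon is shrunk when DB-style training targets are generated, so changing it changes what the model is trained to predict. Below is a minimal sketch of that offset computation, using the formula from the DB paper; the rectangle is a toy example, pyclipper/shapely are assumed to be installed, and this is not PaddleOCR's own label-generation code:

```python
# Sketch of the DB-style polygon shrinking that shrink_ratio controls.
# Offset formula from the DB paper: D = A * (1 - r^2) / L,
# where A is the polygon area, L its perimeter, r the shrink_ratio.
# Assumes `pyclipper` and `shapely` are installed; the box is a toy example.
import pyclipper
from shapely.geometry import Polygon

def shrink_polygon(points, shrink_ratio):
    poly = Polygon(points)
    # Distance by which every edge is moved inward.
    distance = poly.area * (1 - shrink_ratio ** 2) / poly.length
    pco = pyclipper.PyclipperOffset()
    pco.AddPath(points, pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON)
    shrunk = pco.Execute(-distance)
    return shrunk, distance

box = [(0, 0), (100, 0), (100, 20), (0, 20)]  # a 100x20 text box
for r in (0.45, 0.3):
    _, d = shrink_polygon(box, r)
    print(f"shrink_ratio={r}: edges move inward by ~{d:.1f} px")
# A smaller shrink_ratio gives a larger offset, i.e. a smaller positive region
# in the shrink map, which changes both the training targets and the scores
# the post-processing later thresholds.
```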