I've been training a state classification model and noticed there's no way to tell how well it actually performs. The model compiles with accuracy metrics and uses a 20% validation split, but the history from model.fit() is discarded.
Would be useful to save the final accuracy and validation accuracy to .training_metadata.json so users can see if their model is overfitting or if they need more training samples.
Currently the metadata only has:
- `last_training_date`
- `last_training_image_count`
Adding something like `final_accuracy`, `final_val_accuracy`, and `final_loss` would help users understand whether their model is actually learning or just memorizing the training data.
I have a small patch that does this (~15 lines changed), happy to create a PR if there's interest.
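A minimal sketch of what that patch could look like, assuming the training code has access to the `History` object returned by `model.fit()` and writes the metadata file itself. The helper name, the `/tmp` path, and the sample history values below are illustrative, not the actual patch:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_training_metadata(history, image_count, path=".training_metadata.json"):
    """Persist final-epoch metrics from a Keras-style history dict.

    `history` is expected to be shaped like `model.fit(...).history`:
    a dict mapping metric names to per-epoch lists.
    """
    metadata = {
        "last_training_date": datetime.now(timezone.utc).isoformat(),
        "last_training_image_count": image_count,
        # Last element of each list is the final epoch's value.
        "final_accuracy": round(history["accuracy"][-1], 4),
        "final_val_accuracy": round(history["val_accuracy"][-1], 4),
        "final_loss": round(history["loss"][-1], 4),
    }
    Path(path).write_text(json.dumps(metadata, indent=2))
    return metadata

# Hypothetical per-epoch values standing in for a real fit() history.
fake_history = {
    "accuracy": [0.62, 0.81, 0.93],
    "val_accuracy": [0.60, 0.75, 0.78],
    "loss": [1.10, 0.55, 0.21],
}
meta = save_training_metadata(
    fake_history, image_count=480, path="/tmp/.training_metadata.json"
)
```

A large gap between `final_accuracy` and `final_val_accuracy` (as in the sample values above) is exactly the overfitting signal the issue describes.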