Handling extreme cases in validation sanity check #15381
-
I'm using the default settings for the validation sanity check and think it is a useful feature. However, because the validation sanity check only processes a tiny amount of data, some extreme cases can occur. Specifically, I'm using torchmetrics' ConfusionMatrix, and most of the time not all of the existing classes are represented in the first few batches. Therefore the computation of the confusion matrix results in a warning.
I get this for every training run and it's a bit annoying. What's the best way to prevent this warning from occurring in such cases? I don't want to filter this warning globally, because if it happens during the actual validation it is definitely an issue. So I wondered how such extreme cases in the validation sanity check are supposed to be handled. Is it somehow possible to run some code before and after the sanity check, to disable this warning temporarily?
-
Hi @ptoews
`on_train_start()` runs right after sanity checking, so you can install a filter for the warning before the sanity check and remove that filter in the `on_train_start()` hook, so that you still get the warning during the actual validation runs. FYI, here's the execution order of hooks: https://pytorch-lightning.readthedocs.io/en/1.7.7/common/lightning_module.html#hooks
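Here's a minimal sketch of that idea. The class name `LitModel` and the message pattern are placeholders (I don't know the exact warning text torchmetrics emits in your case, so adjust the regex to match it). It installs the filter in `on_fit_start()`, which runs before the sanity check, and restores the previous filter state in `on_train_start()`:

```python
import warnings

import pytorch_lightning as pl


class LitModel(pl.LightningModule):  # hypothetical; stands in for your own module
    def on_fit_start(self):
        # Runs before the validation sanity check: snapshot the current
        # warning-filter state, then silence the metric warning for now.
        # catch_warnings() is entered manually here because the suppression
        # has to span two separate hooks rather than a single `with` block.
        self._warning_ctx = warnings.catch_warnings()
        self._warning_ctx.__enter__()
        # Assumed pattern -- replace with the exact warning text you see.
        warnings.filterwarnings("ignore", message=".*confusion matrix.*")

    def on_train_start(self):
        # Runs right after sanity checking: restore the previous filter
        # state so the warning still surfaces during real validation runs.
        self._warning_ctx.__exit__(None, None, None)
```

Since `catch_warnings()` restores the exact filter state it saved on entry, this won't clobber any other warning filters you (or a library) may have configured.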