Iterate over all validation data while using limit_val_batches #14309
-
I have 10,000s of images for training a semantic segmentation model, so I am using `limit_val_batches` to keep validation from dominating each epoch. I note, however (see "Pytorch Lightning limit_val_batches and val_check_interval behavior" on Stack Overflow), that when using `limit_val_batches` the same 'N' batches, starting at index 0, are validated on every validation epoch. Rather than reusing those same 'N' batches each time, I would like the dataloader to pick up where the previous validation epoch left off, so that successive validation epochs work through the full dataset sequentially. How would I go about implementing this behavior with `limit_val_batches`? Or is this not the expected thing to do?
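For context, the setup being described is roughly the following (the batch count and check interval are illustrative, not from the question):

```python
import pytorch_lightning as pl

# Run validation on only 100 batches per validation epoch.
# With an unshuffled validation dataloader, these are always the
# *first* 100 batches, every time validation runs.
trainer = pl.Trainer(limit_val_batches=100, val_check_interval=0.5)
```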
-
Thanks @tshu-w. In that case, what is the strategy when your validation dataset (or training set) has 10,000+ images and you don't want to validate against all of them every epoch, but would rather chunk through the dataset sequentially, a small piece at a time, each epoch?
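One way to get that sequential-chunking behavior is a small custom `Sampler` that yields a different contiguous window of indices on each pass. This is a sketch under assumptions, not an answer from this thread: it assumes a map-style dataset, and `val_dataset` and `chunk_size` are placeholders.

```python
from torch.utils.data import DataLoader, Sampler

class SequentialChunkSampler(Sampler):
    """Yields a different contiguous chunk of indices on each pass,
    wrapping around at the end of the dataset, so successive
    validation epochs walk through the whole set."""

    def __init__(self, dataset_len: int, chunk_size: int):
        self.dataset_len = dataset_len
        self.chunk_size = chunk_size
        self.offset = 0  # advances after every validation epoch

    def __iter__(self):
        start = self.offset
        # Advance eagerly so the next validation epoch gets the next chunk.
        self.offset = (self.offset + self.chunk_size) % self.dataset_len
        return iter((start + i) % self.dataset_len
                    for i in range(self.chunk_size))

    def __len__(self):
        return self.chunk_size

# Hypothetical usage -- val_dataset and the sizes are placeholders.
# Because the sampler itself limits the data, limit_val_batches can be
# left at its default:
# loader = DataLoader(val_dataset, batch_size=8,
#                     sampler=SequentialChunkSampler(len(val_dataset),
#                                                    chunk_size=800))
```

Note that the offset lives on the sampler instance, so if the dataloader is rebuilt each epoch (e.g. via `reload_dataloaders_every_n_epochs`), a fresh sampler would reset to the start of the dataset.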
-
For anyone with a similar problem, aniketmaurya provided an answer over at the now-defunct PyTorch Lightning forums which also solves this issue. It was as follows (the verbatim snippet was lost with the forum):
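A reconstruction of the idea, assuming (consistent with the seeding note below) that the answer was to shuffle the validation dataloader so that `limit_val_batches` draws a different random subset of batches on each validation epoch; `self.val_dataset` and the batch size are placeholders:

```python
from torch.utils.data import DataLoader

def val_dataloader(self):
    # self.val_dataset is assumed to be a map-style Dataset defined
    # elsewhere in the LightningModule/DataModule.
    # shuffle=True means the batches kept by limit_val_batches are a
    # different random subset of the validation set each epoch.
    # (Lightning may warn about shuffling an eval dataloader; here
    # it is intentional.)
    return DataLoader(self.val_dataset, batch_size=32, shuffle=True)
```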
If you are using this method and want to compare results across training runs, you will want to use the same seed, so that the shuffled validation batches are reproducible.
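For example, with Lightning's `seed_everything` (the seed value is arbitrary):

```python
from pytorch_lightning import seed_everything

# Fixing the global seed (including dataloader workers) makes the
# shuffled validation batches identical across runs.
seed_everything(42, workers=True)
```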