limit_train_batches training #8853
Unanswered
roman-vygon asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Hi everyone!
I've been dealing with a memory leak and couldn't find a good solution, so I decided to reduce the effective epoch size of my dataset (since memory is cleared at the end of every epoch).
I've set limit_train_batches to 0.1 and increased the number of epochs tenfold. Is that OK?
Is the dataset reshuffled before the 10% for the next epoch is chosen, or not?
I'm assuming that 10 epochs with limit_train_batches=0.1 are almost the same as 1 epoch with limit_train_batches=1.0, the only difference being that items can be randomly chosen multiple times.
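Roughly what I mean, as a minimal sketch (the exact epoch counts are just an example, and `model` stands in for my LightningModule):

```python
import pytorch_lightning as pl

# Before: one full pass over the training set per epoch.
# trainer = pl.Trainer(max_epochs=10, limit_train_batches=1.0)

# After: only 10% of the training batches per epoch, with 10x as many epochs,
# so the total number of optimizer steps stays roughly the same.
trainer = pl.Trainer(max_epochs=100, limit_train_batches=0.1)
# trainer.fit(model)  # `model` is a placeholder for the actual LightningModule
```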
Replies: 1 comment

The dataset is shuffled when you create the dataloader/the sampler. So when you specify to reload the loaders for each epoch, you should be fine :)
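For example, with reload_dataloaders_every_epoch=True (or reload_dataloaders_every_n_epochs=1 in newer Lightning versions) the train_dataloader() hook is called again at the start of each epoch, and a dataloader built with shuffle=True gets a fresh RandomSampler, so the 10% of batches kept by limit_train_batches=0.1 is drawn from a newly shuffled order each epoch. A minimal sketch of that shuffling, with dummy tensors standing in for the real dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for the real training data.
dataset = TensorDataset(torch.randn(1000, 16), torch.randint(0, 2, (1000,)))

# shuffle=True attaches a RandomSampler; every time the loader is recreated
# (or re-iterated) it yields a new permutation of the indices, so the first
# 10% of batches consumed under limit_train_batches=0.1 is a different
# subset of the data each epoch.
loader = DataLoader(dataset, batch_size=32, shuffle=True)
print(type(loader.sampler).__name__)  # RandomSampler
```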