val dataloader batch size overwrites train dataloader batch size #7473
sebastiangonsal started this conversation in General · Replies: 0 comments
I am using two different dataloaders for training and validation, backed by two different datasets. Mysteriously, the batch size of `val_dataloader` (which is defined after `train_dataloader`) overwrites the batch size of `train_dataloader`, so training ends up running with the validation batch size. Is this a known bug?

The two dataloaders are defined separately and then passed to PyTorch Lightning's `trainer.fit`.
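The original snippets are not shown above, so here is a minimal sketch of the setup described (not the poster's actual code): two independent datasets, two `DataLoader`s with deliberately different batch sizes, passed to `trainer.fit`. The `TinyModel` module, dataset shapes, and batch sizes are illustrative assumptions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    """Illustrative LightningModule; stands in for the poster's model."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        # x.shape[0] here reveals which batch size training actually uses.
        return nn.functional.cross_entropy(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", nn.functional.cross_entropy(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Two independent datasets, as in the report.
train_ds = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))
val_ds = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))

# Deliberately different batch sizes so any overwrite is visible.
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=16)

trainer = pl.Trainer(max_epochs=1)
# Note: the keyword names differ across Lightning versions
# (train_dataloader vs. train_dataloaders); this uses the newer spelling.
trainer.fit(TinyModel(), train_dataloaders=train_loader, val_dataloaders=val_loader)
```

With distinct batch sizes (64 vs. 16), checking `x.shape[0]` inside `training_step` makes the reported behavior easy to confirm or rule out.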