Labels: bug, needs triage, ver: 2.5.x
Description
Bug description
When using pytorch-lightning==2.5.1, the train_dataloader() method of a LightningDataModule is called only once, at the very start of training (before epoch 0), even when reload_dataloaders_every_n_epochs=1 is passed to the Trainer.
The expected behavior is that the Trainer calls train_dataloader() again at the start of every subsequent epoch (epoch 1, epoch 2, etc.) when reload_dataloaders_every_n_epochs=1.
This prevents the intended dynamic reloading or switching of the training dataset between epochs.
Environment:
PyTorch Lightning Version: 2.5.1
PyTorch Version: 2.7.0
Python Version: 3.11
CUDA/GPU information: NVIDIA A100
What version are you seeing the problem on?
v2.5
Reproduced in studio
No response
How to reproduce the bug
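The report does not include a reproduction script. Below is a minimal sketch of one, assuming the standard pytorch_lightning 2.x API; ToyDataModule, ToyModel, and the printed message are illustrative names, not from the original report. The print statement makes each dataloader reload visible in stdout.

```python
# Hypothetical minimal reproduction (not from the original report).
# With reload_dataloaders_every_n_epochs=1, "train_dataloader() called"
# should be printed once per epoch; the reported bug is that it is
# printed only once, before epoch 0.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ToyDataModule(pl.LightningDataModule):
    def train_dataloader(self):
        # Log every invocation so reloads are visible.
        print("train_dataloader() called")
        x = torch.randn(8, 4)
        y = torch.randn(8, 1)
        return DataLoader(TensorDataset(x, y), batch_size=4)


class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


if __name__ == "__main__":
    trainer = pl.Trainer(
        max_epochs=3,
        reload_dataloaders_every_n_epochs=1,  # expect a reload before every epoch
        enable_checkpointing=False,
        logger=False,
    )
    trainer.fit(ToyModel(), datamodule=ToyDataModule())
    # Expected: "train_dataloader() called" printed 3 times (once per epoch).
    # Reported: it is printed only once, before epoch 0.
```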
Error messages and logs
No response
More info
No response