
Reloading dataloaders every n epochs (trainer flag reload_dataloaders_every_n_epochs) not working in 2.5.1 #20812

@carlos10garrido

Description

Bug description

With pytorch-lightning==2.5.1, the train_dataloader() method of a LightningDataModule is called only once, at the very beginning of training (before epoch 0), even when reload_dataloaders_every_n_epochs is explicitly set to 1 on the Trainer.

The expected behavior is that train_dataloader() should be called by the Trainer at the start of every epoch (Epoch 1, Epoch 2, etc.) when reload_dataloaders_every_n_epochs=1.

This prevents the intended dynamic reloading or switching of the training dataset each epoch.
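For context, the flag is passed at Trainer construction; a minimal sketch of the documented usage (the max_epochs value here is illustrative), with a fuller reproduction under "How to reproduce the bug" below:

```python
import pytorch_lightning as pl

# reload_dataloaders_every_n_epochs=1 should make the Trainer request a fresh
# train_dataloader() before every training epoch, not only once when fit() starts.
trainer = pl.Trainer(max_epochs=5, reload_dataloaders_every_n_epochs=1)
```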

Environment:

PyTorch Lightning Version: 2.5.1
PyTorch Version: 2.7.0
Python Version: 3.11
CUDA/GPU information: NVIDIA A100

What version are you seeing the problem on?

v2.5

Reproduced in studio

No response

How to reproduce the bug
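This section was left empty in the report; a minimal reproduction along the following lines should make the behavior visible. All class names here (RandomDataset, CountingDataModule, BoringModel) are illustrative stand-ins, not code from the original report:

```python
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl


class RandomDataset(Dataset):
    """Tiny random dataset so the script is self-contained."""

    def __init__(self, size=64, dim=32):
        self.data = torch.randn(size, dim)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


class CountingDataModule(pl.LightningDataModule):
    """Counts how many times the Trainer asks for a fresh train dataloader."""

    def __init__(self):
        super().__init__()
        self.calls = 0

    def train_dataloader(self):
        self.calls += 1
        print(f"train_dataloader() call #{self.calls}")
        return DataLoader(RandomDataset(), batch_size=8)


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    dm = CountingDataModule()
    trainer = pl.Trainer(
        max_epochs=3,
        reload_dataloaders_every_n_epochs=1,
        logger=False,
        enable_checkpointing=False,
        enable_progress_bar=False,
    )
    trainer.fit(BoringModel(), datamodule=dm)
    # Expected with reload_dataloaders_every_n_epochs=1: three calls, one per
    # epoch. Reported behavior on 2.5.1: a single call before epoch 0.
    print(f"total train_dataloader() calls: {dm.calls}")
```

If the flag is honored, the printed total should equal max_epochs; the report describes it staying at 1.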

Error messages and logs

# Error messages and logs here please


More info

No response

Metadata

Assignees: no one assigned

Labels: bug (Something isn't working), needs triage (Waiting to be triaged by maintainers), ver: 2.5.x
