
Custom TQDMProgressBar changes not reflected #20384

@oseymour

Description


Bug description

I wrote a custom TQDMProgressBar subclass with a few changes. However, when I run trainer.fit() in JupyterLab, the default progress bar is still used.

What version are you seeing the problem on?

v2.4

How to reproduce the bug

import lightning as L
from lightning.pytorch.callbacks import TQDMProgressBar

class CustomProgBar(TQDMProgressBar):
    def __init__(self, ncols: int = 100):
        super().__init__(leave=True)
        self.ncols = ncols
    
    def init_sanity_tqdm(self):
        bar = super().init_sanity_tqdm()
        bar.ncols = self.ncols
        return bar

    def init_train_tqdm(self):
        bar = super().init_train_tqdm()
        bar.ncols = self.ncols
        return bar

    def init_validation_tqdm(self):
        bar = super().init_validation_tqdm()
        bar.ncols = self.ncols
        return bar


trainer = L.Trainer(accelerator="cpu", max_epochs=5, callbacks=[CustomProgBar(),], log_every_n_steps=1)

# `model` and `data` are LightningModule and LightningDataModule instances, respectively.
# I can include the code for this if you think it's needed for debugging this.
trainer.fit(model, datamodule=data)
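As a first diagnostic, independent of Lightning, it can help to confirm that the overridden methods are actually reachable through the MRO. Below is a minimal sketch with hypothetical stand-in classes playing the roles of TQDMProgressBar and the subclass; in the real setup, the analogous check would be inspecting `type(trainer.progress_bar_callback)` after constructing the Trainer:

```python
# Stand-ins for TQDMProgressBar and the custom subclass (hypothetical names,
# not the real Lightning classes).
class BaseProgressBar:
    def init_train_tqdm(self):
        return {"ncols": None}  # "default" bar settings

class CustomProgBar(BaseProgressBar):
    def init_train_tqdm(self):
        bar = super().init_train_tqdm()
        bar["ncols"] = 100  # the customization that should take effect
        return bar

cb = CustomProgBar()
# If the subclass is wired up correctly, the override shadows the base method:
print(CustomProgBar.init_train_tqdm is BaseProgressBar.init_train_tqdm)  # False
print(cb.init_train_tqdm())  # {'ncols': 100}
```

If the equivalent check against the real classes shows the override is active but the rendered bar is still unchanged, the problem is more likely in how the notebook frontend renders tqdm than in callback registration.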

Error messages and logs

Printout without callbacks argument passed:

 | Name  | Type     | Params | Mode
------------------------------------------
0 | model | UNet     | 3.0 M  | eval
1 | loss  | DiceLoss | 0      | eval
------------------------------------------
3.0 M     Trainable params
0         Non-trainable params
3.0 M     Total params
11.893    Total estimated model params size (MB)
0         Modules in train mode
112       Modules in eval mode
Sanity Checking: |                                                                               | 0/? [00:00<…
Training: |                                                                                      | 0/? [00:00<…

Printout with just the CustomProgBar callback:

 | Name  | Type     | Params | Mode
------------------------------------------
0 | model | UNet     | 3.0 M  | eval
1 | loss  | DiceLoss | 0      | eval
------------------------------------------
3.0 M     Trainable params
0         Non-trainable params
3.0 M     Total params
11.893    Total estimated model params size (MB)
0         Modules in train mode
112       Modules in eval mode
Sanity Checking: |                                                                               | 0/? [00:00<…
Training: |                                                                                      | 0/? [00:00<…

Environment

Current environment
- PyTorch Lightning Version: 2.4.0
- PyTorch Version: 2.4.0
- Python version: 3.12.3
- OS: Windows 10
- CUDA/cuDNN version: n/a
- GPU models and configuration: none, CPU only
- How you installed Lightning (`conda`, `pip`, source): pip
- TQDM version: 4.66.6

More info

No response

cc @lantiga
