How are callback calls handled in multi-gpu mode? #8607
Answered by ananthsub
ktrapeznikov asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Say I have a callback that changes a hyperparameter of the underlying model before every epoch:

```python
import pytorch_lightning as pl

class ChangeHyperParam(pl.Callback):
    def on_train_epoch_start(self, trainer, pl_module):
        pl_module.hyper_param1 = func(trainer.current_epoch)
```

What happens when I use this callback in DDP mode? Will this call be made on every GPU automatically, or do I have to do something else?
Answered by ananthsub on Jul 29, 2021
Callbacks run on all ranks, so this would be called on each GPU in distributed training. As long as `func` returns the same value given the same inputs, this should be fine.
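For illustration, a minimal sketch of such a callback under that assumption. The schedule function here (a hypothetical exponential decay standing in for the question's `func`) depends only on `trainer.current_epoch`, so every DDP rank computes the same value independently and no cross-process synchronization is needed:

```python
import pytorch_lightning as pl

def hyper_param_schedule(epoch: int) -> float:
    # Deterministic: depends only on the epoch number, so every
    # DDP rank computes the same value without communication.
    return 0.1 * (0.9 ** epoch)

class ChangeHyperParam(pl.Callback):
    def on_train_epoch_start(self, trainer, pl_module):
        # This hook runs once per process, i.e. once per GPU in DDP.
        pl_module.hyper_param1 = hyper_param_schedule(trainer.current_epoch)

        # Rank-dependent side effects (printing, logging to disk)
        # should be guarded so they happen on a single rank only.
        if trainer.is_global_zero:
            print(f"epoch {trainer.current_epoch}: "
                  f"hyper_param1 = {pl_module.hyper_param1}")
```

If the new value instead came from something non-deterministic (a random draw, a data-dependent statistic computed on one rank), the ranks could disagree, and you would need to broadcast the value from a single rank so all processes stay in sync.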
Answer selected by ktrapeznikov