Hey @sandylaker! You can use torch.distributed.all_reduce. This functionality is also reachable from within the LightningModule via the training type plugin, although it may be worth exposing it more directly in Lightning to make it easier to access:

x = self.trainer.accelerator.training_type_plugin.reduce(x)
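Here is a minimal sketch of the torch.distributed.all_reduce approach inside a LightningModule, assuming the distributed process group has already been initialized by the trainer; the metric computation is a placeholder and the averaging step is one common convention, not the only option:

import torch
import torch.distributed as dist
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        # Placeholder for a per-process scalar metric computed on this rank.
        metric = torch.tensor(0.0, device=self.device)

        if dist.is_available() and dist.is_initialized():
            # Sum the metric across all processes in the default group,
            # then divide by the world size to get the global mean.
            dist.all_reduce(metric, op=dist.ReduceOp.SUM)
            metric = metric / dist.get_world_size()

        return metric

The plugin call quoted above does the same kind of cross-process reduction for you, so inside the module you can also write something like reduced = self.trainer.accelerator.training_type_plugin.reduce(x) instead of calling torch.distributed directly.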
