Hi @FutureWithoutEnding

When the batch is transferred to the device
I think the pseudocode in the docs explains when a batch is transferred to each device very well:
https://pytorch-lightning.readthedocs.io/en/1.6.5/common/lightning_module.html#hooks

What is transferred to the device
See the list of supported data structures in the documentation. If you have a custom data structure, you need to override this hook in your LightningModule:
https://pytorch-lightning.readthedocs.io/en/1.6.5/common/lightning_module.html#transfer-batch-to-device
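As a rough sketch of what such an override does (the `CustomBatch` class here is hypothetical; in PL 1.6.x the hook is `transfer_batch_to_device(self, batch, device, dataloader_idx)`), the key idea is giving your structure a `to(device)` method that moves each tensor field:

```python
import torch

# Hypothetical custom batch type that Lightning cannot move automatically.
class CustomBatch:
    def __init__(self, inputs, targets):
        self.inputs = inputs
        self.targets = targets

    def to(self, device):
        # Move every tensor field; return self so calls can be chained.
        self.inputs = self.inputs.to(device)
        self.targets = self.targets.to(device)
        return self

# Inside your LightningModule the override would then look roughly like:
#
#     def transfer_batch_to_device(self, batch, device, dataloader_idx):
#         if isinstance(batch, CustomBatch):
#             return batch.to(device)
#         return super().transfer_batch_to_device(batch, device, dataloader_idx)

batch = CustomBatch(torch.zeros(2, 3), torch.zeros(2))
batch = batch.to(torch.device("cpu"))
```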

Also, if you need to manually transfer tensors to a device, you can use self.device so that your code stays hardware-agnostic: your_tensor.to(self.device):
htt…
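A minimal sketch of that hardware-agnostic pattern (plain torch here, with an explicit `device` variable standing in for `self.device`, which inside a LightningModule tracks where the module's parameters live):

```python
import torch

# Pick whatever hardware is available; inside a LightningModule you would
# skip this line and use self.device instead.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors created on the fly (e.g. a mask) must be moved explicitly,
# otherwise they stay on CPU and device-mismatch errors follow on GPU.
mask = torch.ones(4, dtype=torch.bool)
mask = mask.to(device)  # in a LightningModule: mask.to(self.device)
```

This keeps the same code working on CPU, single GPU, or multi-GPU without hard-coding a device string anywhere.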

Answer selected by FutureWithoutEnding