Callback on_train_batch_end batch on CPU #6945
Unanswered
import-antigravity asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment 8 replies
-
Hi, I have a callback with methods for on_train_batch_end and on_validation_batch_end, and I want to take the same batch and run it through another model. However, the batch arrives on the CPU, so I have to move it back to the GPU. Is there any way to avoid this redundant data movement? It's really slowing things down.
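One way to sidestep the copy entirely is to keep the second model on the same device and run it where the batch is already resident, e.g. inside training_step rather than in a callback. A sketch of that approach (MainWithAux, aux_model, and the MSE loss are illustrative placeholders, not from this thread):

```python
import torch
import pytorch_lightning as pl


class MainWithAux(pl.LightningModule):
    """Sketch: register the second model as a submodule so Lightning places it
    on the same device as everything else, then run it where the batch is
    already on the GPU instead of re-uploading the batch in a callback."""

    def __init__(self, main_model: torch.nn.Module, aux_model: torch.nn.Module):
        super().__init__()
        self.main_model = main_model
        self.aux_model = aux_model  # submodule -> moved to self.device by Lightning

    def training_step(self, batch, batch_idx):
        x, y = batch  # already on self.device here, no extra copy needed
        loss = torch.nn.functional.mse_loss(self.main_model(x), y)
        with torch.no_grad():
            aux_out = self.aux_model(x)  # reuses the on-device batch
        self.log("aux_out_mean", aux_out.mean())
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.main_model.parameters(), lr=0.1)
```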
-
Can you give a runnable example that demonstrates this? I can't see where Lightning would move these outputs back to CPU.

```python
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
    # pl_module, outputs, and batch should all be on the same device
    ...
```
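A minimal self-contained way to check this (TinyModule, the synthetic dataset, and the single-GPU Trainer flags are illustrative assumptions, using the callback signature from this era of Lightning):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class DeviceCheckCallback(pl.Callback):
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        x, _ = batch  # assumes an (input, target) batch
        # If Lightning had moved the batch back to CPU, these devices would differ.
        print(f"pl_module on {pl_module.device}, batch on {x.device}")


class TinyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    ds = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
    trainer = pl.Trainer(gpus=1, max_epochs=1, callbacks=[DeviceCheckCallback()])
    trainer.fit(TinyModule(), DataLoader(ds, batch_size=16))
```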