How to use lightning with a dataset that is generated on the GPU? #14228
Unanswered
turian asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment
-
If your data is already on the GPU, you can override `transfer_batch_to_device` so Lightning skips its default device move:

```python
def transfer_batch_to_device(self, batch, *args, **kwargs):
    return batch
```

Note that this might create issues in the multi-GPU case if your data is not on the same device as the model.
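For context, here is a minimal sketch of where that override lives. A plain class is used as a stand-in so the snippet runs without Lightning installed; in a real project `SynthModule` would subclass `pl.LightningModule`, and the hook name and signature are the ones Lightning calls:

```python
class SynthModule:  # in practice: class SynthModule(pl.LightningModule)
    def transfer_batch_to_device(self, batch, *args, **kwargs):
        # The batch was already generated on the GPU, so skip Lightning's
        # default .to(device) transfer and hand it back unchanged.
        return batch


module = SynthModule()
batch = {"audio": [0.0, 0.1]}  # stand-in for a batch of GPU tensors
assert module.transfer_batch_to_device(batch) is batch  # identity, no copy
```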
-
I am a long-time Lightning fan, but in the past I have been bitten by how opinionated it is about training loops. I have a new project in pure torch, and I'm debating whether I can easily migrate it to Lightning. I have a related question here: #14229
My dataset is generated ON the GPU using torchsynth. All I need to know is the batch number, which is used to deterministically generate the data on the GPU. What is the best way to use Lightning in this setting, so as to avoid unnecessary CPU-to-GPU data moves?
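One way to structure this is a dataset that yields only batch numbers (tiny integers, essentially free to move), with the actual synthesis done on-device from that number. A minimal sketch using plain torch, with `generate_batch` as a hypothetical deterministic stand-in for the torchsynth call (the tensor shapes and seeding scheme are assumptions, not torchsynth's API):

```python
import torch
from torch.utils.data import DataLoader, Dataset


class BatchIndexDataset(Dataset):
    """Yields only batch numbers; the real data is synthesized on-device later."""

    def __init__(self, num_batches):
        self.num_batches = num_batches

    def __len__(self):
        return self.num_batches

    def __getitem__(self, idx):
        return idx  # a single int per "batch"


def generate_batch(batch_num, device, batch_size=4, n_samples=16):
    # Deterministic stand-in for torchsynth: seed a generator with the
    # batch number and synthesize directly on `device`, so no host copy occurs.
    g = torch.Generator(device=device)
    g.manual_seed(int(batch_num))
    return torch.randn(batch_size, n_samples, generator=g, device=device)


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# batch_size=None disables collation: each item IS one batch number.
loader = DataLoader(BatchIndexDataset(num_batches=3), batch_size=None)

for batch_num in loader:
    audio = generate_batch(batch_num, device)  # created on-device
    # ... training logic here ...
```

In a Lightning setup you would pair this with the `transfer_batch_to_device` override from the reply so the already-on-GPU batch is not moved again; whether synthesis belongs in the dataloader or in `training_step` is a design choice (doing it in `training_step` keeps everything on the training device and avoids worker/CUDA interactions).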