The problem with the standard strategy is one I described here in more detail.
Hi,

I have written my own PyTorch Lightning module, fed a `torch_geometric.loader.DataLoader` to it, and passed everything to `pl.Trainer` with `strategy='ddp'`.
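Roughly, the setup looks like the sketch below (the GCN model, the `TUDataset` dataset, and the hyperparameters are only placeholders to illustrate the pipeline, not my actual code):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.datasets import TUDataset


class LitGNN(pl.LightningModule):
    def __init__(self, in_channels, hidden_channels, num_classes):
        super().__init__()
        self.conv = GCNConv(in_channels, hidden_channels)
        self.lin = torch.nn.Linear(hidden_channels, num_classes)

    def training_step(self, batch, batch_idx):
        # `batch` is a torch_geometric.data.Batch produced by the Collater
        h = self.conv(batch.x, batch.edge_index).relu()
        h = global_mean_pool(h, batch.batch)  # one vector per graph
        return F.cross_entropy(self.lin(h), batch.y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


dataset = TUDataset(root='data/TUDataset', name='MUTAG')
loader = DataLoader(dataset, batch_size=32, shuffle=True)  # PyG DataLoader, not torch.utils.data

model = LitGNN(dataset.num_features, 64, dataset.num_classes)
trainer = pl.Trainer(accelerator='gpu', devices=2, strategy='ddp', max_epochs=10)
trainer.fit(model, loader)
```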
This pipeline seems to train without error. So why does the documentation contain the following note?

> Currently only the `pytorch_lightning.strategies.SingleDeviceStrategy` and `pytorch_lightning.strategies.DDPSpawnStrategy` training strategies of [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/latest/guides/speed.html) are supported in order to correctly share data across all devices/processes.
What data is shared? As far as I can see, the only difference between `torch_geometric.loader.DataLoader` and the classical PyTorch `DataLoader` is the `Collater`. I debugged it a little, since my input is a list of `Data` objects, and it seems to work fine under multiprocessing.
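If I understand correctly, the `Collater` essentially just calls `Batch.from_data_list` on the list of `Data` objects it receives, as in this toy example (the shapes and features are made up):

```python
import torch
from torch_geometric.data import Data, Batch

# Two toy graphs with 3 and 2 nodes respectively
g1 = Data(x=torch.randn(3, 4), edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]))
g2 = Data(x=torch.randn(2, 4), edge_index=torch.tensor([[0, 1], [1, 0]]))

# What the Collater inside torch_geometric.loader.DataLoader effectively does:
batch = Batch.from_data_list([g1, g2])

print(batch)        # concatenated node features and offset edge indices
print(batch.batch)  # tensor([0, 0, 0, 1, 1]) -- maps each node to its graph
```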
Thanks,
Han