How to organize data when I want to use multi-GPU in one server. #10191
Unanswered
Zrealshadow asked this question in Q&A
Replies: 1 comment

like this example:
Hi,
I want to train a model on multiple GPUs on the same server.
Currently, I use NeighborLoader to generate sub-HeteroData objects as mini-batches for model training. I notice there is a DataListLoader that supports multi-GPU training, but is there any class like NeighborListLoader?
As for tutorials, the one at https://pytorch-geometric.readthedocs.io/en/2.6.0/tutorial/distributed_pyg.html is more about distributed (multi-node) training.
Is there a tutorial for multi-GPU training on a single server?
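For context, PyG's examples for single-server multi-GPU training typically use DistributedDataParallel with one process per GPU, where each rank builds its own NeighborLoader over a disjoint shard of the seed nodes, rather than a hypothetical NeighborListLoader. A minimal sketch of the sharding idea, assuming a boolean `train_mask` on the data object; the function name `shard_seed_nodes` and the loader parameters in the commented skeleton are illustrative, not an official API:

```python
import torch


def shard_seed_nodes(train_mask: torch.Tensor, world_size: int, rank: int) -> torch.Tensor:
    """Give each rank a disjoint slice of the seed (training) nodes, so the
    per-rank NeighborLoader instances never sample the same mini-batch."""
    seed = train_mask.nonzero(as_tuple=False).view(-1)
    return seed.chunk(world_size)[rank]


# Per-process skeleton (one process per GPU, launched e.g. with
# torch.multiprocessing.spawn); names and parameters here are assumptions:
#
#   dist.init_process_group("nccl", rank=rank, world_size=world_size)
#   loader = NeighborLoader(
#       data,  # the full (Hetero)Data stays on CPU; only batches move to GPU
#       input_nodes=shard_seed_nodes(data.train_mask, world_size, rank),
#       num_neighbors=[10, 10], batch_size=128, shuffle=True)
#   model = DistributedDataParallel(model.to(rank), device_ids=[rank])
#   for batch in loader:
#       out = model(batch.x.to(rank), batch.edge_index.to(rank))
```

For a HeteroData object, `input_nodes` would instead be a tuple like `('paper', shard_seed_nodes(...))` for the seed node type. The alternative route the question mentions, DataListLoader with torch_geometric.nn.DataParallel, keeps a single process but replicates the model per batch, which is why the DDP-per-process pattern is usually preferred.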