can dataparallel or distributed dataparallel bring performance speed up for training graph classification tasks? #2705
Unanswered
wzsunshine asked this question in Q&A
Replies: 1 comment 6 replies
-
This depends and is best evaluated empirically. If you can already process a large batch size on a single GPU (e.g., …
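One way to make the "evaluate empirically" advice concrete is to benchmark training throughput (graphs per second) at each GPU count and check how close you get to linear scaling. A minimal pure-Python sketch; the helper name `scaling_efficiency` and the numbers below are illustrative, not from the thread:

```python
def scaling_efficiency(throughput_1gpu: float, throughput_ngpu: float, n_gpus: int) -> float:
    """Fraction of the ideal linear speedup achieved when going from 1 to n GPUs.

    1.0 means perfect linear scaling; values well below ~0.7 suggest the
    multi-GPU overhead (gradient sync, data loading, small per-GPU batches)
    is eating most of the potential speedup.
    """
    speedup = throughput_ngpu / throughput_1gpu
    return speedup / n_gpus

# Hypothetical measurements: 1 GPU trains 800 graphs/s, 4 GPUs train 2400 graphs/s.
# That is a 3x speedup out of an ideal 4x, i.e. 75% efficiency.
eff = scaling_efficiency(800.0, 2400.0, 4)
print(f"{eff:.0%}")  # prints "75%"
```

If the measured efficiency stays high as you add GPUs, DistributedDataParallel is paying off; if it collapses, the per-GPU batch has likely become too small for the communication overhead to be worthwhile.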
-
Hi all,
I have a general question: when training GNNs on multiple GPUs, will there be a performance speedup, and under what conditions does it occur (or not)? Any hint will be appreciated.