Support for Distributed Full-batch Training on large graphs #9339
Unanswered · zhuangbility111 asked this question in Q&A

Hi! Does PyG provide a solution for distributed full-batch training on large-scale graphs (e.g., ogbn-papers100M)?

Replies: 1 comment
- Distributed data parallel does not work in full-batch mode. What you can theoretically do is to use distributed training with […]
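The reply is cut off above, so it is not clear which distributed setup was being suggested. Purely as a hedged illustration (not the commenter's actual recommendation), one way to approximate full-neighborhood training with PyG's existing tooling is distributed mini-batch training where `NeighborLoader` keeps all neighbors per hop (`num_neighbors=[-1, -1]`) and the model is wrapped in `DistributedDataParallel`. The dataset (`Reddit` as a small stand-in for ogbn-papers100M), the hyper-parameters, and the `run` helper below are assumptions made for the sketch:

```python
# Minimal sketch: distributed mini-batch training with full (untruncated)
# neighborhoods. Dataset, hyper-parameters, and helper names are illustrative.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel
from torch_geometric.datasets import Reddit
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GraphSAGE


def run(rank: int, world_size: int, data):
    dist.init_process_group('gloo', init_method='tcp://127.0.0.1:29500',
                            rank=rank, world_size=world_size)

    # Each process trains on its own shard of the training nodes.
    train_idx = data.train_mask.nonzero(as_tuple=False).view(-1)
    train_idx = torch.tensor_split(train_idx, world_size)[rank]

    # num_neighbors=[-1, -1] keeps *all* neighbors per hop, so every batch
    # sees full neighborhoods; memory scales with the sampled sub-graph,
    # not with the whole graph.
    loader = NeighborLoader(data, input_nodes=train_idx,
                            num_neighbors=[-1, -1], batch_size=1024,
                            shuffle=True, drop_last=True)  # equal #steps per rank

    model = GraphSAGE(data.num_features, hidden_channels=128, num_layers=2,
                      out_channels=41)
    model = DistributedDataParallel(model)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    for epoch in range(5):
        for batch in loader:
            optimizer.zero_grad()
            out = model(batch.x, batch.edge_index)[:batch.batch_size]
            loss = torch.nn.functional.cross_entropy(
                out, batch.y[:batch.batch_size])
            loss.backward()   # gradients are all-reduced across processes
            optimizer.step()

    dist.destroy_process_group()


if __name__ == '__main__':
    dataset = Reddit('./data/Reddit')   # stand-in for a much larger graph
    world_size = 2
    mp.spawn(run, args=(world_size, dataset[0]), nprocs=world_size, join=True)
```

Note that this is still mini-batch training: each target node sees its full receptive field, but the loss is not a single full-batch gradient over the entire graph.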