Replies: 2 comments 2 replies
-
Would `NeighborLoader` be useful in this case? It samples subgraphs, and the subgraphs could be loaded onto your GPU one batch at a time.
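For reference, a minimal sketch of how `NeighborLoader` could stream sampled subgraphs through a model with only one subgraph on the GPU at a time. Here `model` and `data` stand in for the MACE model and the `AtomicData` graph, and the `model(batch)` call signature is an assumption, not the actual MACE forward API:

```python
import torch
from torch_geometric.loader import NeighborLoader


def run_sampled_inference(model, data, device="cuda"):
    """Stream sampled subgraphs of `data` through `model`, one batch on the GPU at a time.

    `model` is a placeholder for the force-field model; the `model(batch)` call
    signature is an assumption rather than the real MACE API.
    """
    device = torch.device(device)
    model = model.to(device).eval()

    loader = NeighborLoader(
        data,                     # full torch_geometric.data.Data graph (stays on CPU)
        num_neighbors=[25, 10],   # neighbors sampled per hop; tune to the model's depth/cutoff
        batch_size=512,           # seed nodes per mini-batch
        shuffle=False,
    )

    outputs = []
    with torch.no_grad():
        for batch in loader:
            batch = batch.to(device)           # only this subgraph is moved to the GPU
            outputs.append(model(batch).cpu())  # per-subgraph forward pass (assumed API)
    return outputs
```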
-
Will this work for graph-level regression problems? I think the MACE model does a global pooling at the end to get the total energy of the system, so will the sampled subgraphs still give the correct pooled total?
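If the readout really is a plain sum of per-atom energies, one way the pooled total could still be recovered from sampled pieces is to sum per-atom contributions of the seed (owned) nodes only, so every atom is counted exactly once despite halo overlap between samples. A minimal sketch, where `per_atom_energy` is a hypothetical callable returning one value per node of the sampled subgraph (not part of MACE or PyG):

```python
import torch


def total_from_sampled_batches(loader, per_atom_energy, device="cuda"):
    """Sum a graph-level quantity from NeighborLoader batches.

    Assumes the global readout is a plain sum over per-atom energies and that
    `per_atom_energy(batch)` (hypothetical) returns one value per node of the
    sampled subgraph. In NeighborLoader output the seed nodes come first, so
    only the first `batch.batch_size` nodes are counted.
    """
    device = torch.device(device)
    total = torch.zeros((), device=device)
    with torch.no_grad():
        for batch in loader:
            batch = batch.to(device)
            e_atom = per_atom_energy(batch)            # shape: [num_sampled_nodes]
            total += e_atom[: batch.batch_size].sum()  # count each atom exactly once
    return total.item()
```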
-
Hi PyG team and community,

I am looking for some help implementing a solution that might involve multi-GPU inference of a single large graph object through a PyTorch Geometric model.

The context is running the MACE neural network force field (NNFF), which uses PyTorch Geometric under the hood. The protein system on which the NNFF will be evaluated is constructed as a single `mace.data.atomic_data.AtomicData`, which inherits from `torch_geometric.data.Data`. The single protein graph has 5000+ nodes (atoms) and 100k+ edges, which, combined with the complexity of the MACE model, is far too large to fit on a single A100 40GB. Would it be possible to partition this graph across multiple GPUs, run inference on each partition, and consolidate the results into a single final readout?

I appreciate any advice and help I can get on this.
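One way the partition-and-consolidate idea could look, sketched under heavy assumptions: the model is treated as a hypothetical per-atom-energy function, the halo depth `num_message_passing_layers` is assumed to match the model's receptive field, and the extra fields a real MACE `AtomicData` carries (cell, shifts, etc.) are not handled. The sketch splits the atoms into chunks, builds a k-hop halo subgraph per chunk, runs each piece on its own GPU, and sums the energies of owned atoms only:

```python
import torch
from torch_geometric.utils import k_hop_subgraph


def partitioned_total_energy(data, models_per_gpu, num_message_passing_layers=2):
    """Partition a single Data graph across GPUs and consolidate a summed readout.

    `models_per_gpu` is a list with one model replica per GPU, and each model is
    assumed (hypothetically) to return one energy per node of the subgraph it is
    given; the real MACE forward pass and AtomicData fields are not handled here.
    """
    devices = [torch.device(f"cuda:{i}") for i in range(len(models_per_gpu))]
    chunks = torch.chunk(torch.arange(data.num_nodes), len(devices))

    total = 0.0
    for owned, model, device in zip(chunks, models_per_gpu, devices):
        # Subgraph containing the owned atoms plus a k-hop halo, so each owned
        # atom keeps the full receptive field of the message-passing stack.
        subset, _, owned_pos, _ = k_hop_subgraph(
            owned,
            num_message_passing_layers,
            data.edge_index,
            relabel_nodes=True,
            num_nodes=data.num_nodes,
        )
        part = data.subgraph(subset).to(device)
        model = model.to(device).eval()
        with torch.no_grad():
            e_atom = model(part)                   # assumed: per-atom energies
        total += e_atom[owned_pos].sum().item()    # count owned atoms only
    return total
```

Each owned atom appears in exactly one chunk, so summing the per-partition owned-atom energies reproduces a sum-style global readout; halo atoms are recomputed on neighboring GPUs but never double-counted.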