How can we decrease the resource usage of RGATConv?
#10447 · songsong0425 started this conversation in Ideas · Replies: 0
Dear PyG community,

Greetings, and thank you as always for your great work on graph-based deep learning.

I have a simple question about `RGATConv`, specifically its training time and GPU memory usage. While using it for a link prediction task, I noticed that both depend heavily on the size of the subgraphs generated by `LinkNeighborLoader`, which is controlled by its `num_neighbors` parameter. On an A100 GPU, I frequently ran into OOM errors whenever I set `num_neighbors` to more than 256. Here is a snippet that illustrates my setup.
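To be clear, the snippet below is a simplified sketch of my setup rather than my exact code: the random graph, feature dimensions, and training loop are placeholders, but it follows the same pattern (a two-layer `RGATConv` encoder trained for link prediction with `LinkNeighborLoader` and a large `num_neighbors`).

```python
import torch
import torch.nn.functional as F

from torch_geometric.data import Data
from torch_geometric.loader import LinkNeighborLoader
from torch_geometric.nn import RGATConv

# Toy multi-relational graph (placeholder sizes; my real graph is much larger).
num_nodes, num_edges, num_relations = 10_000, 200_000, 5
data = Data(
    x=torch.randn(num_nodes, 64),
    edge_index=torch.randint(0, num_nodes, (2, num_edges)),
    edge_type=torch.randint(0, num_relations, (num_edges,)),
)

# Sample subgraphs around the supervision edges; positive edges get label 1,
# sampled negative edges get label 0.
loader = LinkNeighborLoader(
    data,
    num_neighbors=[256, 256],          # the setting that triggers OOM for me
    edge_label_index=data.edge_index,  # supervise on existing edges
    neg_sampling_ratio=1.0,
    batch_size=1024,
    shuffle=True,
)


class RGATEncoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, num_relations):
        super().__init__()
        self.conv1 = RGATConv(in_channels, hidden_channels, num_relations)
        self.conv2 = RGATConv(hidden_channels, hidden_channels, num_relations)

    def forward(self, x, edge_index, edge_type):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RGATEncoder(64, 128, num_relations).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)

model.train()
for batch in loader:
    batch = batch.to(device)
    optimizer.zero_grad()
    z = model(batch.x, batch.edge_index, batch.edge_type)
    src, dst = batch.edge_label_index
    pred = (z[src] * z[dst]).sum(dim=-1)  # dot-product link decoder
    loss = F.binary_cross_entropy_with_logits(pred, batch.edge_label.float())
    loss.backward()
    optimizer.step()
```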
With this setting, training took almost 80 GB of GPU memory and about 20 minutes per epoch. In this case, can we reduce the memory usage and training time by modifying the `RGATConv` code? If you have any ideas, please let me know.

Thank you for reading this question.