nn.TransformerConv leading to CUDA out of memory? #3388
-
I'm running a simple test on the ogbl-collab dataset. With 235,868 nodes and 2,358,104 edges, the following line raises a CUDA out of memory error, although the code runs smoothly on small-scale demo input data. The failing operation looks like a multiplication of [2358104, 1, 256] * [2358104, 1, 1]. I'm new to the geometric toolbox; is this computed over the whole set of edges in the graph? Any help is appreciated.
-
TransformerConv scales linearly with the number of edges in your graph, so it is expected that there is a certain threshold beyond which everything no longer fits into GPU memory. I think you will need to use some scalability techniques to make TransformerConv applicable to larger graphs, e.g., via loader.NeighborLoader, loader.ClusterLoader or loader.GraphSAINTSampler.