SparseTensor memory footprint #8495
jmpark0808 asked this question in Q&A · Unanswered · 1 comment, 2 replies
Not all GNN operators can leverage sparse-matrix multiplication inside their computations (and that's where the memory gains will come from). For example, if you switch to …
Hi everyone,
I've recently started using PyTorch Geometric and wanted to check whether I'm using memory-efficient aggregation correctly.
I have two reproducible scripts, one using `edge_index` and the other using `SparseTensor`.
Using `edge_index`:
which returns:
Using `SparseTensor`:
which returns:
Is it because the adjacency matrix (i.e., `edge_index`) is dense, or am I doing something wrong in the implementation?
Any help is greatly appreciated!