The docs mention that "In some cases, GNNs can also be implemented as a simple-sparse matrix multiplication. As a general rule of thumb, this holds true for GNNs that do not make use of the central node features x_i or multi-dimensional edge features when computing messages." Is there no way to make the multi-dimensional edge feature case more efficient?
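For context, the rule of thumb the docs describe can be sketched in plain PyTorch: with sum aggregation and messages that depend only on the source features x_j (no x_i, no edge features), message passing reduces to one sparse matrix multiplication. The toy graph and feature sizes below are made up for illustration; this is not PyG's internal implementation.

```python
import torch

edge_index = torch.tensor([[0, 2, 2],
                           [1, 1, 0]])            # [2, num_edges]: row 0 = source j, row 1 = target i
x = torch.randn(3, 8)                             # node features

# Explicit message passing: every target node sums its source neighbors' features.
out_loop = torch.zeros_like(x).index_add(0, edge_index[1], x[edge_index[0]])

# The same operation as a single sparse matmul with the transposed adjacency matrix.
adj_t = torch.sparse_coo_tensor(
    torch.stack([edge_index[1], edge_index[0]]),  # indices: [target, source]
    torch.ones(edge_index.size(1)),
    size=(3, 3),
)
out_spmm = torch.sparse.mm(adj_t, x)

assert torch.allclose(out_loop, out_spmm, atol=1e-6)
```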
Replies: 1 comment
Not that I am aware of. At some point, you have to materialize edge representations if you are working with multi-dimensional edge features. One way to make this more memory-efficient is to trade memory consumption for runtime, e.g., via `torch.utils.checkpoint`.
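As a rough sketch of what that trade-off can look like (not PyG's internals; the sizes, MLP, and tensor names are made up for illustration, and `use_reentrant=False` assumes a reasonably recent PyTorch), the edge-message computation can be wrapped in `torch.utils.checkpoint.checkpoint` so that the `[num_edges, hidden]` intermediate activations are recomputed during the backward pass instead of being stored:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

# Toy sizes; in practice num_edges is what makes this expensive.
num_nodes, num_edges, in_dim, edge_dim, hidden = 100, 10_000, 32, 16, 64

x = torch.randn(num_nodes, in_dim, requires_grad=True)
edge_attr = torch.randn(num_edges, edge_dim)
edge_index = torch.randint(num_nodes, (2, num_edges))   # row 0 = source, row 1 = target

mlp = nn.Sequential(
    nn.Linear(in_dim + edge_dim, hidden),
    nn.ReLU(),
    nn.Linear(hidden, hidden),
)

def edge_messages(x_src, edge_attr):
    # Materializes a [num_edges, hidden] tensor -- the memory bottleneck.
    return mlp(torch.cat([x_src, edge_attr], dim=-1))

# Checkpointing discards the intermediate activations of `edge_messages` after
# the forward pass and recomputes them during backward: less memory, more runtime.
msg = checkpoint(edge_messages, x[edge_index[0]], edge_attr, use_reentrant=False)

# Sum-aggregate the per-edge messages onto their target nodes.
out = torch.zeros(num_nodes, hidden).index_add(0, edge_index[1], msg)
out.sum().backward()
```

The edge representations still have to exist at some point during the forward and backward pass, so peak memory is not eliminated, only the stored activations of the wrapped function are.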