You could save memory by only explaining a subgraph.
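Since the model has 6 message-passing layers, the prediction for a node only depends on its 6-hop neighborhood, so explaining that subgraph instead of the full graph loses nothing. A minimal sketch of the idea in plain Python over an edge list (illustrative only; in practice you would use `torch_geometric.utils.k_hop_subgraph` for homogeneous graphs, or a `NeighborLoader` to sample the neighborhood of a heterogeneous seed node):

```python
from collections import deque

def k_hop_nodes(edge_index, seed, num_hops):
    """Collect all nodes within `num_hops` of `seed`, treating
    `edge_index` (a list of (src, dst) pairs) as undirected."""
    neighbors = {}
    for src, dst in edge_index:
        neighbors.setdefault(src, set()).add(dst)
        neighbors.setdefault(dst, set()).add(src)

    seen = {seed}
    frontier = deque([(seed, 0)])  # breadth-first search with depth tracking
    while frontier:
        node, depth = frontier.popleft()
        if depth == num_hops:
            continue
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Toy chain graph 0-1-2-3-4: the 2-hop ball around node 0 is {0, 1, 2}.
print(sorted(k_hop_nodes([(0, 1), (1, 2), (2, 3), (3, 4)], seed=0, num_hops=2)))
# → [0, 1, 2]
```

Set the number of hops equal to the number of message-passing layers and the explanation over the extracted subgraph is exact for that seed node, while the explainer only has to hold the (usually much smaller) subgraph's activations in memory.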
Hello,
I am trying to use the interpretability package with a model built for a heterogeneous graph. The problem is that I run out of memory when generating the explanation.
The model has 6 layers and a hidden channel size of 256, is based on GraphSAGE, and operates on a graph with two node types. The dataset is:
```
HeteroData(
  a={ x=[16810, 768] },
  b={ x=[4619, 53] },
  (a, related_to, b)={ edge_index=[2, 721540] },
  (a, is_a, a)={ edge_index=[2, 42142] },
  (b, rev_related_to, a)={ edge_index=[2, 721540] }
)
```
The GPU has 8 GB of memory, and making the model smaller is not an option right now. Is there anything I can do to be able to use the interpretability package anyway?
Thank you all!