AFAIK, the node explanation returns a matrix of shape [num_nodes, num_features], i.e. an attribution for every feature of every node.
Hi,
I ran the sample code in captum_explainability.py and found three choices for the explanation: 'edge', 'node', and 'edge and node'.
I thought the 'node' option would explain node features (i.e., find the important features), but the output I got is a vector of size 2708 (the number of nodes), which suggests it selects the nodes that are important to the current node. Am I right? I am wondering whether I can get node feature importance through Captum in PyG. Thanks!
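A minimal sketch of how feature-level attributions could be pulled out of that example, assuming a trained `model`, the Cora `data` object, and a hypothetical `output_idx`; it uses the older `to_captum` helper that the script is based on (newer PyG releases also provide `to_captum_model`/`to_captum_input`, so adapt to your version):

```python
# Sketch, not the official example: `model`, `data`, and `output_idx` are assumed.
from captum.attr import IntegratedGradients
from torch_geometric.nn import to_captum

output_idx = 10                          # hypothetical node whose prediction we explain
target = int(data.y[output_idx])         # class to attribute the prediction to

captum_model = to_captum(model, mask_type='node', output_idx=output_idx)
ig = IntegratedGradients(captum_model)

# Attributions over the whole node-feature matrix: shape [1, num_nodes, num_features].
attr = ig.attribute(data.x.unsqueeze(0), target=target,
                    additional_forward_args=(data.edge_index,),
                    internal_batch_size=1)
attr = attr.squeeze(0).abs()             # [num_nodes, num_features]

node_importance = attr.sum(dim=1)        # [num_nodes]    -> the 2708-vector you saw
feat_importance = attr.sum(dim=0)        # [num_features] -> per-feature importance
```

The example script collapses this same tensor over the feature dimension, which is why only a per-node vector shows up; keeping the feature dimension (or summing over nodes instead) gives the feature-level view.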