-
Hi! I'm trying out different parameters for the Explainer class, but I'm confused about which parameters to choose.
Do I choose "phenomenon" or "model"? I'm using the amazing GNNExplainer. The feature importances in the visualize_feature_importance function are just a summation of the node_mask. Don't we have to normalize them first for GNNExplainer? @RexYing
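To illustrate the normalization question, here is a small self-contained sketch comparing a plain sum of the node_mask over nodes with a per-node L1-normalized sum. The random tensor is only a stand-in for a GNNExplainer mask, and the normalization shown is one possible choice, not the library's current behavior:

```python
import torch
import torch.nn.functional as F

# Stand-in for a GNNExplainer node_mask of shape [num_nodes, num_features];
# random numbers are placeholders, not real explainer output.
node_mask = torch.rand(100, 16)

# Plain summation over nodes, as visualize_feature_importance is described
# to do in the question:
plain_sum = node_mask.sum(dim=0)

# One possible alternative: L1-normalize each node's mask before summing
# (an assumption for illustration, not the library's behavior):
normalized_sum = F.normalize(node_mask, p=1, dim=-1).sum(dim=0)

print(plain_sum.topk(5).indices)
print(normalized_sum.topk(5).indices)
```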
-
Which options to choose depends on what you want to explain and which output you want to receive. With explanation_type="phenomenon", the loss (and thus the gradients) is computed against the ground-truth target, while "model" computes the loss against the model's own prediction. For the node mask, "attributes" gives node/feature-level importance scores, while "common_attributes" gives a single importance score per feature. Whether to explain per node or per graph is a good question, but it also depends on the use case. Usually, people are more interested in node-level explanations. However, graph-level explanations might be preferred if you want to find important features across the whole dataset.
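Not sure what your exact setup looks like, but here is a minimal sketch of how these options map onto the Explainer arguments, assuming a small GCN doing node classification on Cora (the model, dataset, epochs=200, and index=10 are my own placeholders, not taken from your setup):

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

dataset = Planetoid(root='data/Planetoid', name='Cora')
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN()  # assumed to be trained already

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',      # 'model': explain the model's prediction;
                                   # 'phenomenon': explain the ground-truth target
                                   #   (then pass `target=data.y` in the call below)
    node_mask_type='attributes',   # per-node, per-feature scores;
                                   # 'common_attributes': one score per feature
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='raw',
    ),
)

# Explain the prediction for a single node (index 10 chosen arbitrarily):
explanation = explainer(data.x, data.edge_index, index=10)
print(explanation.node_mask.shape)  # [num_nodes, num_features]
```

If you switch to explanation_type='phenomenon', the explainer call additionally needs the target, e.g. `explainer(data.x, data.edge_index, target=data.y, index=10)`.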