Two ways to get an explanation (important edges) of a trained GNN model #7083
Unanswered
Wienhannover asked this question in Q&A
Replies: 1 comment · 5 replies
Reply: It depends on the underlying explainer algorithm and model. For example, not all PyG models support …
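To illustrate the model-dependence this reply points at: passing a learnable mask as an explicit edge weight only works when the model's forward accepts one and hands it to layers that can use it (e.g. GCNConv), whereas a model built from layers without an edge-weight argument (GINConv, as far as I know) leaves only the mask-injection route. A minimal sketch; the toy models WeightedGCN and PlainGIN and the tensor shapes are my own assumptions, not code from this thread:
#------------------------------------------------------------------------------
import torch
from torch_geometric.nn import GCNConv, GINConv

class WeightedGCN(torch.nn.Module):
    """Toy model whose forward exposes edge_weight, so a mask can be passed explicitly."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, out_dim)

    def forward(self, x, edge_index, edge_weight=None):
        return self.conv(x, edge_index, edge_weight)

class PlainGIN(torch.nn.Module):
    """Toy model whose GINConv layer takes no edge weights: the explicit
    edge-weight route is unavailable here, and only injecting a mask into its
    MessagePassing modules can softly remove edges."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = GINConv(torch.nn.Linear(in_dim, out_dim))

    def forward(self, x, edge_index):
        return self.conv(x, edge_index)

x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
mask = torch.rand(edge_index.size(1))  # one weight per edge

out_gcn = WeightedGCN(8, 2)(x, edge_index, edge_weight=mask)  # explicit route works
out_gin = PlainGIN(8, 2)(x, edge_index)  # no edge_weight parameter to pass a mask to
#------------------------------------------------------------------------------
If I read the PyG source correctly, the built-in GNNExplainer takes the injection route internally, which is why it does not require the model to expose edge weights in its forward signature.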
Question: Hi, when it comes to explaining a trained GNN model, is there any difference between using learnable edge weights, such as
#------------------------------------------------------------------------------
masked_prediction = model_to_explain(feats, graph, edge_weights=self.edge_mask)
#------------------------------------------------------------------------------
and
#------------------------------------------------------------------------------
self.edge_mask = torch.nn.Parameter(torch.randn(E) * std)
for module in self.model_to_explain.modules():
    if isinstance(module, MessagePassing):
        module.explain = True
        module.edge_mask = self.edge_mask
masked_prediction = model_to_explain(feats, graph)
#------------------------------------------------------------------------------
?
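For what it's worth, the second snippet looks like a hand-rolled version of the mask injection that recent PyG releases expose as helpers (my assumption: PyG >= 2.2, where set_masks/clear_masks live in torch_geometric.explain.algorithm.utils; the exact private attribute names on MessagePassing have changed across versions, which is one reason the helpers are safer than setting them by hand). A minimal sketch of both routes side by side; the toy GCN model and tensor shapes are illustrative assumptions, not code from this thread:
#------------------------------------------------------------------------------
import torch
from torch_geometric.nn import GCNConv
# Assumption: PyG >= 2.2 ships these mask-injection helpers:
from torch_geometric.explain.algorithm.utils import set_masks, clear_masks

class GCN(torch.nn.Module):
    """Toy model whose forward happens to accept edge_weight (route 1 needs this)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index, edge_weight=None):
        x = self.conv1(x, edge_index, edge_weight).relu()
        return self.conv2(x, edge_index, edge_weight)

x = torch.randn(6, 16)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],   # a simple ring graph
                           [1, 2, 3, 4, 5, 0]])
model = GCN(16, 32, 3)
edge_mask = torch.nn.Parameter(torch.randn(edge_index.size(1)) * 0.1)

# Route 1: the mask is just another edge_weight; it only reaches the layers
# that the model's own forward chooses to pass it to.
pred_explicit = model(x, edge_index, edge_weight=edge_mask.sigmoid())

# Route 2: inject the mask into every MessagePassing layer; it is then applied
# inside propagate(), independently of the model's forward signature.
set_masks(model, edge_mask, edge_index, apply_sigmoid=True)
pred_injected = model(x, edge_index)
clear_masks(model)
#------------------------------------------------------------------------------
Note that, as far as I can tell, the two routes are not numerically identical even for a model like this: an explicit edge_weight enters GCNConv's symmetric normalization (it changes the computed degrees), while the injected mask multiplies the already-normalized messages inside propagate, and it is applied to every MessagePassing layer whether or not the forward would have passed a weight to it.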