One way to write this would be:

```python
Sequential('x, edge_index, size, return_att', [
    (GATConv(...), 'x, edge_index, size, return_att -> x, a1'),
    (ReLU(), 'x -> x'),
    (GATConv(...), 'x, edge_index, size, return_att -> x, a2'),
    (ReLU(), 'x -> x'),
    (lambda x, a1, a2: [x, a1, a2], 'x, a1, a2 -> o'),
])
```

However, I think such a custom model is better defined without `Sequential`:

```python
def forward(self, x, edge_index):
    x, a1 = self.conv1(x, edge_index, return_attention_weights=True)
    x = x.relu()
    x, a2 = self.conv2(x, edge_index, return_attention_weights=True)  # second GAT layer
    x = x.relu()
    return x, a1, a2
```
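For completeness, a minimal self-contained sketch of that custom-module version, assuming two `GATConv` layers; the class name and channel sizes below are placeholders, not from the original reply:

```python
import torch
from torch_geometric.nn import GATConv

class GAT2Layer(torch.nn.Module):
    # Hypothetical module: channel sizes are placeholders for illustration.
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GATConv(in_channels, hidden_channels)
        self.conv2 = GATConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        # With return_attention_weights=True, GATConv returns
        # (out, (edge_index, alpha)), so a1/a2 hold the attention weights.
        x, a1 = self.conv1(x, edge_index, return_attention_weights=True)
        x = x.relu()
        x, a2 = self.conv2(x, edge_index, return_attention_weights=True)
        x = x.relu()
        return x, a1, a2
```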
Hi,
I am using `torch_geometric.nn.Sequential` with `GATConv`, and I need to return the attention weights at multiple layers of the network. For a dummy 1-layer network, I can do the following:

However, I am interested in building `n`-layer networks, where `n` is a hyper-parameter, and I would like to be able to output the attention weights at each layer. I have tried the following for a dummy 2-layer network:

I think this works well, but I am wondering if there is a better way?
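As a rough sketch of what an `n`-layer version could look like without `Sequential` (the module name, channel sizes, and `num_layers` argument here are made up for illustration):

```python
import torch
from torch_geometric.nn import GATConv

class GATStack(torch.nn.Module):
    # Hypothetical n-layer variant: num_layers is the depth hyper-parameter.
    def __init__(self, in_channels, hidden_channels, num_layers):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for i in range(num_layers):
            self.convs.append(
                GATConv(in_channels if i == 0 else hidden_channels, hidden_channels))

    def forward(self, x, edge_index):
        attention_weights = []  # one (edge_index, alpha) tuple per layer
        for conv in self.convs:
            x, att = conv(x, edge_index, return_attention_weights=True)
            x = x.relu()
            attention_weights.append(att)
        return x, attention_weights
```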
Edit - I figured it out quickly after posting and thus edited the code posted in this discussion. However, I am open to suggestions on how to make this cleaner. For instance, it seems I have to specify `Size` in both `Sequential` and in the forward pass for each conv layer. It would be cleaner if I could just specify `Sequential("x, edge_index, return_attention_weights")` and `conv(x, edge_index, True)` without having to specify `Size=None`, since it is defaulted to `None` in `GATConv`.
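For illustration, a rough sketch of how the `Sequential` variant from the reply above would be instantiated and called: because `size` and `return_att` appear in the header string, they still have to be passed on every call. The channel sizes and toy data are placeholders, and the positional argument order of `GATConv` (`x, edge_index, size, return_attention_weights`) is assumed from this discussion rather than checked against a particular PyG version:

```python
import torch
from torch.nn import ReLU
from torch_geometric.nn import GATConv, Sequential

# Placeholder channel sizes; argument order of GATConv is assumed from this discussion.
model = Sequential('x, edge_index, size, return_att', [
    (GATConv(16, 8), 'x, edge_index, size, return_att -> x, a1'),
    (ReLU(), 'x -> x'),
    (GATConv(8, 8), 'x, edge_index, size, return_att -> x, a2'),
    (ReLU(), 'x -> x'),
    (lambda x, a1, a2: [x, a1, a2], 'x, a1, a2 -> o'),
])

x = torch.randn(4, 16)                                    # 4 nodes, 16 features (toy data)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])   # toy edges
out, a1, a2 = model(x, edge_index, None, True)            # size=None still has to be passed
```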