Return_attention_weights when using HeteroConv() #6476
Unanswered
Jerome-Cong asked this question in Q&A · 1 comment · 2 replies

Attention-based graph operators such as GATv2Conv and TransformerConv have a return_attention_weights option in their forward pass. When using HeteroConv() to set up a different aggregation for each edge type, all of the per-edge-type operators are run together in a single forward call. Is there any way to pass the return_attention_weights parameter through to, for example, the GATv2Conv operator in that setting?
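For context, here is a minimal sketch of the setup the question describes (the node types, feature sizes, and edge indices are made up for illustration): a standalone GATv2Conv readily returns its attention weights, while HeteroConv() only returns the aggregated node embeddings.

```python
import torch
from torch_geometric.nn import GATv2Conv, HeteroConv

# Toy heterogeneous graph (node types, sizes, and edges are made up):
x_dict = {
    'paper': torch.randn(4, 16),
    'author': torch.randn(3, 16),
}
edge_index_dict = {
    ('author', 'writes', 'paper'): torch.tensor([[0, 1, 2], [0, 1, 3]]),
    ('paper', 'cites', 'paper'): torch.tensor([[0, 1], [2, 3]]),
}

# On a standalone layer, getting the attention weights is straightforward:
conv = GATv2Conv((16, 16), 32, add_self_loops=False)
out, (edge_index, alpha) = conv(
    (x_dict['author'], x_dict['paper']),
    edge_index_dict[('author', 'writes', 'paper')],
    return_attention_weights=True,
)

# Inside HeteroConv, all per-edge-type layers run in one forward call, and
# there is no argument that forwards return_attention_weights to them:
hetero_conv = HeteroConv({
    ('author', 'writes', 'paper'): GATv2Conv((16, 16), 32, add_self_loops=False),
    ('paper', 'cites', 'paper'): GATv2Conv(16, 32, add_self_loops=False),
}, aggr='sum')
out_dict = hetero_conv(x_dict, edge_index_dict)  # node embeddings only
```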
-
Mh, this is an interesting thought. I guess this is not really supported at the moment. One idea is to simply create your own variant of HeteroConv in which you run the loop over edge types yourself:

```python
from collections import defaultdict

outs = defaultdict(list)
attns = {}
for edge_type, edge_index in edge_index_dict.items():
    src, _, dst = edge_type
    out, attn = conv_dict[edge_type]((x_dict[src], x_dict[dst]), edge_index,
                                     return_attention_weights=True)
    outs[dst].append(out)
    attns[edge_type] = attn  # (edge_index, alpha) for this edge type
```
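Fleshing that idea out, a self-contained sketch of such a variant might look as follows. This is not an official API: AttentionHeteroConv is a hypothetical name, and the grouping step mirrors HeteroConv's default aggr='sum'.

```python
from collections import defaultdict

import torch
from torch_geometric.nn import GATv2Conv


class AttentionHeteroConv(torch.nn.Module):
    """Hypothetical HeteroConv-like wrapper that also returns the
    attention weights of its (attention-based) convolutions."""
    def __init__(self, conv_dict, aggr='sum'):
        super().__init__()
        # ModuleDict keys must be strings, so join the edge type tuple:
        self.convs = torch.nn.ModuleDict({
            '__'.join(edge_type): conv for edge_type, conv in conv_dict.items()
        })
        assert aggr == 'sum'  # only 'sum' grouping is sketched here

    def forward(self, x_dict, edge_index_dict):
        outs, attn_dict = defaultdict(list), {}
        for edge_type, edge_index in edge_index_dict.items():
            src, _, dst = edge_type
            conv = self.convs['__'.join(edge_type)]
            out, attn = conv((x_dict[src], x_dict[dst]), edge_index,
                             return_attention_weights=True)
            outs[dst].append(out)
            attn_dict[edge_type] = attn  # (edge_index, alpha) per edge type
        # Group multi-relation outputs per destination node type ('sum'):
        x_dict_out = {dst: torch.stack(xs).sum(dim=0) for dst, xs in outs.items()}
        return x_dict_out, attn_dict


# Usage, reusing x_dict and edge_index_dict from the sketch above:
model = AttentionHeteroConv({
    ('author', 'writes', 'paper'): GATv2Conv((16, 16), 32, add_self_loops=False),
    ('paper', 'cites', 'paper'): GATv2Conv(16, 32, add_self_loops=False),
})
x_out, attn_dict = model(x_dict, edge_index_dict)
_, alpha = attn_dict[('author', 'writes', 'paper')]  # attention coefficients
```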