Replies: 2 comments 4 replies
Hi, I get that
Hello, can anyone give me insights into the differences between the two layers? I understand that the attention computation differs between them, but beyond that, why would someone use TransformerConv instead of GATConv for a specific task? Thanks a lot!
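For context, here is a simplified single-head sketch of the attention difference the question refers to, based on the GAT paper (Velickovic et al.) for GATConv and the graph-transformer formulation of Shi et al. for TransformerConv; the weight-matrix names below are illustrative notation, not the exact symbols used in either library's source:

% GATConv: additive attention; a learned vector a scores the
% concatenation of the transformed source and target features
\alpha_{ij} = \mathrm{softmax}_j\!\left(
  \mathrm{LeakyReLU}\!\left(
    \mathbf{a}^{\top}\,[\mathbf{W}\mathbf{x}_i \,\|\, \mathbf{W}\mathbf{x}_j]
  \right)\right)

% TransformerConv: multiplicative (scaled dot-product) attention
% between separate query and key projections, as in Transformers
\alpha_{ij} = \mathrm{softmax}_j\!\left(
  \frac{(\mathbf{W}_Q\,\mathbf{x}_i)^{\top}(\mathbf{W}_K\,\mathbf{x}_j)}{\sqrt{d}}
\right)

So GATConv scores neighbors with a single learned vector over concatenated features (additive attention), while TransformerConv uses separate query/key projections and a scaled dot product, mirroring standard Transformer attention.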