Hello all,

I would like to ask about the graph transformer operator that you have available. Specifically, I'm referring to the operator from the "Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification" paper. Do you make use of positional encodings in this paper, and consequently in your operator?

Thank you in advance.
The operator itself does not add positional encodings. Since attention is only performed within local neighborhoods, positional encodings are not strictly necessary here. If you want to add them nonetheless, you are expected to provide them as part of the input features, e.g., by concatenating them to the node features.
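For concreteness, here is a minimal sketch of that approach (not from the thread itself): Laplacian-eigenvector positional encodings are appended to the node features up front, so `TransformerConv` simply sees them as extra input channels. The dataset and the choice of `k` are illustrative assumptions.

```python
# Minimal sketch — assumes PyTorch Geometric is installed; the dataset
# and hyper-parameters are illustrative, not taken from the thread.
import torch_geometric.transforms as T
from torch_geometric.datasets import KarateClub
from torch_geometric.nn import TransformerConv

# Append k Laplacian-eigenvector positional encodings to the node features.
# attr_name=None concatenates the encodings directly onto data.x, which is
# exactly the "provide them as part of the input features" approach.
transform = T.AddLaplacianEigenvectorPE(k=4, attr_name=None)
data = transform(KarateClub()[0])

# TransformerConv now treats the positional encodings as extra channels.
conv = TransformerConv(in_channels=data.num_node_features,
                       out_channels=16, heads=2)
out = conv(data.x, data.edge_index)  # shape: [num_nodes, 2 * 16]
```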