As far as I understand the paper, this is correct: the attention module computes attention for each node over the set of hyperedges it belongs to, not across all nodes that belong to the same hyperedge.
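
For intuition, here is a minimal sketch of that node-level normalization, assuming PyG-style `hyperedge_index` incidence pairs. The `att` vector, the feature shapes, and the concatenation-based scoring are illustrative assumptions, not the library's actual `HypergraphConv` code:

```python
import torch
import torch.nn.functional as F
from torch_geometric.utils import softmax

num_nodes, dim = 4, 8
x = torch.randn(num_nodes, dim)            # node features
hyperedge_attr = torch.randn(2, dim)       # features of 2 hyperedges

# Incidence pairs: row 0 = node index, row 1 = hyperedge index.
hyperedge_index = torch.tensor([[0, 1, 2, 1, 2, 3],
                                [0, 0, 0, 1, 1, 1]])
i, e = hyperedge_index

att = torch.randn(2 * dim)                 # hypothetical attention vector

# One raw score per (node, hyperedge) incidence pair.
alpha = F.leaky_relu(torch.cat([x[i], hyperedge_attr[e]], dim=-1) @ att, 0.2)

# Normalize per *node* (grouped by i): each node attends over its own
# incident hyperedges.
alpha = softmax(alpha, i, num_nodes=num_nodes)
```

Swapping the grouping index from `i` to `e` would give exactly the alternative being contrasted here: a softmax across all nodes inside each hyperedge rather than across the hyperedges of each node.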
