You want to drop the global_add_pool call here, since it collapses each graph to a single vector. In addition, your final Linear module should output hidden_size features instead of 50265. Then you can do:

from torch_geometric.utils import to_dense_batch

x = self.lin(x)  # [num_nodes_total, hidden_size]
x, mask = to_dense_batch(x, batch)
return x  # [batch_size, max_num_nodes, num_features]
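To make the padding behavior concrete, here is a minimal sketch of what `to_dense_batch` does, written with plain Python lists for illustration (the real `torch_geometric.utils.to_dense_batch` operates on tensors and supports extra arguments such as a fixed `max_num_nodes`):

```python
def to_dense_batch_sketch(x, batch):
    """Pad per-graph node features into a dense
    [batch_size, max_num_nodes, num_features] layout, plus a boolean
    mask marking which entries are real nodes (True) vs padding (False)."""
    batch_size = max(batch) + 1
    # Group the node feature rows by their graph index.
    graphs = [[] for _ in range(batch_size)]
    for row, b in zip(x, batch):
        graphs[b].append(row)
    max_num_nodes = max(len(g) for g in graphs)
    num_features = len(x[0])
    dense, mask = [], []
    for g in graphs:
        pad = max_num_nodes - len(g)
        dense.append(g + [[0.0] * num_features] * pad)
        mask.append([True] * len(g) + [False] * pad)
    return dense, mask

# Two graphs in one batch: graph 0 has two nodes, graph 1 has one node.
x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
batch = [0, 0, 1]  # graph index of each node
dense, mask = to_dense_batch_sketch(x, batch)
# dense has shape [2, 2, 2]; graph 1's second slot is zero padding,
# and mask tells you which slots hold real nodes.
```

The returned `mask` is what lets downstream modules (e.g. attention layers) ignore the padded positions.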

Answer selected by afonso-sousa