Hi, I'm new to GNNs and PyG. I want to concatenate each head embedding with its corresponding tail and relation embeddings, and then compute attention over the result. How can I use PyG and sparse tensors to do this efficiently? This is what I have so far:

```python
# preprocess the edge index
edge_sets = torch.as_tensor(edge, dtype=torch.long)  # edge: list of (head, tail, relation) triplets
edge_index = edge_sets[:, :2].t().contiguous()       # shape (2, n_edge)
edge_type = edge_sets[:, 2]                          # shape (n_edge,)

entity_embeddings = nn.Embedding(n_entity, e_dim)
relation_embeddings = nn.Embedding(n_relation, r_dim)
```

```python
# example forward function
def forward(self, x, edge_index, edge_attr):  # x: entity embeddings, edge_attr: relation embeddings
    head = x[edge_index[0]]
    tail = x[edge_index[1]]
    concat_embedding = torch.cat([head, tail, edge_attr], dim=1)
    embedding = self.linear(concat_embedding)  # nn.Linear

    # prepare for attention
    alpha = self.activation(self.linear2(embedding))  # nn.LeakyReLU

    # use softmax to get normalized coefficients
    # -> here I am not sure how to normalize the coefficients across all the
    #    triplets that connect to HEAD_A, for example

    # assume we already have the attention weights
    neighbour_embeddings = torch.matmul(attention, embedding)

    # aggregation: a simple add for now
    new_embedding = self.linear3(head + neighbour_embeddings)
    # but this has shape (n_edge, n_dim) rather than (n_entity, n_dim)
    # how do I make sure the output has one row per entity?
```

I could loop and filter by index, but I guess there is a more efficient way to do this with PyG and sparse tensors. Thanks in advance!
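For reference, here is a minimal self-contained sketch of the preprocessing step above on a made-up toy graph (the triplets are purely illustrative):

```python
import torch

# Hypothetical toy graph: 3 triplets (head, tail, relation) over 4 entities.
edge = [(0, 1, 0), (0, 2, 1), (3, 1, 0)]

edge_sets = torch.as_tensor(edge, dtype=torch.long)  # shape (n_edge, 3)
edge_index = edge_sets[:, :2].t().contiguous()       # shape (2, n_edge): row 0 = heads, row 1 = tails
edge_type = edge_sets[:, 2]                          # shape (n_edge,)

print(edge_index.shape)       # torch.Size([2, 3])
print(edge_index[0].tolist()) # [0, 0, 3]  (heads)
print(edge_type.tolist())     # [0, 1, 0]
```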
Answered by rusty1s, Aug 19, 2022
I think this is what you want :)

```python
def forward(self, x, edge_index, edge_attr):  # x: entity embeddings, edge_attr: relation embeddings
    head = x[edge_index[0]]
    tail = x[edge_index[1]]
    concat_embedding = torch.cat([head, tail, edge_attr], dim=1)
    embedding = self.linear(concat_embedding)  # nn.Linear

    # prepare for attention
    alpha = self.activation(self.linear2(embedding))  # nn.LeakyReLU

    # softmax grouped by head node: for each head, the coefficients of all
    # triplets sharing that head sum to 1
    alpha = torch_geometric.utils.softmax(alpha, edge_index[0])

    # weight each triplet embedding by its attention coefficient
    neighbour_embeddings = alpha * embedding

    # aggregate per node along the edge dimension; dim_size ensures the
    # output has shape (n_entity, n_dim) even for entities without edges
    out = torch_scatter.scatter_add(neighbour_embeddings, index=edge_index[1],
                                    dim=0, dim_size=x.size(0))
    return out
```
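To make the grouping explicit, here is a minimal pure-PyTorch sketch (no PyG or torch_scatter dependency) of what `torch_geometric.utils.softmax(alpha, index)` computes: a softmax within each group of edges that share the same index. The scores and index values below are made up for illustration.

```python
import torch

def grouped_softmax(src, index, num_nodes):
    # Per-group softmax: for each node i, normalize the scores of all
    # entries with index == i so they sum to 1.
    # Subtract the per-group max first for numerical stability.
    max_per_group = torch.full((num_nodes,), float('-inf'))
    max_per_group.scatter_reduce_(0, index, src, reduce='amax')
    out = (src - max_per_group[index]).exp()
    denom = torch.zeros(num_nodes).index_add_(0, index, out)
    return out / denom[index]

# Toy scores for 4 edges, grouped by head node [0, 0, 1, 1].
scores = torch.tensor([1.0, 2.0, 0.5, 0.5])
index = torch.tensor([0, 0, 1, 1])
alpha = grouped_softmax(scores, index, num_nodes=2)

# Each group's coefficients sum to 1.
sums = torch.zeros(2).index_add_(0, index, alpha)
print(sums)  # tensor([1., 1.])
```

The same `index_add_` trick is the dense analogue of `torch_scatter.scatter_add`: it sums edge-level rows into node-level rows, which is exactly the aggregation that fixes the `(n_edge, n_dim)` vs. `(n_entity, n_dim)` shape mismatch from the question.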
Answer selected by icedpanda