I think this is what you want :)

import torch
import torch_geometric.utils
import torch_scatter

def forward(self, x, edge_index, edge_attr):  # x: entity_embeddings, edge_attr: relation_embeddings
    head = x[edge_index[0]]
    tail = x[edge_index[1]]
    concat_embedding = torch.cat([head, tail, edge_attr], dim=1)
    embedding = self.linear(concat_embedding)  # nn.Linear()

    # prepare for attention
    alpha = self.activation(self.linear2(embedding))  # nn.LeakyReLU()
    # use softmax to get a normalized alpha, grouped by head node
    alpha = torch_geometric.utils.softmax(alpha, edge_index[0])

    # assume we have the attention
    neighbour_embeddings = alpha * embedding

    # do aggregation: sum the weighted edge embeddings per head node,
    # using the same index the softmax was grouped by
    out = torch_scatter.scatter_add(neighbour_embeddings, edge_index[0], dim=0,
                                    dim_size=x.size(0))
    return out
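
For completeness, here is a minimal, self-contained sketch of how that forward could sit inside a module and be called end to end. The module name EdgeAttention, the layer sizes (linear fusing 3 * dim -> dim, linear2 scoring dim -> 1), and the dummy tensors are my own assumptions for illustration, not something stated in the thread:

import torch
import torch.nn as nn
import torch_geometric.utils
import torch_scatter


class EdgeAttention(nn.Module):
    # Hypothetical wrapper; names and dimensions are assumptions.
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(3 * dim, dim)  # fuses [head, tail, relation]
        self.linear2 = nn.Linear(dim, 1)       # one attention score per edge
        self.activation = nn.LeakyReLU()

    def forward(self, x, edge_index, edge_attr):
        head = x[edge_index[0]]
        tail = x[edge_index[1]]
        embedding = self.linear(torch.cat([head, tail, edge_attr], dim=1))
        alpha = self.activation(self.linear2(embedding))
        alpha = torch_geometric.utils.softmax(alpha, edge_index[0])
        return torch_scatter.scatter_add(alpha * embedding, edge_index[0],
                                         dim=0, dim_size=x.size(0))


entity_embeddings = torch.randn(4, 16)             # 4 entities, dim 16
relation_embeddings = torch.randn(3, 16)           # one embedding per edge
edge_index = torch.tensor([[0, 0, 1], [1, 2, 3]])  # 3 edges
out = EdgeAttention(16)(entity_embeddings, edge_index, relation_embeddings)
print(out.shape)  # torch.Size([4, 16]) -- one aggregated vector per entity

Note that passing dim_size=x.size(0) keeps one output row per entity even for entities that never appear as a head node; without it, scatter_add sizes the output from the largest index it sees.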

Answer selected by icedpanda