
About the way to calculate the attention weight #15

Description

@FreyWang

It seems that the way the attention weights are calculated differs from the original paper, where they are softmax(v * tanh(W*[s, h])). Here a ReLU is applied after the softmax instead. Can you give some reasons or a reference?

```python
def forward(self, hidden, encoder_outputs):
    timestep = encoder_outputs.size(0)
    h = hidden.repeat(timestep, 1, 1).transpose(0, 1)
    encoder_outputs = encoder_outputs.transpose(0, 1)  # [B*T*H]
    attn_energies = self.score(h, encoder_outputs)
    return F.relu(attn_energies).unsqueeze(1)

def score(self, hidden, encoder_outputs):
    # [B*T*2H] -> [B*T*H]
    energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)), dim=2)
    energy = energy.transpose(1, 2)  # [B*H*T]
    v = self.v.repeat(encoder_outputs.size(0), 1).unsqueeze(1)  # [B*1*H]
    energy = torch.bmm(v, energy)  # [B*1*T]
    return energy.squeeze(1)  # [B*T]
```
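
For comparison, a minimal sketch of the formulation from the original paper, with tanh inside the scoring function and the softmax applied last over the time dimension, might look like the following. The class name, parameter names, and tensor shapes are assumptions for illustration, not the repo's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BahdanauAttention(nn.Module):
    """Sketch of additive attention: score = v^T tanh(W[s; h]),
    followed by a softmax over the time axis."""
    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, hidden_size)  # W
        self.v = nn.Parameter(torch.rand(hidden_size))        # v

    def forward(self, hidden, encoder_outputs):
        # hidden: [B, H] (decoder state s), encoder_outputs: [T, B, H]
        timestep = encoder_outputs.size(0)
        h = hidden.unsqueeze(1).repeat(1, timestep, 1)          # [B, T, H]
        enc = encoder_outputs.transpose(0, 1)                   # [B, T, H]
        energy = torch.tanh(self.attn(torch.cat([h, enc], 2)))  # tanh(W[s; h]) -> [B, T, H]
        scores = energy @ self.v                                 # v^T tanh(...) -> [B, T]
        return F.softmax(scores, dim=1).unsqueeze(1)             # softmax over T -> [B, 1, T]
```

In this version the softmax normalizes the scores across the T encoder steps, so the weights sum to 1 over time; in the quoted code the softmax is taken over the hidden dimension (dim=2) inside score and a ReLU is applied to the final energies, which is the difference being asked about.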
