Commit de6d396

soumyasanyal authored and vince62s committed
Minor change in MultiHeadedAttention documentation (#1479)
* Minor change in documentation
1 parent 732b445 commit de6d396

File tree

1 file changed: +2 −2 lines changed


onmt/modules/multi_headed_attn.py

Lines changed: 2 additions & 2 deletions
@@ -86,8 +86,8 @@ def forward(self, key, value, query, mask=None,
                 value vectors ``(batch, key_len, dim)``
             query (FloatTensor): set of `query_len`
                 query vectors ``(batch, query_len, dim)``
-            mask: binary mask indicating which keys have
-                non-zero attention ``(batch, query_len, key_len)``
+            mask: binary mask 1/0 indicating which keys have
+                zero / non-zero attention ``(batch, query_len, key_len)``
 
         Returns:
             (FloatTensor, FloatTensor):
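The clarified docstring says the mask is a 1/0 tensor where 1 marks keys that receive zero attention and 0 marks keys that attend normally. A minimal, dependency-free sketch of that convention (this is an illustration of how such a mask is typically applied before the softmax, not OpenNMT-py's actual implementation, which operates on batched tensors):

```python
import math

def masked_softmax(scores, mask):
    """Softmax over one row of raw attention scores with a binary mask.

    scores: list of raw attention scores, one per key
    mask:   list of 0/1 flags per key; 1 means "zero attention",
            0 means "non-zero attention", matching the docstring.
    """
    # Push masked positions to a very negative value so exp() underflows
    # to zero and they contribute nothing after normalization.
    masked = [s if m == 0 else -1e18 for s, m in zip(scores, mask)]
    mx = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(s - mx) for s in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Key 2 is masked out, so its attention weight is exactly zero while
# the remaining weights still sum to one.
weights = masked_softmax([1.0, 2.0, 3.0], [0, 0, 1])
```

The same idea, batched, is what a `(batch, query_len, key_len)` mask expresses: one such 0/1 row per (batch, query) pair.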
