
Commit 7d4cf23

LoserCheems and Copilot authored
Update flash_dmattn/utils/mask.py
Co-authored-by: Copilot <[email protected]>
1 parent 510ef4d commit 7d4cf23

1 file changed: +0 −1 lines changed

flash_dmattn/utils/mask.py

Lines changed: 0 additions & 1 deletion
@@ -64,7 +64,6 @@ def create_mask(
         If attention_mask is not of shape (batch_size, seq_len), it needs to match the shape of attention_bias.
 
     Args:
-    Args:
         attention_bias (torch.Tensor): The attention bias tensor of shape
             ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, {key_len|1}).
         attention_mask (Optional[torch.Tensor]): The attention mask boolean tensor of shape
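
For context, a minimal sketch of how the documented shapes might be exercised. Only the create_mask name and the attention_bias / attention_mask parameters and their shapes come from the docstring above; the positional call pattern and the return value are assumptions, not confirmed by the commit.

# Hedged sketch (not from the commit): builds inputs matching the shapes
# described in the patched docstring and calls create_mask with them.
import torch

from flash_dmattn.utils.mask import create_mask

batch_size, num_heads, query_len, key_len = 2, 8, 128, 128

# attention_bias: ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, {key_len|1})
attention_bias = torch.zeros(batch_size, num_heads, query_len, key_len)

# attention_mask: (batch_size, seq_len) per the docstring; any other shape
# must match the shape of attention_bias.
attention_mask = torch.ones(batch_size, key_len, dtype=torch.bool)

mask = create_mask(attention_bias, attention_mask)  # assumed positional call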
