1 parent 510ef4d commit 7d4cf23
flash_dmattn/utils/mask.py
@@ -64,7 +64,6 @@ def create_mask(
     If attention_mask is not of shape (batch_size, seq_len), it needs to match the shape of attention_bias.

     Args:
-    Args:
         attention_bias (torch.Tensor): The attention bias tensor of shape
             ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, {key_len|1}).
         attention_mask (Optional[torch.Tensor]): The attention mask boolean tensor of shape
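For context, a minimal usage sketch of the documented shapes follows. The import path and positional argument order of `create_mask` are assumptions inferred from this diff and the file path above, not confirmed against the library's full signature:

```python
import torch

# Hypothetical sketch based on the docstring in this diff; the actual
# create_mask signature in flash_dmattn.utils.mask may take more arguments.
from flash_dmattn.utils.mask import create_mask

batch_size, num_kv_heads, query_len, key_len = 2, 4, 128, 128

# attention_bias: ({batch_size|1}, {num_heads|num_kv_heads|1}, {query_len|1}, {key_len|1})
attention_bias = torch.zeros(batch_size, num_kv_heads, query_len, key_len)

# attention_mask: (batch_size, seq_len) boolean padding mask; per the
# docstring, any other shape must match (broadcast against) attention_bias.
attention_mask = torch.ones(batch_size, key_len, dtype=torch.bool)

# Assumed positional order, following the Args ordering in the docstring.
mask = create_mask(attention_bias, attention_mask)
```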