Commit 87ce7cc

Removes default initialization of attention_bias in _flash_dynamic_mask_attention_forward
1 parent a61db8c commit 87ce7cc

File tree

1 file changed (+0, -3 lines)

flash_dmattn/integrations/modeling_flash_dynamic_mask_attention_utils.py

Lines changed: 0 additions & 3 deletions
@@ -80,9 +80,6 @@ def _flash_dynamic_mask_attention_forward(
         flash_kwargs["deterministic"] = deterministic
     if softcap is not None:
         flash_kwargs["softcap"] = softcap
-
-    if attention_bias is None:
-        attention_bias = torch.zeros((batch_size, num_kv_heads, query_length, key_length), dtype=dtype, device=query_states.device)
 
     query_states, key_states, value_states, attention_bias = fdma_peft_integration_check(
         query_states, key_states, value_states, attention_bias, target_dtype
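Since the forward path no longer materializes a zero bias when attention_bias is None, a caller that relied on that implicit default may need to build the tensor itself. The sketch below is illustrative only: it reuses the shape convention from the removed line, (batch_size, num_kv_heads, query_length, key_length), and make_zero_attention_bias is a hypothetical helper, not part of flash_dmattn.

import torch

def make_zero_attention_bias(batch_size, num_kv_heads, query_length, key_length,
                             dtype=torch.float32, device="cuda"):
    # Hypothetical helper: rebuilds the all-zero bias that
    # _flash_dynamic_mask_attention_forward previously created when
    # attention_bias was None, using the same shape as the removed line.
    return torch.zeros(
        (batch_size, num_kv_heads, query_length, key_length),
        dtype=dtype,
        device=device,
    )

# Illustrative usage with made-up sizes; the resulting tensor would be passed
# as attention_bias to _flash_dynamic_mask_attention_forward.
# bias = make_zero_attention_bias(2, 8, 128, 128, dtype=torch.bfloat16, device="cuda")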
