
Commit ff4bff9

hlky authored and yiyixuxu committed
fix use_mask_in_transformer=False
1 parent 7c5a46f commit ff4bff9

File tree

1 file changed: +2 −1 lines changed


src/diffusers/models/transformers/transformer_lumina2.py

Lines changed: 2 additions & 1 deletion
@@ -129,7 +129,8 @@ def __call__(

         # scaled_dot_product_attention expects attention_mask shape to be
         # (batch, heads, source_length, target_length)
-        attention_mask = attention_mask.bool().view(batch_size, 1, 1, -1)
+        if attention_mask is not None:
+            attention_mask = attention_mask.bool().view(batch_size, 1, 1, -1)

         query = query.transpose(1, 2)
         key = key.transpose(1, 2)
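
For context, below is a minimal, self-contained sketch of why the guard matters. It is not the actual Lumina2 attention processor; attention_with_optional_mask is a hypothetical helper that only mirrors the patched logic. torch.nn.functional.scaled_dot_product_attention accepts attn_mask=None, so the reshape to (batch, heads, source_length, target_length) should only run when a mask is actually provided, which presumably is what the use_mask_in_transformer=False path skips; before this commit, calling .bool() on a None mask would raise an AttributeError.

import torch
import torch.nn.functional as F

def attention_with_optional_mask(query, key, value, attention_mask=None):
    # Hypothetical helper mirroring the patched logic (not the actual diffusers code).
    # scaled_dot_product_attention expects attention_mask of shape
    # (batch, heads, source_length, target_length), so only reshape when a mask exists.
    batch_size = query.shape[0]
    if attention_mask is not None:
        attention_mask = attention_mask.bool().view(batch_size, 1, 1, -1)

    # (batch, seq_len, heads, head_dim) -> (batch, heads, seq_len, head_dim)
    query = query.transpose(1, 2)
    key = key.transpose(1, 2)
    value = value.transpose(1, 2)

    out = F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
    return out.transpose(1, 2)

# The no-mask path no longer crashes, and the masked path still works:
q = torch.randn(2, 16, 8, 64)  # (batch, seq_len, heads, head_dim)
k = torch.randn(2, 16, 8, 64)
v = torch.randn(2, 16, 8, 64)
print(attention_with_optional_mask(q, k, v).shape)                     # no mask
print(attention_with_optional_mask(q, k, v, torch.ones(2, 16)).shape)  # with a (batch, seq_len) padding mask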
