Commit a44f962

remove unused kwargs
1 parent 15b5fef commit a44f962

File tree

1 file changed (+0 additions, -2 deletions)


src/diffusers/models/attention_processor.py

Lines changed: 0 additions & 2 deletions
@@ -2813,7 +2813,6 @@ def __call__(
         encoder_hidden_states: torch.Tensor,
         attention_mask: Optional[torch.Tensor] = None,
         image_rotary_emb: Optional[torch.Tensor] = None,
-        **kwargs,
     ) -> torch.Tensor:
         text_seq_length = encoder_hidden_states.size(1)

@@ -2885,7 +2884,6 @@ def __call__(
         encoder_hidden_states: torch.Tensor,
         attention_mask: Optional[torch.Tensor] = None,
         image_rotary_emb: Optional[torch.Tensor] = None,
-        **kwargs,
     ) -> torch.Tensor:
         text_seq_length = encoder_hidden_states.size(1)
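
As context for the change (not part of the commit itself), the sketch below illustrates the behavioral difference between a processor-style __call__ that accepts a trailing **kwargs catch-all and one that does not: once the catch-all is removed, stale or misspelled keyword arguments surface as a TypeError instead of being silently swallowed. The class names here are hypothetical stand-ins, not diffusers APIs.

    # Minimal sketch, assuming a processor-like callable; names are hypothetical.
    from typing import Optional

    import torch


    class ProcessorWithKwargs:
        def __call__(
            self,
            hidden_states: torch.Tensor,
            attention_mask: Optional[torch.Tensor] = None,
            **kwargs,  # unknown keyword arguments are silently accepted and ignored
        ) -> torch.Tensor:
            return hidden_states


    class ProcessorWithoutKwargs:
        def __call__(
            self,
            hidden_states: torch.Tensor,
            attention_mask: Optional[torch.Tensor] = None,
        ) -> torch.Tensor:
            return hidden_states


    x = torch.randn(1, 4, 8)
    ProcessorWithKwargs()(x, scale=1.0)        # stale/misspelled kwarg passes silently
    try:
        ProcessorWithoutKwargs()(x, scale=1.0)  # now fails loudly
    except TypeError as err:
        print(err)  # __call__() got an unexpected keyword argument 'scale'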
