Commit 7f84b00

make style

1 parent bc71e63 commit 7f84b00

File tree

1 file changed (+3 −4 lines)

src/diffusers/models/attention_processor.py

Lines changed: 3 additions & 4 deletions
@@ -612,10 +612,9 @@ def batch_to_head_dim(self, tensor: torch.Tensor) -> torch.Tensor:
 
     def head_to_batch_dim(self, tensor: torch.Tensor, out_dim: int = 3) -> torch.Tensor:
         r"""
-        Reshape the tensor from `[batch_size, seq_len, dim]` to
-        `[batch_size, seq_len, heads, dim // heads]` for out_dim==4
-        or `[batch_size * heads, seq_len, dim // heads]` for out_dim==3
-        where `heads` is the number of heads initialized while constructing the `Attention` class.
+        Reshape the tensor from `[batch_size, seq_len, dim]` to `[batch_size, seq_len, heads, dim // heads]` for
+        out_dim==4 or `[batch_size * heads, seq_len, dim // heads]` for out_dim==3 where `heads` is the number of heads
+        initialized while constructing the `Attention` class.
 
         Args:
             tensor (`torch.Tensor`): The tensor to reshape.
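
For reference, the reshape described by the reformatted docstring can be sketched roughly as below. This is a minimal illustration, not the diffusers implementation: `head_to_batch_dim_sketch` and its explicit `heads` argument are hypothetical stand-ins for the method on the `Attention` class, which reads `heads` from its constructor.

import torch

def head_to_batch_dim_sketch(tensor: torch.Tensor, heads: int, out_dim: int = 3) -> torch.Tensor:
    # Hypothetical stand-in for Attention.head_to_batch_dim; `heads` is passed
    # explicitly here instead of being taken from the Attention constructor.
    batch_size, seq_len, dim = tensor.shape
    head_size = dim // heads
    # [batch_size, seq_len, dim] -> [batch_size, seq_len, heads, dim // heads]
    tensor = tensor.reshape(batch_size, seq_len, heads, head_size)
    if out_dim == 3:
        # -> [batch_size, heads, seq_len, dim // heads] -> [batch_size * heads, seq_len, dim // heads]
        tensor = tensor.permute(0, 2, 1, 3).reshape(batch_size * heads, seq_len, head_size)
    return tensor

x = torch.randn(2, 16, 64)
print(head_to_batch_dim_sketch(x, heads=8).shape)             # torch.Size([16, 16, 8])
print(head_to_batch_dim_sketch(x, heads=8, out_dim=4).shape)  # torch.Size([2, 16, 8, 8])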

0 commit comments

Comments
 (0)