
Commit cfe3921

make style
1 parent 9d452dc commit cfe3921

2 files changed: +1, -3 lines changed

src/diffusers/models/transformers/latte_transformer_3d.py

Lines changed: 1 addition & 2 deletions
@@ -12,15 +12,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from typing import Dict, Optional, Union
+from typing import Optional
 
 import torch
 from torch import nn
 
 from ...configuration_utils import ConfigMixin, register_to_config
 from ...models.embeddings import PixArtAlphaTextProjection, get_1d_sincos_pos_embed_from_grid
 from ..attention import BasicTransformerBlock
-from ..attention_processor import AttentionProcessor
 from ..embeddings import PatchEmbed
 from ..modeling_outputs import Transformer2DModelOutput
 from ..modeling_utils import ModelMixin

src/diffusers/pipelines/pyramid_attention_broadcast_utils.py

Lines changed: 0 additions & 1 deletion
@@ -150,7 +150,6 @@ def apply_pyramid_attention_broadcast(
         >>> apply_pyramid_attention_broadcast(pipe, config)
         ```
     """
-    # We present Pyramid Attention Broadcast (PAB), a real-time, high quality and training-free approach for DiT-based video generation. Our method is founded on the observation that attention difference in the diffusion process exhibits a U-shaped pattern, indicating significant redundancy. We mitigate this by broadcasting attention outputs to subsequent steps in a pyramid style. It applies different broadcast strategies to each attention based on their variance for best efficiency. We further introduce broadcast sequence parallel for more efficient distributed inference. PAB demonstrates superior results across three models compared to baselines, achieving real-time generation for up to 720p videos. We anticipate that our simple yet effective method will serve as a robust baseline and facilitate future research and application for video generation.
     if config is None:
         config = PyramidAttentionBroadcastConfig()

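For orientation, here is a minimal sketch of how the helper touched in the second file is meant to be wired into a pipeline, following only the `apply_pyramid_attention_broadcast(pipe, config)` call and the default `PyramidAttentionBroadcastConfig()` visible in the diff above. The pipeline class, checkpoint name, import path, and prompt are illustrative assumptions, not part of this commit.

```python
# Illustrative sketch only: the pipeline class, checkpoint, and import path are
# assumptions; only apply_pyramid_attention_broadcast(pipe, config) and the
# default PyramidAttentionBroadcastConfig() come from the diff above.
import torch
from diffusers import CogVideoXPipeline
from diffusers.pipelines.pyramid_attention_broadcast_utils import (
    PyramidAttentionBroadcastConfig,
    apply_pyramid_attention_broadcast,
)

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# A default-constructed config matches what the helper falls back to when
# config is None, per the `if config is None:` branch shown in the hunk.
config = PyramidAttentionBroadcastConfig()
apply_pyramid_attention_broadcast(pipe, config)

video = pipe("a panda playing a guitar by a lake").frames[0]
```

As the comment removed in this commit describes, Pyramid Attention Broadcast reuses (broadcasts) attention outputs from one denoising step across subsequent steps, with different broadcast strategies per attention type, so a config chiefly governs how often each attention output is recomputed rather than reused.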