Commit 25a97b1

committed
feedback
1 parent bcf58f6 commit 25a97b1

File tree

1 file changed

+9
-0
lines changed


docs/source/en/optimization/attention_backends.md

Lines changed: 9 additions & 0 deletions
@@ -11,6 +11,9 @@ specific language governing permissions and limitations under the License. -->
 
 # Attention backends
 
+> [!TIP]
+> The attention dispatcher is an experimental feature. Please open an issue if you have any feedback or encounter any problems.
+
 Diffusers provides several optimized attention algorithms that are more memory- and compute-efficient through its *attention dispatcher*. The dispatcher acts as a router for managing and switching between different attention implementations and provides a unified interface for interacting with them.
 
 Available attention implementations include the following.
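The router pattern the dispatcher paragraph describes can be sketched in plain Python. This is an illustrative sketch only; the names (`_BACKENDS`, `register_backend`, `set_backend`, `dispatch`) are hypothetical and are not the Diffusers API.

```python
# Minimal sketch of a name -> implementation registry with a single
# dispatch entry point. Illustrative only; not the Diffusers internals.
_BACKENDS = {}
_active = "native"

def register_backend(name):
    """Decorator that registers an attention implementation under a name."""
    def wrap(fn):
        _BACKENDS[name] = fn
        return fn
    return wrap

def set_backend(name):
    """Switch the implementation that dispatch() routes to."""
    global _active
    if name not in _BACKENDS:
        raise ValueError(f"unknown backend {name!r}; known: {sorted(_BACKENDS)}")
    _active = name

def dispatch(q, k, v):
    """Unified interface: route the call to the currently active backend."""
    return _BACKENDS[_active](q, k, v)

@register_backend("native")
def _native(q, k, v):
    return "native-attention"

@register_backend("flash")
def _flash(q, k, v):
    return "flash-attention"
```

Callers always invoke `dispatch`, so swapping backends is a one-line change rather than an edit at every attention call site.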
@@ -41,6 +44,12 @@ pipeline = QwenImagePipeline.from_pretrained(
     "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
 )
 pipeline.transformer.set_attention_backend("_flash_3_hub")
+
+prompt = """
+cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California
+highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain
+"""
+pipeline(prompt).images[0]
 ```
 
 To restore the default attention backend, call [`~ModelMixin.reset_attention_backend`].
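Putting the switch and the restore together, a full round trip might look like the sketch below. Note this is heavyweight to run: it downloads Qwen-Image, requires a CUDA GPU, and the `"_flash_3_hub"` backend additionally requires the Flash Attention 3 hub kernel to be available.

```python
import torch
from diffusers import QwenImagePipeline

# Requires a CUDA GPU and downloads the Qwen-Image checkpoint.
pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16, device_map="cuda"
)

# Route this model's attention through an alternative backend.
pipeline.transformer.set_attention_backend("_flash_3_hub")
image = pipeline("cinematic film still of a cat in Palm Springs").images[0]

# Restore the default attention backend when done.
pipeline.transformer.reset_attention_backend()
```

Because `set_attention_backend` and `reset_attention_backend` live on the model (via `ModelMixin`), the backend can be switched per component rather than per pipeline.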
