[docs] slight edits to the attention backends docs. (huggingface#12394)
* slight edits to the attention backends docs.
* Update docs/source/en/optimization/attention_backends.md
Co-authored-by: Steven Liu <[email protected]>
docs/source/en/optimization/attention_backends.md (+10 −2, 10 additions & 2 deletions)
@@ -11,7 +11,7 @@ specific language governing permissions and limitations under the License. -->
 
 # Attention backends
 
-> [!TIP]
+> [!NOTE]
 > The attention dispatcher is an experimental feature. Please open an issue if you have any feedback or encounter any problems.
 
 Diffusers provides several optimized attention algorithms that are more memory and computationally efficient through its *attention dispatcher*. The dispatcher acts as a router for managing and switching between different attention implementations and provides a unified interface for interacting with them.
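The dispatcher-as-router idea described in the context line above can be sketched in plain Python. This is an illustrative toy, not diffusers' actual internals; all names here (`register_backend`, `dispatch_attention`, `set_backend`) are hypothetical.

```python
# Toy sketch of a dispatcher: a registry maps backend names to
# implementations, and one public entry point routes to whichever
# implementation is currently active.
_BACKENDS = {}
_active = "native"

def register_backend(name):
    """Decorator that records an implementation under a backend name."""
    def decorator(fn):
        _BACKENDS[name] = fn
        return fn
    return decorator

def set_backend(name):
    """Switch the implementation that dispatch_attention routes to."""
    global _active
    if name not in _BACKENDS:
        raise ValueError(f"unknown backend: {name!r}")
    _active = name

@register_backend("native")
def _native_attention(q, k, v):
    return "native result"

@register_backend("flash")
def _flash_attention(q, k, v):
    return "flash result"

def dispatch_attention(q, k, v):
    # Single unified interface: callers never pick an implementation
    # directly; the dispatcher looks up the active one.
    return _BACKENDS[_active](q, k, v)
```

The point of the pattern is that model code only ever calls the unified entry point, so swapping attention implementations requires no changes at the call sites.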
@@ -33,7 +33,7 @@ The [`~ModelMixin.set_attention_backend`] method iterates through all the module
 
 The example below demonstrates how to enable the `_flash_3_hub` implementation for FlashAttention-3 from the [kernel](https://github.com/huggingface/kernels) library, which allows you to instantly use optimized compute kernels from the Hub without requiring any setup.
 
-> [!TIP]
+> [!NOTE]
 > FlashAttention-3 is not supported on non-Hopper architectures; in that case, use FlashAttention with `set_attention_backend("flash")`.
 
 ```py
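The hunk header above mentions that `set_attention_backend` iterates through all of a model's modules to switch their attention implementation. Roughly, that module-walking behavior might look like this toy sketch; the `Block`, `Model`, and `attention_backend` names are hypothetical stand-ins, not diffusers classes.

```python
# Hypothetical sketch of a setter that walks every submodule and
# updates the attention backend each one will use.
class Block:
    def __init__(self):
        # Each block starts on the default backend.
        self.attention_backend = "native"

class Model:
    def __init__(self, num_blocks):
        self.blocks = [Block() for _ in range(num_blocks)]

    def set_attention_backend(self, name):
        # Iterate through all modules and point each at the new backend,
        # so a single call reconfigures the whole model.
        for block in self.blocks:
            block.attention_backend = name

model = Model(num_blocks=4)
model.set_attention_backend("flash")
```

Because the setter touches every module, one call is enough to reconfigure attention model-wide rather than per layer.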
@@ -78,10 +78,16 @@ with attention_backend("_flash_3_hub"):
     image = pipeline(prompt).images[0]
 ```
 
+> [!TIP]
+> Most attention backends support `torch.compile` without graph breaks and can be used to further speed up inference.
+
 ## Available backends
 
 Refer to the table below for a complete list of available attention backends and their variants.
 
+<details>
+<summary>Expand</summary>
+
 | Backend Name | Family | Description |
 |--------------|--------|-------------|
 |`native`|[PyTorch native](https://docs.pytorch.org/docs/stable/generated/torch.nn.attention.SDPBackend.html#torch.nn.attention.SDPBackend)| Default backend using PyTorch's scaled_dot_product_attention |
@@ -104,3 +110,5 @@ Refer to the table below for a complete list of available attention backends and