
Commit 2b5f4be

sayakpaul and stevhliu authored
Update docs/source/en/optimization/attention_backends.md
Co-authored-by: Steven Liu <[email protected]>
1 parent eb3e88c commit 2b5f4be

File tree

1 file changed: +1 -1 lines changed


docs/source/en/optimization/attention_backends.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ with attention_backend("_flash_3_hub"):
 ```
 
 > [!TIP]
-> Most of these attention backends come with `torch.compile` compatibility without any graph breaks. Consider using it for maximum speedups.
+> Most attention backends support `torch.compile` without graph breaks and can be used to further speed up inference.
 
 ## Available backends
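
For reference, the updated tip pairs an attention backend with `torch.compile`. Below is a minimal sketch of that combination, assuming `attention_backend` can be imported from `diffusers` and using `FluxPipeline` with the `black-forest-labs/FLUX.1-dev` checkpoint purely as illustrative choices; neither is specified by this diff, only the `"_flash_3_hub"` backend name comes from the surrounding doc example.

```python
import torch
from diffusers import FluxPipeline  # pipeline choice is illustrative only
from diffusers import attention_backend  # import path assumed

# Load an example pipeline; the checkpoint here is an assumption, not part of this commit.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Per the updated tip, most attention backends support torch.compile without
# graph breaks, so the transformer can be compiled for additional speedups.
pipeline.transformer = torch.compile(pipeline.transformer)

# Run inference under the FlashAttention-3 Hub backend named in the doc example.
with attention_backend("_flash_3_hub"):
    image = pipeline("a photo of a cat", num_inference_steps=28).images[0]

image.save("cat.png")
```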
