1 parent eb3e88c commit 2b5f4be
docs/source/en/optimization/attention_backends.md
@@ -79,7 +79,7 @@ with attention_backend("_flash_3_hub"):
 ```
 
 > [!TIP]
-> Most of these attention backends come with `torch.compile` compatibility without any graph breaks. Consider using it for maximum speedups.
+> Most attention backends support `torch.compile` without graph breaks and can be used to further speed up inference.
 
 ## Available backends
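For context, the tip edited in this diff describes switching attention backends via a context manager. Below is a minimal, purely illustrative sketch of that backend-switching pattern in plain Python; the name `attention_backend` follows the diffusers docs snippet shown above, but the registry and `attend` helper here are hypothetical stand-ins, not the diffusers internals:

```python
from contextlib import contextmanager

# Toy registry standing in for real attention implementations
# (hypothetical; not the actual diffusers backend registry).
_BACKENDS = {
    "native": lambda q, k, v: "native-attention",
    "_flash_3_hub": lambda q, k, v: "flash-attention-3",
}
_active = "native"

@contextmanager
def attention_backend(name):
    """Temporarily switch the active backend, restoring the previous one on exit."""
    global _active
    if name not in _BACKENDS:
        raise ValueError(f"unknown backend: {name}")
    previous, _active = _active, name
    try:
        yield
    finally:
        _active = previous

def attend(q, k, v):
    # Dispatch to whichever backend is currently active.
    return _BACKENDS[_active](q, k, v)

with attention_backend("_flash_3_hub"):
    print(attend(None, None, None))  # flash-attention-3
print(attend(None, None, None))      # native-attention
```

In the real library the context manager selects an actual attention kernel rather than a string, but the scoping behavior (the backend reverts once the `with` block exits) is the point the docs rely on.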