Commit adbda2f

Typo fix.
1 parent 703443a commit adbda2f

File tree

1 file changed, +1 -1 lines changed


_collections/_portal_posts/2025-09-02-improving-triton-flashattention-performance-on-intel-gpu.md

Lines changed: 1 addition & 1 deletion
@@ -206,7 +206,7 @@ This is exactly what happens in the widespread case of [FlashAttention version 2
 
 The FlashAttention v2 Forward pass algorithm in pseudo-code is:
 
-```python {.line-numbers}
+```python
 # Inputs : Q, K and V are 2D Matrices in Global Memory
 def FlashAttention2_forward(Q, K, V):
     O = torch.zeros_like(Q, requires_grad=True)
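
The hunk above only shows the first lines of the post's pseudo-code. For context, here is a minimal single-head sketch of the FlashAttention-2 forward pass (the tiled online-softmax loop the post's pseudo-code describes); the function name, block sizes, and tensor shapes are illustrative assumptions, not the post's exact code:

```python
# Sketch of a tiled FlashAttention-2 forward pass (single head, no masking).
# Names and block sizes are illustrative, not taken from the post.
import torch

def flashattention2_forward_sketch(Q, K, V, block_q=64, block_k=64):
    # Q, K, V: (N, d) 2D matrices, matching the post's "2D Matrices" setting.
    N, d = Q.shape
    scale = 1.0 / (d ** 0.5)
    O = torch.zeros_like(Q)

    for i in range(0, N, block_q):
        Qi = Q[i:i + block_q]                              # query tile
        m = torch.full((Qi.shape[0],), float("-inf"))      # running row-wise max
        l = torch.zeros(Qi.shape[0])                       # running softmax denominator
        acc = torch.zeros_like(Qi)                         # unnormalised output accumulator

        for j in range(0, N, block_k):
            Kj = K[j:j + block_k]
            Vj = V[j:j + block_k]
            S = (Qi @ Kj.T) * scale                        # score tile

            m_new = torch.maximum(m, S.max(dim=1).values)  # updated row max
            P = torch.exp(S - m_new[:, None])              # shifted exponentials
            correction = torch.exp(m - m_new)              # rescale earlier partial sums
            l = l * correction + P.sum(dim=1)
            acc = acc * correction[:, None] + P @ Vj
            m = m_new

        O[i:i + block_q] = acc / l[:, None]                # normalise each query row once
    return O

if __name__ == "__main__":
    # Quick self-check against the naive softmax attention.
    Q, K, V = (torch.randn(256, 64) for _ in range(3))
    ref = torch.softmax((Q @ K.T) / 64 ** 0.5, dim=-1) @ V
    assert torch.allclose(flashattention2_forward_sketch(Q, K, V), ref, atol=1e-5)
```

The `correction = exp(m_old - m_new)` rescaling is what lets each key/value tile be read exactly once while keeping the running softmax numerically stable, which is the property the post's Triton kernel relies on.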
