Commit 1487792
authored
[ET-VK] Implement SDPA with fused ops (#14139)
## Context
As the title states: optimize the SDPA (scaled dot product attention) operator by introducing shaders that perform the operation in 3 steps:
1. Compute attention weights by multiplying Q^T x K_cache and applying the scale and mask
2. Compute softmax normalization of computed attention weights
3. Compute final output by multiplying attention weights with V cache
This new implementation is much more efficient than the existing one, which performed slicing, repeat_interleave, and transposition of the projected and cache tensors as separate steps. Fusing the scale and mask into the attention weight computation also allows the computation of elements within the masked region to be skipped entirely.
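For illustration, below is a minimal NumPy sketch of the 3-step decomposition described above. It is a reference for the math only, not the GLSL shaders introduced by this change; the tensor layouts, the `sdpa_reference` name, and the GQA head-grouping are assumptions.

```python
import numpy as np

def sdpa_reference(q, k_cache, v_cache, input_pos, scale=None):
    """Illustrative 3-step SDPA for a single decoded token (assumed layouts):

    q:         (num_q_heads, head_dim)        projected query for the new token
    k_cache:   (context_len, num_kv_heads, head_dim)
    v_cache:   (context_len, num_kv_heads, head_dim)
    input_pos: position of the token being decoded; later positions are masked.
    """
    num_q_heads, head_dim = q.shape
    if scale is None:
        scale = 1.0 / np.sqrt(head_dim)
    num_kv_heads = k_cache.shape[1]
    group = num_q_heads // num_kv_heads  # GQA: several q heads share one kv head
    valid = input_pos + 1                # causal mask: only attend up to input_pos

    out = np.zeros_like(q)
    for h in range(num_q_heads):
        kv_h = h // group
        # Step 1: attention weights = scale * (q^T @ K_cache); masked positions
        # are simply skipped rather than computed and discarded.
        attn = (k_cache[:valid, kv_h, :] @ q[h]) * scale   # shape: (valid,)
        # Step 2: softmax normalization of the attention weights.
        attn = np.exp(attn - attn.max())
        attn /= attn.sum()
        # Step 3: final output = attention weights @ V_cache.
        out[h] = attn @ v_cache[:valid, kv_h, :]
    return out
```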
## Impact
Decode performance for LLMs is much improved. For Llama 3.2 3B generating ~250 tokens, decode throughput increases from ~15 tok/s to ~21.5 tok/s.
Differential Revision: [D82053493](https://our.internmc.facebook.com/intern/diff/D82053493/)
File tree: 30 files changed (+2083, -1547 lines), across .github/workflows and backends/vulkan (runtime/graph/ops glsl and impl, test/op_tests, scripts).