
Commit 89abb6f

Author: ssjia (committed)
[ET-VK] Implement SDPA with fused ops
## Context

As title; optimize the SDPA operator by introducing shaders that perform the operation in 3 steps:

1. Compute the attention weights by multiplying Q^T x K_cache, applying the scale and mask in the same pass
2. Compute the softmax normalization of the attention weights
3. Compute the final output by multiplying the attention weights with the V cache

This new implementation is much more efficient than the existing one, which performed slicing, repeat_interleave, and transposition of the projected and cache tensors as separate steps. Fusing the scale and mask into the attention weight computation also allows elements in the masked region to be skipped entirely.

## Impact

Decode latency for LLMs is much improved. For Llama 3.2 3B generating ~250 tokens, decode throughput increases from ~15 tok/s to ~21.5 tok/s.

Differential Revision: [D82053493](https://our.internmc.facebook.com/intern/diff/D82053493/)

[ghstack-poisoned]
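To make the three fused steps concrete, the following is a minimal functional sketch of what the shaders compute, written in plain PyTorch for reference rather than as shader code. The tensor names and shapes, the `input_pos` argument, and the use of `repeat_interleave` to express grouped-query attention are assumptions for illustration; the actual Vulkan shaders fuse these steps and avoid materializing the intermediate tensors.

```python
# Hypothetical reference sketch of the 3-step SDPA computation (not the shader code).
import torch

def sdpa_reference(q, k_cache, v_cache, input_pos):
    # Assumed shapes (illustrative only):
    #   q:       [seq_len, num_heads, head_dim]      projected queries
    #   k_cache: [ctx_len, num_kv_heads, head_dim]   key cache
    #   v_cache: [ctx_len, num_kv_heads, head_dim]   value cache
    #   input_pos: absolute position of the first query token in the context
    seq_len, num_heads, head_dim = q.shape
    ctx_len, num_kv_heads, _ = k_cache.shape
    group = num_heads // num_kv_heads          # query heads per KV head (GQA)
    scale = head_dim ** -0.5

    # Step 1: attention weights = scaled Q @ K^T, with the causal mask applied
    # in the same pass (masked positions never contribute in the fused shader).
    q_t = q.permute(1, 0, 2)                                     # [H, S, D]
    k_t = k_cache.permute(1, 2, 0).repeat_interleave(group, 0)   # [H, D, C]
    attn = torch.matmul(q_t, k_t) * scale                        # [H, S, C]
    pos = torch.arange(seq_len).unsqueeze(-1) + input_pos        # [S, 1]
    causal = torch.arange(ctx_len).unsqueeze(0) <= pos           # [S, C]
    attn = attn.masked_fill(~causal, float("-inf"))

    # Step 2: softmax normalization of the attention weights.
    attn = torch.softmax(attn, dim=-1)

    # Step 3: output = attention weights @ V cache.
    v_t = v_cache.permute(1, 0, 2).repeat_interleave(group, 0)   # [H, C, D]
    out = torch.matmul(attn, v_t)                                 # [H, S, D]
    return out.permute(1, 0, 2)                                   # [S, H, D]
```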
1 parent 245630a commit 89abb6f

30 files changed: +2080 −1547 lines changed

.github/workflows/pull.yml

Lines changed: 4 additions & 0 deletions
@@ -971,6 +971,10 @@ jobs:
 ./cmake-out/backends/vulkan/test/custom_ops/q4gsw_linear
 ./cmake-out/backends/vulkan/test/custom_ops/choose_qparams_per_row
 
+# "Classic" Operator tests
+PYTHON_EXECUTABLE=python bash backends/vulkan/test/scripts/test_op.sh --build
+./cmake-out/backends/vulkan/test/op_tests/vulkan_sdpa_test
+
 # Run e2e testing for selected operators. More operators will be tested via this
 # route in the future.
 python -m unittest backends/vulkan/test/test_vulkan_delegate.py -k "*pt2e*"

backends/vulkan/op_registry.py

Lines changed: 1 addition & 1 deletion
@@ -571,7 +571,7 @@ def register_sdpa_with_kv_cache_op():
 )
 def register_sdpa_ops():
     return OpFeatures(
-        inputs_storage=utils.WIDTH_PACKED_TEXTURE,
+        inputs_storage=utils.CONTIGUOUS_ANY,
         supports_resize=True,
     )

backends/vulkan/runtime/graph/ops/glsl/flash_attention_buffer.glsl

Lines changed: 0 additions & 227 deletions
This file was deleted.
