Commit fb1fff5

Author: ssjia (committed)
Update on "[ET-VK] Implement SDPA with fused ops"
## Context

As title; optimize the SDPA operator by introducing shaders to perform the operation in 3 steps:

1. Compute attention weights by multiplying Q^T x K_cache and applying scale and mask
2. Compute softmax normalization of the computed attention weights
3. Compute the final output by multiplying the attention weights with the V cache

This new implementation is much more efficient than the existing one, which performed slicing, repeat_interleave, and transposition of the projected and cache tensors as separate steps. Fusing scale and mask into the computation of attention weights also allows the computation of elements within the masked region to be skipped.

## Impact

Decode performance for LLMs is much improved. For Llama 3.2 3B generating ~250 tokens, decode speed increases from ~15 tok/s to ~21.5 tok/s.

Differential Revision: [D82053493](https://our.internmc.facebook.com/intern/diff/D82053493/)

[ghstack-poisoned]
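For reference, below is a minimal PyTorch sketch of the three-step computation described above — not the actual Vulkan shader code. The function name, tensor shapes, and the `input_pos` parameter are assumptions made for illustration only.

```python
import torch


def sdpa_three_step_reference(q, k_cache, v_cache, input_pos):
    """Hypothetical reference for the three-step fused SDPA described above.

    Assumed shapes (not taken from the actual implementation):
      q:       [n_heads, seq_len, head_dim]   (projected queries, already transposed)
      k_cache: [n_heads, max_context_len, head_dim]
      v_cache: [n_heads, max_context_len, head_dim]
    input_pos is the position of the first new token in the cache.
    """
    head_dim = q.shape[-1]
    seq_len = q.shape[-2]
    context_len = input_pos + seq_len
    scale = head_dim ** -0.5

    # Step 1: attention weights = Q x K^T, with scale and causal mask fused in.
    # Elements in the masked region never contribute to the output, which is why
    # a fused shader can skip computing them entirely.
    attn = torch.matmul(q, k_cache[:, :context_len].transpose(-2, -1)) * scale
    q_pos = torch.arange(input_pos, context_len).unsqueeze(-1)
    k_pos = torch.arange(context_len).unsqueeze(0)
    attn = attn.masked_fill(k_pos > q_pos, float("-inf"))

    # Step 2: softmax normalization of the attention weights.
    attn = torch.softmax(attn, dim=-1)

    # Step 3: final output = attention weights x V cache.
    return torch.matmul(attn, v_cache[:, :context_len])
```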
Parent: 89abb6f · Commit: fb1fff5

File tree

1 file changed (+4, -1 lines)

.github/workflows/pull.yml

Lines changed: 4 additions & 1 deletion
@@ -973,7 +973,10 @@ jobs:
 
 # "Classic" Operator tests
 PYTHON_EXECUTABLE=python bash backends/vulkan/test/scripts/test_op.sh --build
-./cmake-out/backends/vulkan/test/op_tests/vulkan_sdpa_test
+# TODO(ssjia): figure out how to run custom op tests in CI. Currently, they are
+# failing due to the libstdc++.so.6 installed with conda not supporting
+# GLIBCXX_3.4.30. These tests are still run in Meta internal CI.
+# ./cmake-out/backends/vulkan/test/op_tests/vulkan_sdpa_test
 
 # Run e2e testing for selected operators. More operators will be tested via this
 # route in the future.
