
Commit 5d61242

Update on "[ExecuTorch][BE] Split kv cache and SDPA for better code sharing"
Summary:
Why?
We have coupled SDPA with the KV cache for a while. Initially this was done when we implemented the sdpa_with_kv_cache custom op to reduce the multiple-copy overhead of the KV cache update. (This could instead have been done with a separate custom KV cache update op and a custom SDPA op; recent changes enabled exactly that.) Because the SDPA module owns the KV cache, we get (a) a non-composable implementation and (b) model definitions and components that are harder to reuse from repos like torchtune. The result is that multiple definitions of the same model, llama, are lying around in ExecuTorch, TorchChat, and torchtune. This diff and subsequent ones move toward decoupling the custom KV cache from the custom SDPA so they become composable and module-swap friendly with torchtune's model definition.

How?
Earlier PRs decoupled the KV cache update from SDPA. So now:
1. Decouple the SDPA nn.Module from the KV cache.
2. Standardize the KVCache and SDPA interfaces: both operate on q, k, v tensors in [B, # heads, seq_len, head_dim] format.
3. Step 2 introduces extra transposes when KVCache and SDPA are replaced by custom modules, but a graph pass will be written to undo them.

Test Plan: Existing tests. Make sure perf doesn't regress.

Differential Revision: [D67914054](https://our.internmc.facebook.com/intern/diff/D67914054)

[ghstack-poisoned]
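For illustration, a minimal sketch of the standardized interface described in point 2: a cache whose storage layout is [B, S, H, D] but whose update() accepts and returns [B, # heads, seq_len, head_dim] tensors. The class and buffer names here are illustrative only, not the actual ExecuTorch implementation.

```python
import torch
from typing import Tuple


class SketchKVCache(torch.nn.Module):
    """Illustrative only: storage is [B, S, H, D], but update() accepts and
    returns [B, H, S, D] to match the standardized KVCache/SDPA interface."""

    def __init__(self, max_batch_size, max_seq_len, n_heads, head_dim):
        super().__init__()
        shape = (max_batch_size, max_seq_len, n_heads, head_dim)
        self.register_buffer("k_cache", torch.zeros(shape))
        self.register_buffer("v_cache", torch.zeros(shape))

    def update(
        self, input_pos: torch.Tensor, k_val: torch.Tensor, v_val: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # Incoming k_val/v_val are [B, H, S, D]; transpose into the
        # [B, S, H, D] storage layout. These are the transposes the
        # post-export graph pass is meant to remove later.
        k_val = k_val.transpose(1, 2)
        v_val = v_val.transpose(1, 2)
        self.k_cache[:, input_pos] = k_val
        self.v_cache[:, input_pos] = v_val
        # Hand the full cache back in [B, H, S, D] so SDPA can consume it.
        return self.k_cache.transpose(1, 2), self.v_cache.transpose(1, 2)
```

With k, v of shape [1, 8, 4, 64] and input_pos = torch.arange(4), positions 0-3 of the cache are written and the full cache is returned as [1, 8, max_seq_len, 64].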
2 parents 64ab3f5 + 3c2d80c commit 5d61242

File tree

1 file changed: +9 -6 lines changed

examples/models/llama/source_transformation/quantized_kv_cache.py

Lines changed: 9 additions & 6 deletions
@@ -98,8 +98,8 @@ def update(self, input_pos, k_val, v_val):
         However the storage is [B, S, H, D] so we incur transpose in, transpose out
         This shall be removed by subsequent post-export graph pass
         """
-        k_val = k_val.transpose(1, 2)
-        v_val = v_val.transpose(1, 2)
+        k_val = k_val.transpose(1, 2).contiguous()
+        v_val = v_val.transpose(1, 2).contiguous()
         # quantize current k_val and store it in the cache
         quantized_k_val, k_scales, k_zero_points = self._quantize(k_val)

@@ -152,7 +152,7 @@ def update(self, input_pos, k_val, v_val):
         k_out[:, input_pos] = k_val
         v_out[:, input_pos] = v_val

-        return k_out.transpose(1, 2), v_out.transpose(1, 2)
+        return k_out.transpose(1, 2).contiguous(), v_out.transpose(1, 2).contiguous()

     @classmethod
     def from_float(

@@ -249,12 +249,15 @@ def update(
         self, input_pos: torch.Tensor, k_val: torch.Tensor, v_val: torch.Tensor
     ) -> Tuple[torch.Tensor, torch.Tensor]:
         # input_pos: [S], k_val: [B, H, S, D]
-        k_val = k_val.transpose(1, 2)
-        v_val = v_val.transpose(1, 2)
+        k_val = k_val.transpose(1, 2).contiguous()
+        v_val = v_val.transpose(1, 2).contiguous()
         start_pos = input_pos[0].item()
         _ = torch.ops.llama.update_cache(k_val, self.k_cache, start_pos)
         _ = torch.ops.llama.update_cache(v_val, self.v_cache, start_pos)
-        return self.k_cache.transpose(1, 2), self.v_cache.transpose(1, 2)
+        return (
+            self.k_cache.transpose(1, 2).contiguous(),
+            self.v_cache.transpose(1, 2).contiguous(),
+        )


 def replace_kv_cache_with_custom_kv_cache(module):
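The only functional change in the diff is the added .contiguous() calls. As a general PyTorch illustration (not specific to this file), transpose returns a strided view rather than a dense tensor, and custom ops that assume a dense [B, S, H, D] layout may not accept such a view without the copy:

```python
import torch

x = torch.randn(1, 8, 16, 64)           # [B, H, S, D]
y = x.transpose(1, 2)                    # [B, S, H, D] view over the same storage
print(y.is_contiguous())                 # False: strides no longer match the shape
print(y.contiguous().is_contiguous())    # True: data copied into a dense layout
```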
