Commit 928d08a
Update base for Update on "[ExecuTorch][BE] Split kv cache and SDPA for better code sharing"
Summary:
Why?
We have coupled SDPA with the KV cache for a while. Initially this was done by implementing the sdpa_with_kv_cache custom op to reduce the multiple-copy overhead of the KV cache update. (The same could have been achieved with separate custom KV cache update and custom SDPA ops; recent changes enabled this.)
Because the SDPA module owns the KV cache, we get a) a non-composable
implementation and b) model definitions and components that are harder to
reuse from repos like tune. The upshot is that we have multiple definitions
of the same model, llama, lying around in ET, TorchChat, and Tune. This
diff and subsequent ones move in the direction where the custom
KV cache and custom SDPA become decoupled and composable, making the code more
module-swap friendly with tune's model definition.
How?
Earlier PRs decoupled the KV cache update from SDPA. So now:
1. Decouple the SDPA nn.Module from the KV cache.
2. Standardize the KVCache and SDPA interface: both operate on q, k, v
tensors in [B, n_heads, seq_len, head_dim] format (see the first sketch
after this list).
3. Step 2 will introduce extra transposes when KVCache and SDPA are
replaced by custom modules, but we will write a graph pass to undo
those (see the second sketch below).
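As a rough illustration of steps 1 and 2, here is a minimal sketch of what the decoupled modules could look like. The class names, the `update` method, and the buffer layout are assumptions for illustration, not the actual ExecuTorch code; the point is only that KVCache and SDPA are separate nn.Modules sharing the [B, n_heads, seq_len, head_dim] convention, so either can be swapped for a custom-op-backed module independently.

```python
# Sketch only: hypothetical decoupled KVCache and SDPA modules, not the
# real ExecuTorch classes. Both consume q, k, v shaped
# [B, n_heads, seq_len, head_dim].
import torch
import torch.nn as nn


class KVCache(nn.Module):
    def __init__(self, max_batch, n_heads, max_seq_len, head_dim, dtype=torch.float32):
        super().__init__()
        shape = (max_batch, n_heads, max_seq_len, head_dim)
        self.register_buffer("k_cache", torch.zeros(shape, dtype=dtype))
        self.register_buffer("v_cache", torch.zeros(shape, dtype=dtype))

    def update(self, input_pos, k, v):
        # k, v: [B, n_heads, seq_len, head_dim]; write the new entries at
        # input_pos in place and return the full caches for attention.
        self.k_cache[:, :, input_pos] = k
        self.v_cache[:, :, input_pos] = v
        return self.k_cache, self.v_cache


class SDPA(nn.Module):
    def forward(self, q, k, v, mask=None):
        # q, k, v: [B, n_heads, seq_len, head_dim]
        return torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```

With this split, an attention block composes the two explicitly instead of calling one fused op:

```python
k_full, v_full = kv_cache.update(input_pos, k, v)
out = sdpa(q, k_full, v_full, mask)
```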
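For step 3, a minimal sketch of the kind of graph pass meant here, assuming a torch.fx-style graph: two back-to-back `transpose` calls on the same dim pair are an identity and can both be erased. The actual ExecuTorch pass will differ; this only illustrates the idea.

```python
# Sketch only: cancel x.transpose(d0, d1).transpose(d0, d1) pairs in an
# fx graph. Not the actual ExecuTorch pass.
import torch.fx as fx


def remove_redundant_transposes(gm: fx.GraphModule) -> fx.GraphModule:
    for node in list(gm.graph.nodes):
        if node.op == "call_method" and node.target == "transpose":
            parent = node.args[0]
            if (
                isinstance(parent, fx.Node)
                and parent.op == "call_method"
                and parent.target == "transpose"
                and parent.args[1:] == node.args[1:]  # same dim pair => identity
                and len(parent.users) == 1
            ):
                # Route consumers of the second transpose to the original
                # input, then erase both now-dead transpose nodes.
                node.replace_all_uses_with(parent.args[0])
                gm.graph.erase_node(node)
                gm.graph.erase_node(parent)
    gm.graph.lint()
    gm.recompile()
    return gm
```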
Test Plan:
Existing tests.
Make sure perf doesn't regress.
Differential Revision: [D67914054](https://our.internmc.facebook.com/intern/diff/D67914054)
[ghstack-poisoned]