Making update_cache update across the batch dimension. (pytorch#4822)
Summary:
Pull Request resolved: pytorch#4822
This is part 1 of a multi-part change to make torch.ops.llama.sdpa_with_kv_cache batch-aware. This is needed for batched sdpa cases, for example LLM beam search.
As a performance optimization, update_cache implements the following operation
```
k_cache[:, start_pos : start_pos + seq_len, :, :] = k
v_cache[:, start_pos : start_pos + seq_len, :, :] = v
```
as part of the fused sdpa_with_kv_cache op. A naive export of this code instead inserts expensive slice-scatter ops (sketched below).
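For illustration, here is a rough sketch (not taken from this PR) of the functionalized form that an export could lower the slice assignment to. `naive_cache_update` is a hypothetical name, and `dim=1` follows the sequence axis of the cache layout shown in the snippet above; each `torch.slice_scatter` call materializes a full copy of the cache, which is the cost the fused op avoids.
```
import torch

def naive_cache_update(k_cache, v_cache, k, v, start_pos):
    # Functionalized equivalent of the in-place slice assignment above:
    # slice_scatter returns a new tensor, copying the whole cache each step.
    seq_len = k.size(1)
    k_cache = torch.slice_scatter(
        k_cache, k, dim=1, start=start_pos, end=start_pos + seq_len
    )
    v_cache = torch.slice_scatter(
        v_cache, v, dim=1, start=start_pos, end=start_pos + seq_len
    )
    return k_cache, v_cache
```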
ExecuTorch-exported Llama models are implemented with a greedy search, so this op has not previously needed to be batch-aware. However, when working with other models, or when doing LLM beam search, this code needs to update the cache across the batch dimension, as in the sketch below.
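A minimal Python sketch of the intended batch-aware behavior, assuming the same `[batch, seq_len, heads, dim]` cache layout as above; `update_cache_batched` is a hypothetical helper for illustration, not the actual fused ExecuTorch kernel:
```
import torch

def update_cache_batched(k, v, k_cache, v_cache, start_pos):
    # Hypothetical illustration: walk the batch dimension explicitly so
    # each row (e.g. each beam in beam search) writes its new k/v entries
    # into its own region of the cache.
    batch_size, seq_len = k.shape[0], k.shape[1]
    for b in range(batch_size):
        k_cache[b, start_pos : start_pos + seq_len] = k[b]
        v_cache[b, start_pos : start_pos + seq_len] = v[b]
```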
Differential Revision: D61605316