Summary:
Pull Request resolved: #4822
This change makes torch.ops.llama.sdpa_with_kv_cache batch-aware. This is needed for batched SDPA cases, for example LLM beam search.
* Makes update_cache update across the batch dimension
As a performance optimization, update_cache implements the following operation
```
k_cache[:, start_pos : start_pos + seq_len, :, :] = k
v_cache[:, start_pos : start_pos + seq_len, :, :] = v
```
as part of the fused sdpa_with_kv_cache op. A naive export of this code inserts expensive slice-scatter ops. sdpa_with_kv_cache fuses this update with the flash attention op for tensors that follow a predetermined format, [batch, length, heads, dim]. This change removes the assumption that batch == 1.
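For illustration, here is a minimal PyTorch sketch of what the batched cache update computes; the shapes are hypothetical and this is plain eager-mode code, not the fused kernel itself:

```python
import torch

# Hypothetical shapes: 4 beams, a cache of 128 positions, 8 heads, head dim 64,
# writing a single-token step at start_pos. Layout is [batch, length, heads, dim].
batch, max_len, heads, dim = 4, 128, 8, 64
k_cache = torch.zeros(batch, max_len, heads, dim)
v_cache = torch.zeros(batch, max_len, heads, dim)
k = torch.randn(batch, 1, heads, dim)
v = torch.randn(batch, 1, heads, dim)
start_pos, seq_len = 10, k.size(1)

# Reference semantics of the fused update, applied across every batch entry
# rather than only batch index 0.
k_cache[:, start_pos : start_pos + seq_len, :, :] = k
v_cache[:, start_pos : start_pos + seq_len, :, :] = v
```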
* Makes sdpa_with_kv_cache apply cpu_flash_attention across all batch entries as well.
ExecuTorch-exported Llama models use greedy search, so it has not previously been necessary for this op to be batch-aware. However, when working with other models or when doing LLM beam search, this is no longer the case.
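As a rough reference for the batched attention semantics, the sketch below uses torch.nn.functional.scaled_dot_product_attention to compute attention over the cache for every batch entry. The function name and masking details are illustrative assumptions, not the cpu_flash_attention kernel:

```python
import torch
import torch.nn.functional as F

def sdpa_over_cache_reference(q, k_cache, v_cache, start_pos):
    """Sketch of the batched semantics: q is [batch, q_len, heads, dim], the
    caches are [batch, max_len, heads, dim] and already updated through
    start_pos + q_len. Masking is elided; for single-token decode steps that
    attend only to already-written positions, no causal mask is needed."""
    q_len = q.size(1)
    end = start_pos + q_len
    # SDPA expects [batch, heads, length, dim], so move heads ahead of length.
    q_t = q.transpose(1, 2)
    k_t = k_cache[:, :end].transpose(1, 2)
    v_t = v_cache[:, :end].transpose(1, 2)
    out = F.scaled_dot_product_attention(q_t, k_t, v_t)
    return out.transpose(1, 2)  # back to [batch, q_len, heads, dim]
```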
Reviewed By: kimishpatel
Differential Revision: D61605316