
Commit e776168

cont : restore padding for n_kv tensor shape
1 parent e28cec3 commit e776168

File tree: 1 file changed (+5, −1 lines)


src/llama-kv-cache.cpp

Lines changed: 5 additions & 1 deletion
@@ -957,10 +957,14 @@ bool llama_kv_cache::get_has_shift() const {
 uint32_t llama_kv_cache::get_n_kv(const slot_info & sinfo) const {
     uint32_t result = 0;

+    // pad the n_kv value so that the graph remains constant across batches and can be reused
+    // note: this also helps some backends with performance (f.ex https://github.com/ggml-org/llama.cpp/pull/16812#issuecomment-3455112220)
+    const uint32_t n_pad_cur = std::max(n_pad, 256u);
+
     for (uint32_t s = 0; s < sinfo.n_stream(); ++s) {
         const auto & cells = v_cells[sinfo.strm[s]];

-        result = std::max(std::min(cells.size(), std::max(n_pad, GGML_PAD(cells.used_max_p1(), n_pad))), result);
+        result = std::max(std::min(cells.size(), std::max(n_pad_cur, GGML_PAD(cells.used_max_p1(), n_pad_cur))), result);
     }

     return result;
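
For context, here is a small standalone sketch of the restored padding computation. The values of n_pad, used_max_p1, and cells_size below are made up for illustration, and GGML_PAD is reproduced as the usual round-up-to-a-multiple macro; this is not copied from a specific ggml version.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// assumed to match ggml's GGML_PAD round-up-to-a-multiple macro
#define GGML_PAD(x, n) (((x) + (n) - 1) & ~((n) - 1))

int main() {
    const uint32_t n_pad       = 32;   // hypothetical padding configured for the cache
    const uint32_t used_max_p1 = 300;  // hypothetical index of the highest used cell + 1
    const uint32_t cells_size  = 4096; // hypothetical total number of cells in the stream

    // clamp the padding to at least 256 so the resulting n_kv (and thus the graph shape)
    // stays constant across batches whose used_max_p1 falls in the same 256-wide bucket
    const uint32_t n_pad_cur = std::max(n_pad, 256u);

    // same expression as in get_n_kv(), written out for a single stream
    const uint32_t n_kv = std::min(cells_size, std::max(n_pad_cur, GGML_PAD(used_max_p1, n_pad_cur)));

    std::printf("n_kv = %u\n", n_kv); // prints 512: 300 rounded up to the next multiple of 256
    return 0;
}

Any batch whose used_max_p1 falls between 257 and 512 yields the same n_kv of 512, so the KV tensors keep the same shape and the compute graph can be reused instead of rebuilt per batch.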
