
Commit 40e3dfd

llama : fix qs.n_attention_wv for DeepSeek-V2 (#9156)

compilade authored and Nexesenex committed
1 parent 1f47f14 commit 40e3dfd

File tree

1 file changed: +2 −1 lines changed


src/llama.cpp

Lines changed: 2 additions & 1 deletion
@@ -19899,7 +19899,8 @@ static void llama_model_quantize_internal(const std::string & fname_inp, const s
             // TODO: avoid hardcoded tensor names - use the TN_* constants
             if (name.find("attn_v.weight") != std::string::npos ||
-                name.find("attn_qkv.weight") != std::string::npos) {
+                name.find("attn_qkv.weight") != std::string::npos ||
+                name.find("attn_kv_b.weight") != std::string::npos) {
                 ++qs.n_attention_wv;
             } else if (name == LLM_TN(model.arch)(LLM_TENSOR_OUTPUT, "weight")) {
                 qs.has_output = true;
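The condition patched above decides how many tensors the quantizer counts as attention value weights (qs.n_attention_wv). DeepSeek-V2's attention stores its value projection in tensors named attn_kv_b.weight, so without the added substring those tensors were not counted. A minimal self-contained sketch of the counting logic, extracted for illustration (the helper name count_attention_wv is hypothetical, not part of llama.cpp):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Count tensors whose names match any of the attention value-projection
// patterns, mirroring the patched condition in llama_model_quantize_internal.
// "attn_kv_b.weight" is the DeepSeek-V2 case added by this commit.
static int count_attention_wv(const std::vector<std::string> & names) {
    int n = 0;
    for (const auto & name : names) {
        if (name.find("attn_v.weight")    != std::string::npos ||
            name.find("attn_qkv.weight")  != std::string::npos ||
            name.find("attn_kv_b.weight") != std::string::npos) {
            ++n;
        }
    }
    return n;
}
```

With this change, a DeepSeek-V2 layer tensor such as blk.0.attn_kv_b.weight contributes to the count, whereas before the fix it was silently skipped.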

0 commit comments
