Commit 68f7461

Optional attn bias for GLM4
1 parent 6a5d303 commit 68f7461

File tree

1 file changed (+1, −1 lines)


exllamav2/architecture.py

Lines changed: 1 addition & 1 deletion
@@ -832,7 +832,7 @@ class Params:
             "norm_2": ".post_attention_layernorm",
             "norm_2_post": ".post_mlp_layernorm",
         })
-        self.lm.attention_bias_qkv = True
+        self.lm.attention_bias_qkv = read_config.get("attention_bias", False)

         # Llama (default + fallback)
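The change replaces a hard-coded QKV bias flag with a lookup in the model's parsed config, defaulting to False when the key is absent. A minimal sketch of that pattern, using a hypothetical helper name (`qkv_bias_from_config` is not part of exllamav2; `read_config` in the diff is assumed to be the parsed config.json dict):

```python
def qkv_bias_from_config(read_config: dict) -> bool:
    # Mirrors the new line in architecture.py: use the model's
    # "attention_bias" setting when present, otherwise default to False
    # (previously this was unconditionally True for GLM4).
    return read_config.get("attention_bias", False)


# Example: a config that enables the bias, and one that omits the key.
with_bias = qkv_bias_from_config({"attention_bias": True})
without_key = qkv_bias_from_config({"hidden_size": 4096})
```

The `dict.get(key, default)` idiom keeps the flag backward-compatible: GLM4 checkpoints that never specify `attention_bias` simply fall through to the default instead of raising a KeyError.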

0 commit comments

Comments
 (0)