
Commit 520ca38

[Hotfix][VLM] Fixing max position embeddings for Pixtral (vllm-project#8399)
1 parent: 7de49aa

File tree

1 file changed: +2 -0 lines changed

vllm/transformers_utils/config.py

Lines changed: 2 additions & 0 deletions
@@ -206,6 +206,8 @@ def recurse_elems(elem: Any):
     config_dict["tie_word_embeddings"] = config_dict.get(
         "tie_embeddings", False)
     config_dict["max_seq_len"] = config_dict.get("max_seq_len", 128_000)
+    config_dict["max_position_embeddings"] = config_dict.get(
+        "max_position_embeddings", 128_000)
 
     if config_dict.get("moe") is not None:
         config_dict["architectures"] = ["MixtralForCausalLM"]

0 commit comments
