Commit 3a8c238

Fix for KeyError on Loading LLaMA (#1978)
1 parent c85b80c commit 3a8c238

File tree

1 file changed: +4 −0 lines


vllm/model_executor/models/llama.py

Lines changed: 4 additions & 0 deletions
@@ -322,6 +322,10 @@ def load_weights(self,
                 model_name_or_path, cache_dir, load_format, revision):
             if "rotary_emb.inv_freq" in name:
                 continue
+            if "rotary_emb.cos_cached" in name:
+                continue
+            if "rotary_emb.sin_cached" in name:
+                continue
             for (param_name, weight_name, shard_id) in stacked_params_mapping:
                 if weight_name not in name:
                     continue
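
The change above skips checkpoint tensors named like "rotary_emb.cos_cached" and "rotary_emb.sin_cached": some LLaMA checkpoints ship these precomputed rotary-embedding buffers, but the model recomputes them itself, so they have no entry in the model's parameter dict and looking them up raises a KeyError. A minimal standalone sketch of this filtering pattern (the function and variable names here are hypothetical, not vLLM's actual API):

```python
# Sketch of the weight-loading filter added by this commit: substrings of
# checkpoint tensor names that have no matching model parameter are skipped
# instead of being looked up (which would raise KeyError).

SKIPPED_SUBSTRINGS = (
    "rotary_emb.inv_freq",
    "rotary_emb.cos_cached",
    "rotary_emb.sin_cached",
)


def load_weights(params_dict, checkpoint_weights):
    """Copy checkpoint tensors into params_dict, skipping cached rotary buffers.

    params_dict: mapping of parameter name -> parameter (hypothetical stand-in
    for the model's named-parameter dict).
    checkpoint_weights: mapping of checkpoint tensor name -> tensor.
    Returns the list of names actually loaded.
    """
    loaded = []
    for name, weight in checkpoint_weights.items():
        # Cached rotary buffers exist only in the checkpoint; without this
        # check, params_dict[name] below would raise KeyError.
        if any(sub in name for sub in SKIPPED_SUBSTRINGS):
            continue
        params_dict[name] = weight
        loaded.append(name)
    return loaded
```

Checking each name against a tuple of substrings mirrors the commit's chain of `if ... in name: continue` tests, just collapsed into one loop.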
