Commit b08f22b (2 parents: b928cc7 + 11041ae)

Update on "Use llm_config instead of args in export_llama functions"

Differential Revision: [D75484927](https://our.internmc.facebook.com/intern/diff/D75484927) [ghstack-poisoned]

File tree: 1 file changed (+1, -1 lines)


examples/models/llama/model.py

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ def __init__(self, llm_config: Optional[LlmConfig] = None):
         checkpoint = torch.load(checkpoint_path, map_location=device, mmap=True)

         # If given checkpoint is fairseq, convert to llama checkpoint.
-        fairseq2_checkpoint = llm_config.base.fairseq2
+        fairseq2_checkpoint = self.llm_config.base.fairseq2
         if fairseq2_checkpoint:
             print("Using fairseq2 checkpoint")
             checkpoint = convert_to_llama_checkpoint(checkpoint=checkpoint)
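The one-line change reads the config from the instance attribute rather than the constructor parameter. Since the parameter is `Optional[LlmConfig]`, the local `llm_config` can be `None` when the caller omits it, whereas the constructor presumably normalizes it into a valid `self.llm_config` before this point. A minimal sketch of that pattern (the `BaseConfig`/`Model` class bodies and the normalization line are assumptions for illustration, not the actual ExecuTorch code):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class BaseConfig:
    # Hypothetical stand-in for the real config; only `fairseq2` mirrors the diff.
    fairseq2: bool = False


@dataclass
class LlmConfig:
    base: BaseConfig = field(default_factory=BaseConfig)


class Model:
    def __init__(self, llm_config: Optional[LlmConfig] = None):
        # Normalize once: the attribute is always a valid LlmConfig,
        # while the local parameter may still be None.
        self.llm_config = llm_config if llm_config else LlmConfig()

        # Reading the local `llm_config.base` here would raise
        # AttributeError when the caller passed None;
        # `self.llm_config.base` is always safe.
        self.is_fairseq2 = self.llm_config.base.fairseq2


m = Model()  # no config passed: the local parameter is None
print(m.is_fairseq2)  # False, instead of an AttributeError
```

This is why the fix swaps `llm_config.base.fairseq2` for `self.llm_config.base.fairseq2`: only the attribute is guaranteed to be non-`None` after normalization.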
