This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Commit a356897 (parent: 1eff939)

get rid of model.model

1 file changed (+1, -1)

torchchat/generate.py

1 addition, 1 deletion

@@ -795,7 +795,7 @@ def chat(
         # This is a hack to get around the fact that different models have different ways to record their max_seq_length and might be wrong
         # TODO: unify the max_seq_length config representation.
-        text_transformer_args = getattr(self.model.model, "config", None)
+        text_transformer_args = self.model.text_transformer_args
         max_seq_length = (
             text_transformer_args.max_seq_length if text_transformer_args else 2048
         )
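The change swaps a reach into the inner `model.model` attribute for the wrapper's own `text_transformer_args` field, keeping the 2048-token fallback when no config is recorded. A minimal sketch of the resolution logic, using stand-in classes (`Model`, `TransformerArgs`, and `resolve_max_seq_length` are illustrative names, not torchchat's actual implementation):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TransformerArgs:
    # Stand-in for a text transformer's recorded config.
    max_seq_length: int = 4096


@dataclass
class Model:
    # After this commit, the wrapper exposes the text transformer's
    # args directly, so callers no longer dig into model.model.
    text_transformer_args: Optional[TransformerArgs] = None


def resolve_max_seq_length(model: Model) -> int:
    # Mirrors the diff: use the recorded value if present,
    # otherwise fall back to 2048.
    args = model.text_transformer_args
    return args.max_seq_length if args else 2048


print(resolve_max_seq_length(Model(TransformerArgs(4096))))  # 4096
print(resolve_max_seq_length(Model()))  # 2048
```

The benefit of the one-line change is that the fallback no longer depends on the inner attribute layout of each model class, only on the wrapper exposing `text_transformer_args`.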

0 commit comments