
Commit 49687b6

Make default max_seq_length in openai_api consistent with chat
These two APIs should use the same default max_seq_length (there may be a better place to define it). The chat path defaults to 2048, see https://github.com/pytorch/torchchat/blob/main/torchchat/generate.py#L821. The way max_seq_length is pulled out of the model looks fragile; it should stay consistent with the model definition and the cache setup.
1 parent: c454026

File tree

1 file changed: +1 −1


torchchat/usages/openai_api.py

Lines changed: 1 addition & 1 deletion
@@ -289,7 +289,7 @@ def __init__(self, *args, **kwargs):
             )
         except:
             # can not find max_seq_length in model config, use default value
-            self.max_seq_length = 128
+            self.max_seq_length = 2048
         # The System fingerprint is a unique identifier for the model and its configuration.
         self.system_fingerprint = (
             f"{self.builder_args.device}_{self.builder_args.precision}"
