feat: exclude GPT-5 models from 20% context window output token cap #6963
```diff
@@ -107,7 +107,17 @@ export const getModelMaxOutputTokens = ({
 	}

 	// If model has explicit maxTokens, clamp it to 20% of the context window
+	// Exception: GPT-5 models should use their exact configured max output tokens
```
Contributor (Author): This GPT-5 exception is a significant behavior change that should be documented in the function's JSDoc. Future maintainers might wonder why GPT-5 models get special treatment. Could we add a note to the function documentation explaining this exception?
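A minimal sketch of what such a JSDoc note could look like (the wording and summary line are illustrative, not part of the PR):

```typescript
/**
 * Returns the max output tokens to request for a model.
 *
 * Models with an explicit maxTokens are normally clamped to 20% of their
 * context window. Exception: GPT-5 models bypass this clamp and use their
 * exact configured max output tokens.
 */
```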
```diff
 	if (model.maxTokens) {
+		// Check if this is a GPT-5 model (case-insensitive)
+		const isGpt5Model = modelId.toLowerCase().includes("gpt-5")
```
Contributor (Author): The pattern matching here could be more precise. Currently, `modelId.toLowerCase().includes("gpt-5")` matches any model id that merely contains the substring. Could we consider a more specific pattern, one that only treats "gpt-5" as a match when it appears as its own model-name segment? This would match "gpt-5", "gpt-5-turbo", "openai/gpt-5-preview" but not "not-gpt-5" or "gpt-500".
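The reviewer's actual suggested change is not visible in this rendering; one regex that satisfies the examples above (an illustration only, not the reviewer's exact proposal) would be:

```typescript
// Illustrative pattern: treat "gpt-5" as a match only when it starts the id or
// follows a provider prefix ("/"), and is followed by the end of the id or a "-".
const GPT5_PATTERN = /(^|\/)gpt-5($|-)/i

const isGpt5Model = GPT5_PATTERN.test(modelId)
// Matches:  "gpt-5", "gpt-5-turbo", "openai/gpt-5-preview"
// Rejects:  "not-gpt-5", "gpt-500", "legacy-gpt-5-incompatible"
```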
```diff
+
+		// GPT-5 models bypass the 20% cap and use their full configured max tokens
+		if (isGpt5Model) {
+			return model.maxTokens
+		}
+
+		// All other models are clamped to 20% of context window
 		return Math.min(model.maxTokens, Math.ceil(model.contextWindow * 0.2))
 	}
```
Great test coverage! Though it might be worth adding edge case tests to ensure the pattern matching doesn't have false positives. For example, testing that "not-gpt-5", "gpt-500", or "legacy-gpt-5-incompatible" don't incorrectly bypass the cap.
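A rough sketch of such edge-case tests in a vitest/jest style. The import path, the `settings` argument, and the model fixture shape are assumptions rather than details from the PR, and the assertions presuppose the stricter matching the reviewer asks for; with the current `includes("gpt-5")` check these cases would fail, which is exactly the concern:

```typescript
import { getModelMaxOutputTokens } from "../index" // hypothetical import path

describe("getModelMaxOutputTokens GPT-5 detection edge cases", () => {
	// Hypothetical fixture: 200k context window with a generous configured maxTokens
	const model = { contextWindow: 200_000, maxTokens: 100_000, supportsPromptCache: false }

	it.each(["not-gpt-5", "gpt-500", "legacy-gpt-5-incompatible"])(
		"still clamps output tokens for %s",
		(modelId) => {
			// settings argument is a guess at the call shape, not taken from the PR
			const result = getModelMaxOutputTokens({ modelId, model, settings: {} })
			// 20% of the 200k context window is 40k, well below the configured 100k
			expect(result).toBe(40_000)
		},
	)
})
```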