oss(py): update openai chat page #1450
base: main
Conversation
Mintlify preview ID generated: preview-mdrxyu-1763272929-73a29a9
Mintlify preview ID generated: preview-mdrxyu-1763273025-8b7a5e7
If you're getting empty responses from reasoning models like `gpt-5-nano`, this is likely due to restrictive token limits. The model uses tokens for internal reasoning and may not have any left for the final output.
Set `max_tokens=None` or increase the token limit to allow sufficient tokens for both reasoning and output generation.
Yeah, but I think this covers the situation where they're adjusting it.
Perhaps the following?
Suggested change:
- Set `max_tokens=None` or increase the token limit to allow sufficient tokens for both reasoning and output generation.
+ Ensure `max_tokens` is set to `None` or increase the token limit to allow sufficient tokens for both reasoning and output generation.
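For context, a minimal sketch of the behavior the callout describes, assuming the page's examples use langchain-openai's `ChatOpenAI` wrapper; the tiny `max_tokens=16` value is purely illustrative:

```python
from langchain_openai import ChatOpenAI

# A small max_tokens budget may be consumed entirely by the model's
# internal reasoning, which can leave the visible answer empty.
constrained = ChatOpenAI(model="gpt-5-nano", max_tokens=16)

# Leaving max_tokens as None lets the model spend tokens on both
# reasoning and the final output.
unconstrained = ChatOpenAI(model="gpt-5-nano", max_tokens=None)

response = unconstrained.invoke("Briefly explain what a token limit is.")
print(response.content)
```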
Co-authored-by: ccurme <[email protected]>
Mintlify preview ID generated: preview-mdrxyu-1763388809-b756672
Closes #455
Closes #479
Closes #532