fix: enable reasoning visibility for DeepSeek V3.1 and GLM-4.5 models #7372
Conversation
- Add supportsReasoningEffort flag to DeepSeek chat and reasoner models
- Add supportsReasoningEffort flag to GLM-4.5 and GLM-4.5-air models (both international and mainland)
- This enables the reasoning/thinking section visibility for these models when used through the OpenAI Compatible provider (see the sketch below)

Fixes #7370
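A minimal sketch of the shape of the change, assuming the model capability maps are plain objects keyed by model ID; the IDs "deepseek-chat" and "deepseek-reasoner" and any fields beyond those visible in the diff hunks below are illustrative assumptions, not taken from the repository:

```ts
// Hypothetical excerpt of deepseek.ts: the only substantive change is the
// supportsReasoningEffort flag; the surrounding fields mirror the diff hunks below.
export const deepSeekModels = {
	"deepseek-chat": {
		contextWindow: 128_000,
		supportsImages: false,
		supportsPromptCache: true,
		supportsReasoningEffort: true, // newly added: surfaces the reasoning/thinking section
	},
	"deepseek-reasoner": {
		contextWindow: 128_000,
		supportsImages: false,
		supportsPromptCache: true,
		supportsReasoningEffort: true, // newly added
	},
} as const
```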
Reviewing my own code because apparently I trust no one, not even myself.
contextWindow: 128_000,
supportsImages: false,
supportsPromptCache: true,
supportsReasoningEffort: true, // DeepSeek V3.1 supports thinking/reasoning
The implementation looks correct and follows the existing pattern. However, would it be helpful to add test coverage for the supportsReasoningEffort flag? The existing tests in src/api/providers/__tests__/deepseek.spec.ts could verify this capability is properly set.
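To make the suggestion concrete, here is a minimal Vitest-style sketch of such an assertion; the import path and the model IDs "deepseek-chat" and "deepseek-reasoner" are assumptions and would need to match the actual exports:

```ts
import { describe, it, expect } from "vitest"
// Hypothetical import path; point this at wherever deepSeekModels is exported.
import { deepSeekModels } from "../../../shared/api"

describe("deepSeekModels reasoning capability", () => {
	it("marks the chat and reasoner models as supporting reasoning effort", () => {
		expect(deepSeekModels["deepseek-chat"].supportsReasoningEffort).toBe(true)
		expect(deepSeekModels["deepseek-reasoner"].supportsReasoningEffort).toBe(true)
	})
})
```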
contextWindow: 128_000,
supportsImages: false,
supportsPromptCache: true,
supportsReasoningEffort: true, // DeepSeek-R1 supports thinking/reasoning
Minor suggestion: For consistency, consider standardizing the comment wording across all models. Currently using "DeepSeek V3.1 supports thinking/reasoning" here and "DeepSeek-R1 supports thinking/reasoning" below. Perhaps just "supports reasoning" would be cleaner?
contextWindow: 131_072,
supportsImages: false,
supportsPromptCache: true,
supportsReasoningEffort: true, // GLM-4.5 supports thinking/reasoning
Good to see both international and mainland configurations updated consistently. Similar to the DeepSeek models, consider adding test assertions in src/api/providers/__tests__/zai.spec.ts to verify the supportsReasoningEffort flag is properly set for these models.
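Along the same lines, a hedged sketch of what those assertions might look like; the import path and the model IDs "glm-4.5" and "glm-4.5-air" are assumptions:

```ts
import { describe, it, expect } from "vitest"
// Hypothetical import path and model IDs; adjust to match the actual exports used by zai.ts.
import { internationalZAiModels, mainlandZAiModels } from "../../../shared/api"

describe("GLM-4.5 reasoning capability", () => {
	const variants = { international: internationalZAiModels, mainland: mainlandZAiModels }
	for (const [label, models] of Object.entries(variants)) {
		it(`marks GLM-4.5 and GLM-4.5-Air as supporting reasoning effort (${label})`, () => {
			expect(models["glm-4.5"].supportsReasoningEffort).toBe(true)
			expect(models["glm-4.5-air"].supportsReasoningEffort).toBe(true)
		})
	}
})
```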
This PR fixes the reasoning/thinking section visibility issue for DeepSeek V3.1 and GLM-4.5 models when used through the OpenAI Compatible provider.

Problem
The reasoning/thinking section was not shown for these models because their model definitions did not declare support for reasoning effort.

Solution
- Add the supportsReasoningEffort: true flag to DeepSeek chat and reasoner models
- Add the supportsReasoningEffort: true flag to all GLM-4.5 model variants (standard and Air versions)

Testing
Impact
This change enables users to see the reasoning/thinking process for:
- DeepSeek V3.1 models (chat and reasoner)
- GLM-4.5 and GLM-4.5-Air (both international and mainland configurations)
Fixes #7370
cc @AykhanUV - Yes, this fix addresses both DeepSeek V3.1 and GLM-4.5 models as requested.
Important
Adds supportsReasoningEffort: true to enable reasoning visibility for DeepSeek V3.1, DeepSeek R1, GLM-4.5, and GLM-4.5-Air models.
- Adds supportsReasoningEffort: true to deepSeekModels in deepseek.ts for the DeepSeek V3.1 and DeepSeek R1 models.
- Adds supportsReasoningEffort: true to internationalZAiModels and mainlandZAiModels in zai.ts for the GLM-4.5 and GLM-4.5-Air models.

This description was generated automatically for 774afd5 and will update as commits are pushed.