fix: enable reasoning display for DeepSeek V3 models with reasoning effort #7371
Conversation
…ffort

- Updated OpenAI provider to detect DeepSeek V3/chat models when reasoning is enabled
- DeepSeek V3.1 models now properly show reasoning/thinking sections
- Added test coverage for DeepSeek V3 reasoning scenarios

Fixes #7370
Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.
```ts
		(modelId.toLowerCase().includes("deepseek") && reasoning) ||
		modelId.includes("deepseek-reasoner") ||
		enabledR1Format
	const deepseekReasoner = isDeepSeekWithReasoning
```
The variable naming here could be clearer. We have isDeepSeekWithReasoning and deepseekReasoner which seem redundant. Could we simplify to use just one consistent variable name?
```ts
	const deepseekReasoner = modelId.includes("deepseek-reasoner") || enabledR1Format
	// Check if this is a DeepSeek model with reasoning enabled
	const isDeepSeekWithReasoning =
		(modelId.toLowerCase().includes("deepseek") && reasoning) ||
```
Is this intentional to keep the detection logic inline? Consider extracting modelId.toLowerCase().includes("deepseek") && reasoning into a helper method like isDeepSeekModelWithReasoning() for better readability and potential reuse.
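A rough sketch of what that extraction might look like; the helper name is the one suggested above, and the surrounding variables mirror the names in the diff (this is illustrative, not the PR's actual implementation):

```typescript
// Hypothetical helper extracted from the inline check in the diff above.
function isDeepSeekModelWithReasoning(modelId: string, reasoning: boolean): boolean {
	return modelId.toLowerCase().includes("deepseek") && reasoning
}

// Example usage mirroring the combined condition from the diff:
const modelId = "deepseek-v3.1"
const reasoning = true
const enabledR1Format = false
const deepseekReasoner =
	isDeepSeekModelWithReasoning(modelId, reasoning) ||
	modelId.includes("deepseek-reasoner") ||
	enabledR1Format
console.log(deepseekReasoner) // true
```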
```ts
			expect(callArgs.temperature).toBe(0.6)
		})

		it("should detect DeepSeek V3 models with reasoning effort as reasoning models", async () => {
```
These tests look good, but what about edge cases like "DeepSeek-V3" (uppercase) or "deepseek-v3.1"? Should we add tests to ensure the case-insensitive matching works correctly for all variations?
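A quick sketch of how those variants would fare against a lowercase-based check (the model IDs here are illustrative, taken from the naming patterns discussed in this thread):

```typescript
// Illustrative check: toLowerCase() should make every casing variant match.
const matchesDeepSeek = (modelId: string): boolean =>
	modelId.toLowerCase().includes("deepseek")

const variants = ["DeepSeek-V3", "deepseek-v3.1", "DEEPSEEK-CHAT", "DeepSeek-Reasoner"]
for (const id of variants) {
	console.log(id, matchesDeepSeek(id)) // every variant matches
}
```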
```ts
			// which combines system and user messages
			expect(callArgs.messages[0].role).toBe("user")
			expect(callArgs.messages[0].content).toContain("You are a helpful assistant.")
			expect(callArgs.reasoning_effort).toBe("medium")
```
Could we also verify that there's no separate system message in the converted messages? This would make the R1 format usage more explicit:
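A runnable sketch of that extra check, using a hypothetical converted-message shape rather than the real test harness (the message content is illustrative):

```typescript
// Hypothetical shape of the messages after R1 conversion: the system prompt
// is merged into the first user message, so no "system" role should remain.
type Msg = { role: "system" | "user" | "assistant"; content: string }

const convertedMessages: Msg[] = [
	{ role: "user", content: "You are a helpful assistant.\n\nHello" },
]

// The suggested extra assertion: no standalone system message survives.
const hasSystemMessage = convertedMessages.some((m) => m.role === "system")
console.log(hasSystemMessage) // false
```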
```ts
	const deepseekReasoner = modelId.includes("deepseek-reasoner") || enabledR1Format
	// Check if this is a DeepSeek model with reasoning enabled
	const isDeepSeekWithReasoning =
		(modelId.toLowerCase().includes("deepseek") && reasoning) ||
```
The issue comment mentions GLM-4.5 has the same problem. Should we consider adding similar detection for GLM models in this PR, or would that be better as a separate fix to keep this PR focused?
Issue needs scoping
Summary
This PR fixes the issue where DeepSeek V3.1's reasoning/thinking section doesn't appear when reasoning effort is enabled through the OpenAI Compatible provider.
Problem
When using DeepSeek V3.1 with the OpenAI Compatible provider and enabling reasoning effort, the thinking/reasoning section was not visible in the UI, even though the model supports reasoning.
Solution
Updated the OpenAI provider to properly detect DeepSeek V3/chat models when reasoning is enabled. The fix checks whether:
- the model ID contains "deepseek" (case-insensitive), and
- reasoning effort is enabled.
When both conditions are met, the model is treated as a reasoning model and uses the R1 format, which properly displays the reasoning content.
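The detection described above can be sketched as follows; the variable names follow the PR's diff, but the wrapper function name is hypothetical and the sketch only models the gate, not the R1-format conversion itself:

```typescript
// Sketch of the detection gate described above. Names follow the PR's diff;
// `usesR1Format` is an illustrative wrapper, not a function from the codebase.
function usesR1Format(modelId: string, reasoning: boolean, enabledR1Format: boolean): boolean {
	const isDeepSeekWithReasoning =
		(modelId.toLowerCase().includes("deepseek") && reasoning) ||
		modelId.includes("deepseek-reasoner") ||
		enabledR1Format
	return isDeepSeekWithReasoning
}

console.log(usesR1Format("deepseek-v3.1", true, false)) // true
console.log(usesR1Format("gpt-4o", true, false)) // false
```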
Changes
- Updated `src/api/providers/openai.ts` to detect DeepSeek models with reasoning enabled

Testing
Related Issues
Fixes #7370
Notes
This fix may also address similar issues with other models like GLM-4.5 mentioned in the issue comments, though specific testing for GLM-4.5 would be needed to confirm.
Feedback
This PR attempts to address Issue #7370. Feedback and guidance are welcome!
Important
Fixes DeepSeek V3 models to display reasoning content when reasoning effort is enabled by updating detection logic and using R1 format.
- Updated `OpenAiHandler` in `openai.ts` to detect DeepSeek models with reasoning enabled and use R1 format.
- Updated `openai.spec.ts` to verify DeepSeek V3 and DeepSeek-chat models use R1 format and pass `reasoning_effort` correctly.

This description was created by for 92c1e3b.