
Conversation


@roomote roomote bot commented Nov 4, 2025

This PR attempts to address Issue #9012. Feedback and guidance are welcome.

Problem

GLM-4.6 model reasoning was not working correctly in v3.30.0:

  • Z AI provider's GLM-4.6 could not properly enable reasoning
  • OpenAI Compatible provider's GLM-4.6 exposed a reasoning option but never displayed the reasoning content, and the reasoning duration always showed 0

Solution

Added GLM-4.6-specific reasoning support by detecting when:

  • The model ID includes "glm-4.6"
  • Reasoning is enabled (enableReasoningEffort: true)
  • The model supports binary reasoning (supportsReasoningBinary: true)

When these conditions are met, the thinking: { type: "enabled" } parameter is added to the API request, which is the vendor-specific format required by Z AI's GLM models.
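A minimal sketch of that gating, assuming the option names from the description above (modelId, enableReasoningEffort, supportsReasoningBinary); the function name is hypothetical and this is not the actual OpenAiHandler code:

```typescript
// Hypothetical sketch of the GLM-4.6 gating described above.
// Returns a spreadable object so the request body stays unchanged
// for models that do not match.
interface GlmReasoningOptions {
  modelId: string
  enableReasoningEffort?: boolean
  supportsReasoningBinary?: boolean
}

function glm46ThinkingParam(opts: GlmReasoningOptions): { thinking?: { type: "enabled" } } {
  const isGlm46 = opts.modelId.toLowerCase().includes("glm-4.6")
  // Only emit Z AI's vendor-specific `thinking` parameter when all
  // three conditions from the PR description hold.
  if (isGlm46 && opts.enableReasoningEffort && opts.supportsReasoningBinary) {
    return { thinking: { type: "enabled" } }
  }
  return {}
}
```

Spreading the result (`{ ...glm46ThinkingParam(opts) }`) into the request body adds the parameter only when all three conditions are met.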

Changes

  • Added GLM-4.6 detection logic in the OpenAI handler
  • Added thinking parameter for both streaming and non-streaming modes
  • Added comprehensive test coverage for GLM-4.6 reasoning scenarios
  • Added explanatory comment for future maintainers

Testing

  • All existing tests pass
  • Added 4 new test cases covering:
    • GLM-4.6 with reasoning enabled (streaming)
    • GLM-4.6 with reasoning disabled (streaming)
    • GLM-4.6 with reasoning enabled (non-streaming)
    • Non-GLM-4.6 models with reasoning enabled (should not add thinking parameter)
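Illustratively, the four scenarios reduce to checks like the following against a stand-in for the request construction; this is a sketch under the assumptions above, not the actual openai.spec.ts, and all names are hypothetical:

```typescript
// Stand-in for the streaming / non-streaming request extras; the real
// handler builds much larger bodies, but the `thinking` gating is the same.
function requestExtras(modelId: string, reasoningEnabled: boolean, supportsBinary: boolean, stream: boolean) {
  const glm46Thinking = modelId.includes("glm-4.6") && reasoningEnabled && supportsBinary
  return {
    ...(stream ? { stream: true, stream_options: { include_usage: true } } : {}),
    ...(glm46Thinking ? { thinking: { type: "enabled" } } : {}),
  }
}

// The four scenarios listed above:
console.assert("thinking" in requestExtras("glm-4.6", true, true, true))     // enabled, streaming
console.assert(!("thinking" in requestExtras("glm-4.6", false, true, true))) // disabled, streaming
console.assert("thinking" in requestExtras("glm-4.6", true, true, false))    // enabled, non-streaming
console.assert(!("thinking" in requestExtras("gpt-4o", true, true, true)))   // non-GLM model
```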

Fixes #9012


Important

Adds GLM-4.6 reasoning support to the OpenAI Compatible provider by introducing a thinking parameter in OpenAiHandler.

  • Behavior:
    • Adds GLM-4.6 reasoning support in OpenAiHandler by checking for glm-4.6 in model ID, enableReasoningEffort: true, and supportsReasoningBinary: true.
    • Adds thinking: { type: "enabled" } parameter to API requests when conditions are met.
  • Testing:
    • Adds 4 new test cases in openai.spec.ts for GLM-4.6 reasoning scenarios, covering both streaming and non-streaming modes.
    • Ensures thinking parameter is not added for non-GLM-4.6 models even with reasoning enabled.

This description was created by Ellipsis for c0d0e13.

- Added logic to detect GLM-4.6 models and enable thinking parameter when reasoning is enabled
- The thinking parameter is now properly added for both streaming and non-streaming modes
- Added comprehensive tests to verify GLM-4.6 reasoning functionality
- This fixes the issue where GLM-4.6 reasoning was not working with OpenAI Compatible provider

Fixes #9012
Added a comment explaining why GLM-4.6 uses the thinking parameter instead of reasoning_effort, for better maintainability
@roomote roomote bot requested review from cte, jr and mrubens as code owners November 4, 2025 10:42
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Nov 4, 2025

roomote bot commented Nov 4, 2025

See this task on Roo Code Cloud

Review completed. Found 1 issue that should be addressed before merging.

  • Fix potential parameter conflict between thinking and reasoning_effort parameters

Mention @roomote in a comment to trigger your PR Fixer agent and make changes to this pull request.

@dosubot dosubot bot added the bug Something isn't working label Nov 4, 2025
@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Nov 4, 2025
The review comment below is anchored on this excerpt from the request construction in the OpenAI handler:

messages: convertedMessages,
stream: true as const,
...(isGrokXAI ? {} : { stream_options: { include_usage: true } }),
...(reasoning && reasoning),

Potential parameter conflict: The thinking parameter (added for GLM-4.6 on lines 178-180) could coexist with reasoning_effort from the spread reasoning object on this line. If a GLM-4.6 model has both supportsReasoningBinary: true and supportsReasoningEffort: true configured, both parameters would be sent to the API. Since the PR description states GLM-4.6 uses thinking instead of reasoning_effort, consider excluding the reasoning object when adding the thinking parameter to prevent both from being present simultaneously.
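One possible shape of the suggested fix, sketched with hypothetical names (buildReasoningExtras is not the real code): when the GLM-4.6 thinking parameter applies, skip spreading the reasoning object so reasoning_effort is never sent alongside thinking.

```typescript
// Sketch of the reviewer's suggestion; all names are assumptions.
type ReasoningSpread = { reasoning_effort?: string } | undefined

function buildReasoningExtras(
  useGlm46Thinking: boolean,
  reasoning: ReasoningSpread,
): { thinking?: { type: "enabled" }; reasoning_effort?: string } {
  if (useGlm46Thinking) {
    // GLM-4.6 uses `thinking` instead of `reasoning_effort`, so the
    // reasoning spread is deliberately dropped in this branch.
    return { thinking: { type: "enabled" } }
  }
  return { ...(reasoning ?? {}) }
}
```

This makes the two parameters mutually exclusive even if a model is configured with both supportsReasoningBinary and supportsReasoningEffort.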

Fix it with Roo Code or mention @roomote and request a fix.


Labels

  • bug (Something isn't working)
  • Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.)
  • size:L (This PR changes 100-499 lines, ignoring generated files.)

Projects

Status: Triage

3 participants