
fix(volcengine): miss </think> in multiple think round by DeepSeek v3.1 thinking mode#2195

Merged
crazywoola merged 5 commits into langgenius:main from leslie2046:fix-think on Dec 8, 2025

Conversation

@leslie2046 (Contributor) commented Dec 7, 2025

Related Issues or Context

fixes #2185

before:

<think>
The user is asking for the current time, so I need to use an available tool to get accurate time information. The available tool is current_time, which returns the current time. It takes no parameters and can be called directly. I will use it to get the current time and then reply to the user clearly.<think>
The current time is December 7, 2025, 14:29:34.
</think>The current time is: December 7, 2025, 14:29:34
(screenshot)

after:

<think>
The user is asking for the current time, so I need to use the current_time tool to get accurate time information. The tool takes no parameters and can be called directly.

After calling the current_time tool, it will return the current time data, which I can present to the user in a clear, easy-to-read way, for example the specific date and time, possibly including time zone information.

Since this is a simple query, no extra processing or explanation is needed; I can return the time provided by the tool directly.
</think><think>
The current time is December 7, 2025, 10:31:06 PM.

I can tell the user this exact time directly.
</think>The current time is: **December 7, 2025, 22:31:06**
(screenshot)
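The regression shown above is easy to detect mechanically: every `<think>` must be closed before the answer text begins. A minimal balance check, assuming the fully assembled streamed text is available as a single string (the helper name here is hypothetical and not part of the PR):

```python
def think_tags_balanced(text: str) -> bool:
    """Return True if every <think> has a matching, properly ordered </think>."""
    depth = 0
    i = 0
    while i < len(text):
        if text.startswith("<think>", i):
            depth += 1
            i += len("<think>")
        elif text.startswith("</think>", i):
            if depth == 0:
                # Closing tag with no open tag before it.
                return False
            depth -= 1
            i += len("</think>")
        else:
            i += 1
    # Unbalanced if any <think> was never closed.
    return depth == 0
```

Run against the "before" output (which opens a second `<think>` instead of closing the first), this check fails; against the "after" output, it passes.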

This PR contains Changes to Non-Plugin

  • Documentation
  • Other

This PR contains Changes to Non-LLM Models Plugin

  • I have Run Comprehensive Tests Relevant to My Changes

This PR contains Changes to LLM Models Plugin

  • My Changes Affect Message Flow Handling (System Messages and User→Assistant Turn-Taking)
  • My Changes Affect Tool Interaction Flow (Multi-Round Usage and Output Handling, for both Agent App and Agent Node)
  • My Changes Affect Multimodal Input Handling (Images, PDFs, Audio, Video, etc.)
  • My Changes Affect Multimodal Output Generation (Images, Audio, Video, etc.)
  • My Changes Affect Structured Output Format (JSON, XML, etc.)
  • My Changes Affect Token Consumption Metrics
  • My Changes Affect Other LLM Functionalities (Reasoning Process, Grounding, Prompt Caching, etc.)
  • Other Changes (Add New Models, Fix Model Parameters etc.)

Version Control (Any Changes to the Plugin Will Require Bumping the Version)

  • I have Bumped Up the Version in Manifest.yaml (Top-Level Version Field, Not in Meta Section)

Dify Plugin SDK Version

  • I have Ensured dify_plugin>=0.3.0,<0.6.0 is in requirements.txt (SDK docs)

Environment Verification (If Any Code Changes)

Local Deployment Environment

  • Dify Version is: , I have Tested My Changes on Local Deployment Dify with a Clean Environment That Matches the Production Configuration.

SaaS Environment

  • I have Tested My Changes on cloud.dify.ai with a Clean Environment That Matches the Production Configuration

@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Dec 7, 2025
@gemini-code-assist (Contributor)

Summary of Changes

Hello @leslie2046, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a formatting bug in the Volcengine MAAS plugin, specifically in DeepSeek v3.1 thinking mode. The core issue was the occasional omission of the `</think>` closing tag, which could produce malformed output when the LLM emitted multiple rounds of reasoning. By introducing a dedicated helper function to manage the think tags, the change ensures that reasoning content is always correctly encapsulated. Additionally, the volcengine-python-sdk dependency has been updated to its latest patch version, and the plugin's version has been incremented to reflect these improvements.

Highlights

  • Thinking Mode Output Fix: Corrected an issue where the `</think>` closing tag was sometimes missing from DeepSeek v3.1 thinking-mode output, ensuring reasoning content is properly delimited.
  • New Reasoning Content Wrapper: Introduced a new internal utility function, _wrap_thinking_by_reasoning_content, to manage the opening and closing `<think>` tags based on the presence of reasoning content in LLM streaming responses.
  • Dependency Update: Updated the volcengine-python-sdk dependency from version 4.0.29 to 4.0.39.
  • Plugin Version Bump: Incremented the Volcengine MAAS plugin version from 0.0.36 to 0.0.37 in manifest.yaml.

@dosubot dosubot bot added the bug Something isn't working label Dec 7, 2025
@gemini-code-assist (bot) left a comment

Code Review

This pull request addresses an issue with missing </think> tags in the Volcengine provider's thinking mode by introducing new logic to wrap reasoning content. The changes look mostly correct and should fix the reported problem. I've identified a potential data loss bug where response content might be discarded during the reasoning phase, and also suggested a minor simplification to make the new code more readable. The dependency and manifest version bumps are appropriate for this fix.

@leslie2046 (Contributor, Author)
CC @hjlarry, could you please take a look?

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Dec 8, 2025
@crazywoola crazywoola merged commit 1749ee5 into langgenius:main Dec 8, 2025
5 checks passed
@leslie2046 leslie2046 deleted the fix-think branch December 8, 2025 02:29

Labels

  • bug: Something isn't working
  • lgtm: This PR has been approved by a maintainer
  • size:M: This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Answer will be all thinking content without an answer, if thinking is enabled for DeepSeek v3.1 (Volcengine)

2 participants