Description
Self Checks
- I have searched for existing issues, including closed ones.
- I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [FOR CHINESE USERS] Please be sure to submit issues in English; otherwise they will be closed. Thank you! :)
- Please do not modify this template :) and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell me about your story.
I'm building a chatflow where multiple LLMs are chained together. One model (e.g. Qwen-3.0) emits its reasoning inside <think>reasoning process...</think> blocks, while another model (e.g. Gemini-2.0) does not. When the flow continues to the next node (code executor, tool parser, etc.), the raw <think></think> tags and reasoning blocks cause parsing errors and unpredictable behavior. Maintaining ad-hoc regex cleansers after every LLM node is fragile and error-prone (see the sketch below).
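To illustrate the failure mode, here is a hedged sketch (the JSON payload and regex are illustrative, not taken from Dify) of what happens today when a downstream node expects clean JSON, plus the ad-hoc cleanup each flow has to repeat:

```python
import json
import re

# Illustrative output from a reasoning model: the <think> block is prepended
# to the actual answer, so a downstream node expecting plain JSON fails.
raw_output = '<think>Work out the schema first...</think>\n{"answer": 42}'

try:
    json.loads(raw_output)                      # raises json.JSONDecodeError
except json.JSONDecodeError as exc:
    print(f"parse error: {exc}")

# The ad-hoc regex cleanser that currently has to follow every reasoning-model node:
cleaned = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL).strip()
print(json.loads(cleaned))                      # {'answer': 42}
```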
2. Additional context or comments
I propose adding a single environment variable, LLM_NODE_THINKING_TAGS_ENABLED. With the default value (true), Dify keeps behaving exactly as it does today and the reasoning blocks remain visible. When set to false, the LLMNode strips every <think></think> reasoning segment before the text reaches the next node, so mixed-model flows parse cleanly. This addresses both sides of the long-running discussion: people who need the reasoning can keep it, and anyone who just wants a clean answer can switch it off with no code changes. A minimal sketch of the intended behavior follows below.
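For concreteness, a minimal sketch of how the flag could work, assuming a small helper inside the LLMNode; the function name strip_thinking_tags and the exact integration point are hypothetical, not existing Dify API:

```python
import os
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking_tags(text: str) -> str:
    """Hypothetical helper gating output cleanup on LLM_NODE_THINKING_TAGS_ENABLED."""
    # Default (unset or "true"): keep today's behavior, reasoning stays visible.
    if os.getenv("LLM_NODE_THINKING_TAGS_ENABLED", "true").lower() != "false":
        return text
    # "false": drop every <think>...</think> segment before the text is passed
    # to the next node in the flow.
    return THINK_BLOCK.sub("", text).strip()

# Example with LLM_NODE_THINKING_TAGS_ENABLED=false:
#   strip_thinking_tags("<think>chain of thought...</think>The answer is 42.")
#   -> "The answer is 42."
```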
Fixes #13419
Related: #13107, #18556
3. Can you help us with this feature?
- I am interested in contributing to this feature.