fix: handle <think> tags in OpenAI-compatible provider responses #7617
Conversation
- Add XmlMatcher to parse `<think>` tags in streaming responses
- Convert `<think>` content to reasoning chunks for proper UI rendering
- Add comprehensive tests for various `<think>` tag scenarios
- Fix position parameter to allow matching tags anywhere in content

Fixes #7615
Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.
```typescript
	type: chunk.matched ? "reasoning" : "text",
	text: chunk.data,
}) as const,
Number.MAX_SAFE_INTEGER,
```
I notice that we're using Number.MAX_SAFE_INTEGER here to match tags anywhere in the content, but other providers like openai.ts and ollama.ts don't specify a position parameter at all (using the default of 0). Should we standardize this approach across all providers for consistency? Or is there a specific reason why BaseOpenAiCompatibleProvider needs different behavior?
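To make the position question concrete, here is a minimal standalone sketch of how such a parameter could behave, assuming it is the furthest index at which an opening tag may start. XmlMatcher's real semantics may differ, and `splitThink` is a hypothetical helper, not Roo Code's API:

```typescript
// Hypothetical sketch: a tag only becomes a reasoning boundary if its
// opening "<think>" starts at or before `position`. With the default of 0,
// a tag appearing mid-content is treated as plain text.
type Chunk = { type: "reasoning" | "text"; text: string };

function splitThink(content: string, position: number): Chunk[] {
	const open = "<think>";
	const close = "</think>";
	const start = content.indexOf(open);
	// Tag absent, or it starts past the allowed position: emit plain text.
	if (start === -1 || start > position) return [{ type: "text", text: content }];
	const end = content.indexOf(close, start + open.length);
	if (end === -1) return [{ type: "text", text: content }];
	const chunks: Chunk[] = [];
	if (start > 0) chunks.push({ type: "text", text: content.slice(0, start) });
	chunks.push({ type: "reasoning", text: content.slice(start + open.length, end) });
	const rest = content.slice(end + close.length);
	if (rest) chunks.push({ type: "text", text: rest });
	return chunks;
}

const input = "Answer: <think>check units</think> 42";
// With the default position 0 the mid-content tag is ignored...
console.log(splitThink(input, 0));
// ...while Number.MAX_SAFE_INTEGER matches it anywhere.
console.log(splitThink(input, Number.MAX_SAFE_INTEGER));
```

Under this reading, the default of 0 in openai.ts and ollama.ts only matches tags at the very start of the response, which is why standardizing would change behavior.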
```typescript
afterEach(() => {
	vi.restoreAllMocks()
})
```
Great comprehensive test coverage! However, I notice we don't have a specific test that verifies the position parameter allows matching tags anywhere in the content (not just at the beginning). Consider adding a test case that validates tags are matched when they appear after position 0 in the content.
```typescript
const stream = await this.createStream(systemPrompt, messages, metadata)

// Initialize XmlMatcher to parse <think>...</think> tags
// Set position to MAX_SAFE_INTEGER to match tags anywhere in the content
```
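As a hedged sketch of what such a streaming matcher could look like internally (this is not Roo Code's actual XmlMatcher; the class name, buffering strategy, and chunk shape here are illustrative):

```typescript
// Hypothetical minimal streaming matcher. Key idea: buffer deltas, flip
// between text/reasoning on tag boundaries, and hold back any suffix that
// could still turn out to be the start of a tag split across chunks.
type Out = { type: "reasoning" | "text"; text: string };

const OPEN = "<think>";
const CLOSE = "</think>";

class ThinkMatcher {
	private buf = "";
	private inThink = false;

	update(delta: string): Out[] {
		this.buf += delta;
		const out: Out[] = [];
		for (;;) {
			const tag = this.inThink ? CLOSE : OPEN;
			const idx = this.buf.indexOf(tag);
			if (idx === -1) break;
			if (idx > 0) out.push({ type: this.kind(), text: this.buf.slice(0, idx) });
			this.buf = this.buf.slice(idx + tag.length);
			this.inThink = !this.inThink;
		}
		// Keep back a suffix that is a prefix of the next expected tag.
		const hold = this.pendingPrefixLen();
		const safe = this.buf.slice(0, this.buf.length - hold);
		if (safe) {
			out.push({ type: this.kind(), text: safe });
			this.buf = this.buf.slice(safe.length);
		}
		return out;
	}

	// Flush whatever is left once the stream ends.
	final(): Out[] {
		const out: Out[] = this.buf ? [{ type: this.kind(), text: this.buf }] : [];
		this.buf = "";
		return out;
	}

	private kind(): Out["type"] {
		return this.inThink ? "reasoning" : "text";
	}

	private pendingPrefixLen(): number {
		const tag = this.inThink ? CLOSE : OPEN;
		for (let n = Math.min(tag.length - 1, this.buf.length); n > 0; n--) {
			if (tag.startsWith(this.buf.slice(this.buf.length - n))) return n;
		}
		return 0;
	}
}

// Tags split across chunk boundaries still resolve correctly:
const m = new ThinkMatcher();
const chunks = ["Hello <thi", "nk>reasoning he", "re</think> done"];
const out = chunks.flatMap((c) => m.update(c)).concat(m.final());
console.log(out);
```

The hold-back step is what keeps a `<thi` at the end of one delta from being emitted as plain text before the `nk>` arrives in the next.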
The comment explains the purpose well, but could we make it clearer why we use Number.MAX_SAFE_INTEGER specifically? Maybe consider using a named constant like MATCH_ANYWHERE_POSITION or explain why this value over alternatives like Infinity?
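One way to act on this suggestion, as a sketch (the constant name is the reviewer's hypothetical, not existing code): name the sentinel and document the choice. A plausible reason to prefer `Number.MAX_SAFE_INTEGER` over `Infinity` is that it stays an ordinary integer, whereas `Infinity` is truncated to 0 by bitwise operations and is not a valid array index:

```typescript
// Hypothetical named constant per the review suggestion above.
// MAX_SAFE_INTEGER (2^53 - 1) is preferred over Infinity because it behaves
// as a regular integer in comparisons, indexing, and bitwise arithmetic.
const MATCH_ANYWHERE_POSITION = Number.MAX_SAFE_INTEGER;

console.log(MATCH_ANYWHERE_POSITION === 2 ** 53 - 1); // true
console.log(Infinity | 0); // 0 — why Infinity is risky as a position sentinel
```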
```typescript
	vi.restoreAllMocks()
})

describe("<think> tag handling", () => {
```
Consider organizing these tests into nested describe blocks for better structure. For example:
- "edge cases" (empty tags, nested tags)
- "streaming scenarios" (partial tags across chunks)
- "basic functionality" (single tags, multiple tags)
This would make the test suite easier to navigate as it grows.
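The suggested grouping could look like the skeleton below. Tiny stand-in `describe`/`it` functions are included so the sketch runs standalone; the real suite would import them from vitest, and the test names are illustrative:

```typescript
// Stand-in shims so this structural sketch runs without vitest.
const names: string[] = [];
function describe(name: string, fn: () => void) { names.push(name); fn(); }
function it(name: string, _fn: () => void) { names.push(name); }

describe("<think> tag handling", () => {
	describe("basic functionality", () => {
		it("parses a single <think> block", () => {});
		it("parses multiple <think> blocks", () => {});
	});
	describe("streaming scenarios", () => {
		it("handles tags split across chunks", () => {});
	});
	describe("edge cases", () => {
		it("handles empty <think></think> tags", () => {});
		it("handles nested <think> tags", () => {});
	});
});

console.log(names.length); // 9 registered blocks and tests
```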
Closing, see #7615 (comment)
Summary

This PR fixes the rendering of `<think>` tags in OpenAI-compatible mode. Previously, these tags were displayed as plain text instead of being collapsed into a "Thinking..." spoiler as expected.

Problem

When using Roo Code in OpenAI-compatible mode with models that output `<think>` tags (like gemini-2.5-flash or gemini-2.5-pro), the thinking content was shown as normal text without the `<think>` tag, breaking the intended UI behavior.

Solution

- Parse `<think>` tags in streaming responses from OpenAI-compatible providers
- Convert `<think>` content to reasoning chunks for proper UI rendering

Testing

Added a comprehensive test suite with 10 tests covering `<think>` tag parsing, nested tags, partial tags, content without tags, multiple `<think>` blocks, and empty tags.

Impact

This fix benefits all 8 providers that extend BaseOpenAiCompatibleProvider.

Fixes #7615
Important

Fixes `<think>` tag rendering in OpenAI-compatible mode by using `XmlMatcher` to parse and convert them into reasoning chunks.

- Fixes rendering of `<think>` tags in OpenAI-compatible mode by collapsing them into "Thinking..." spoilers.
- Uses `XmlMatcher` in `BaseOpenAiCompatibleProvider` to parse `<think>` tags and convert them to reasoning chunks.
- Fixes the `XmlMatcher` position parameter to match tags anywhere in content.
- Adds `base-openai-compatible-provider.spec.ts` with 10 tests for `<think>` tag parsing, nested tags, partial tags, content without tags, multiple blocks, and empty tags.
- Affects all providers extending `BaseOpenAiCompatibleProvider`, including `FireworksHandler`, `IOIntelligenceHandler`, and `ChutesHandler`.

This description was created by for 0cd73fd. It will automatically update as commits are pushed.