feat(providers): add MiniMax AI provider support #1028
Open
ximiximi423 wants to merge 4 commits into ItzCrazyKns:master from
Conversation
Add MiniMax as a new LLM provider, enabling users to use MiniMax models through the OpenAI-compatible API endpoint. Made-with: Cursor
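For context on what "OpenAI-compatible" means here, below is a minimal sketch of calling MiniMax through the official `openai` SDK. The base URL and model id are assumptions for illustration only, not values taken from this PR's diff (the real configuration lives under `src/lib/models/providers/minimax/`).

```typescript
import OpenAI from 'openai';

// Assumed endpoint and model id for illustration; verify against the diff.
const client = new OpenAI({
  apiKey: process.env.MINIMAX_API_KEY,
  baseURL: 'https://api.minimax.io/v1',
});

async function main() {
  const stream = await client.chat.completions.create({
    model: 'MiniMax-Text-01', // placeholder model id
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
  });

  // Print streamed deltas as they arrive.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}

main();
```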
Contributor
2 issues found across 3 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/lib/models/providers/minimax/minimaxLLM.ts">
<violation number="1" location="src/lib/models/providers/minimax/minimaxLLM.ts:75">
P2: Streaming think-tag stripping is not chunk-boundary safe: when a `</think>` is split across chunks, the partial delimiter is discarded and `insideThinkTag` stays true, suppressing all subsequent output.</violation>
<violation number="2" location="src/lib/models/providers/minimax/minimaxLLM.ts:86">
P2: streamText drops toolCallChunk updates when no visible output is emitted, so function-call deltas can be lost if a chunk contains only tool call data.</violation>
</file>
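To make the first finding concrete, here is a hedged sketch of chunk-boundary-safe stripping. The class and helper names are hypothetical, not taken from minimaxLLM.ts: the idea is to hold back any tail of the buffer that could be the start of a delimiter split across chunks, instead of discarding it.

```typescript
// Hypothetical names for illustration; the actual implementation in
// minimaxLLM.ts may differ. The filter removes <think>…</think> spans
// from a stream while tolerating delimiters split across chunks.
export class ThinkTagFilter {
  private buffer = '';
  private insideThink = false;

  // Feed one chunk; returns the visible text recovered from it.
  push(chunk: string): string {
    this.buffer += chunk;
    let output = '';

    while (this.buffer.length > 0) {
      const delim = this.insideThink ? '</think>' : '<think>';
      const idx = this.buffer.indexOf(delim);

      if (idx !== -1) {
        // Complete delimiter found: emit (or drop) everything before it.
        if (!this.insideThink) output += this.buffer.slice(0, idx);
        this.buffer = this.buffer.slice(idx + delim.length);
        this.insideThink = !this.insideThink;
        continue;
      }

      // No complete delimiter: hold back any tail that could be the
      // start of one split across chunks; emit or drop the rest.
      const hold = partialSuffixLength(this.buffer, delim);
      const safe = this.buffer.slice(0, this.buffer.length - hold);
      if (!this.insideThink) output += safe;
      this.buffer = this.buffer.slice(this.buffer.length - hold);
      break;
    }
    return output;
  }

  // Call at end of stream: emit the held-back tail; drop hidden text
  // if the stream ended inside an unterminated <think> block.
  flush(): string {
    const tail = this.insideThink ? '' : this.buffer;
    this.buffer = '';
    this.insideThink = false;
    return tail;
  }
}

// Longest suffix of `text` that is a proper prefix of `delim`.
function partialSuffixLength(text: string, delim: string): number {
  for (let len = Math.min(text.length, delim.length - 1); len > 0; len--) {
    if (delim.startsWith(text.slice(text.length - len))) return len;
  }
  return 0;
}
```

On the second finding, the fix is for the consuming loop to forward toolCallChunk deltas independently of whether the filtered text is empty; a loop sketch appears after the second review below.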
Since this is your first cubic review, here's how it works:
- cubic automatically reviews your code and comments on bugs and improvements
- Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
- Add one-off context when rerunning by tagging @cubic-dev-ai with guidance or docs links (including llms.txt)
- Ask questions if you need clarification on any suggestion
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
…olCallChunk
- Fix streaming think-tag stripping to handle tags split across chunk boundaries
- Preserve toolCallChunk updates when no visible text output is emitted
Made-with: Cursor
Contributor
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/lib/models/providers/minimax/minimaxLLM.ts">
<violation number="1" location="src/lib/models/providers/minimax/minimaxLLM.ts:89">
P2: Trailing content can be dropped because the buffered tail is never flushed when the stream ends; on `chunk.done`, the remaining buffer is discarded and the final characters are lost.</violation>
</file>
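A hedged sketch of the fix this finding calls for, continuing the hypothetical names from the sketch above: when the upstream completes, flush whatever the filter was still holding back so the final characters are emitted, and forward tool-call deltas before the text check so they are never dropped.

```typescript
// Field names like delta, toolCallChunk, and contentChunk are assumptions,
// not the actual streamText shapes in minimaxLLM.ts.
async function* filterStream(
  upstream: AsyncIterable<{ delta?: string; toolCallChunk?: unknown }>,
) {
  const filter = new ThinkTagFilter();
  for await (const chunk of upstream) {
    // Forward tool-call deltas even when the chunk yields no visible text.
    if (chunk.toolCallChunk) yield { toolCallChunk: chunk.toolCallChunk };
    const text = filter.push(chunk.delta ?? '');
    if (text) yield { contentChunk: text };
  }
  // The missing step this finding points at: emit the buffered tail
  // instead of discarding it when the stream ends.
  const tail = filter.flush();
  if (tail) yield { contentChunk: tail };
}
```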
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
xkonjin reviewed Mar 8, 2026
xkonjin left a comment
Quick review pass:
- Main risk area here is auth/session state and stale credential handling.
- I didn’t see targeted regression coverage in the diff; please add or point CI at a focused test for the changed paths in index.ts, index.ts, minimaxLLM.ts.
- Before merge, I’d smoke-test the behavior those files touch with malformed input / retry / rollback cases, since that’s where this class of change usually breaks.
- Fix trailing content being dropped when stream ends by flushing buffer
- Discard incomplete think tag content at stream end
- Add vitest testing framework with 18 unit tests covering:
  - Think tag stripping (single/multiple/multiline)
  - Chunk boundary handling for split tags
  - Buffer flushing on stream end
  - toolCallChunk preservation
  - Edge cases and error scenarios

Addresses review feedback from PR ItzCrazyKns#1028
Made-with: Cursor
Author
Thanks for the review @xkonjin! I have added the fix for buffer flushing on stream end and unit tests covering think-tag stripping, chunk boundary handling, and toolCallChunk preservation. Please review it when you have a moment.
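A hedged sample of what such vitest tests might look like, reusing the hypothetical filter from the sketches above (the PR's actual 18 tests and module paths may differ):

```typescript
import { describe, expect, it } from 'vitest';
import { ThinkTagFilter } from './thinkTagFilter'; // hypothetical module path

describe('think-tag stripping', () => {
  it('strips a tag split across chunk boundaries', () => {
    const filter = new ThinkTagFilter();
    let out = filter.push('<think>hidden</th');
    out += filter.push('ink>visible');
    out += filter.flush();
    expect(out).toBe('visible');
  });

  it('flushes a held-back partial delimiter at stream end', () => {
    const filter = new ThinkTagFilter();
    const out = filter.push('trailing<') + filter.flush();
    expect(out).toBe('trailing<');
  });

  it('drops unterminated think content at stream end', () => {
    const filter = new ThinkTagFilter();
    const out = filter.push('<think>never closed') + filter.flush();
    expect(out).toBe('');
  });
});
```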
Add MiniMax as a new LLM provider, enabling users to use MiniMax models through the OpenAI-compatible API endpoint.
Summary by cubic
Added MiniMax as a new chat provider via an OpenAI-compatible endpoint. Streaming now strips <think> tags across chunk boundaries, flushes buffered output on stream end, and preserves tool call updates; added vitest tests with coverage.

New Features
- Added vitest and @vitest/coverage-v8 with test scripts (test, test:run, test:coverage) and 18 unit tests for streaming logic.

Bug Fixes
- Fixed think-tag stripping across chunk boundaries, buffered-tail flushing at stream end, and toolCallChunk preservation during streaming.

Written for commit 960b426. Summary will update on new commits.