feat: implement Minimax provider using Anthropic SDK #9084
Conversation
- Replaced OpenAI SDK with Anthropic SDK for Minimax provider
- Updated `MiniMaxHandler` to extend `BaseProvider` and implement Anthropic message streaming
- Added support for prompt caching when available
- Updated tests to mock Anthropic SDK instead of OpenAI
- Maintained backward compatibility with existing API

Addresses #9082
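For readers unfamiliar with the Anthropic SDK's streaming interface, here is a minimal hedged sketch of the pattern such a handler builds on. The SDK calls are real; the model id is a placeholder, and this is not the PR's actual code:

```typescript
import { Anthropic } from "@anthropic-ai/sdk"

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY })

// With stream: true, messages.create resolves to an async iterable of
// raw server events rather than a single completed message.
const stream = await client.messages.create({
	model: "placeholder-model-id", // hypothetical; the handler supplies the Minimax model id
	max_tokens: 1024,
	messages: [{ role: "user", content: "Hello" }],
	stream: true,
})

for await (const event of stream) {
	if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
		process.stdout.write(event.delta.text)
	}
}
```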
Review complete. Found 2 code quality improvements to align with established patterns.
Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.
```typescript
if (inputTokens > 0 || outputTokens > 0 || cacheWriteTokens > 0 || cacheReadTokens > 0) {
	// MiniMax pricing (per million tokens):
	// Input: $0.3, Output: $1.2, Cache writes: $0.375, Cache reads: $0.03
	const inputCost = (inputTokens / 1_000_000) * (info.inputPrice || 0)
	const outputCost = (outputTokens / 1_000_000) * (info.outputPrice || 0)
	const cacheWriteCost = (cacheWriteTokens / 1_000_000) * (info.cacheWritesPrice || 0)
	const cacheReadCost = (cacheReadTokens / 1_000_000) * (info.cacheReadsPrice || 0)
	const totalCost = inputCost + outputCost + cacheWriteCost + cacheReadCost

	yield {
		type: "usage",
		inputTokens: 0,
		outputTokens: 0,
		totalCost,
	}
}
```
This duplicates cost calculation logic that exists in `calculateApiCostAnthropic()`. The Anthropic provider uses this centralized function to ensure consistency across providers. This manual calculation makes the code harder to maintain if pricing logic changes.
Suggested change:

```typescript
if (inputTokens > 0 || outputTokens > 0 || cacheWriteTokens > 0 || cacheReadTokens > 0) {
	const { totalCost } = calculateApiCostAnthropic(
		this.getModel().info,
		inputTokens,
		outputTokens,
		cacheWriteTokens,
		cacheReadTokens,
	)

	yield {
		type: "usage",
		inputTokens: 0,
		outputTokens: 0,
		totalCost,
	}
}
```
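For context, a hedged sketch of what such a centralized cost helper might look like; the real `calculateApiCostAnthropic` in the codebase may differ in shape, and the `ModelInfo` fields below are inferred from the snippet above rather than taken from the repository:

```typescript
interface ModelInfo {
	inputPrice?: number
	outputPrice?: number
	cacheWritesPrice?: number
	cacheReadsPrice?: number
}

// Sketch: one place to convert token counts into dollars, so per-provider
// handlers never duplicate pricing arithmetic.
function calculateApiCostAnthropic(
	info: ModelInfo,
	inputTokens: number,
	outputTokens: number,
	cacheWriteTokens = 0,
	cacheReadTokens = 0,
): { totalCost: number } {
	const cost = (tokens: number, pricePerMillion?: number) => (tokens / 1_000_000) * (pricePerMillion || 0)
	const totalCost =
		cost(inputTokens, info.inputPrice) +
		cost(outputTokens, info.outputPrice) +
		cost(cacheWriteTokens, info.cacheWritesPrice) +
		cost(cacheReadTokens, info.cacheReadsPrice)
	return { totalCost }
}
```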
```typescript
const message = await this.client.messages.create({
	model,
	max_tokens: 16384,
	// …
```
In `completePrompt`, `max_tokens` is hardcoded to 16384. Consider using the model's `maxTokens` from `getModel()` for consistency (or add a comment if intentional).
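A hedged sketch of the suggested direction, written as a fragment in the handler's `completePrompt` context and assuming `getModel()` exposes the model's `maxTokens` on its `info` object, as elsewhere in this review:

```typescript
const { id: model, info } = this.getModel()

const message = await this.client.messages.create({
	model,
	// Prefer the model's configured limit; fall back only if metadata omits it.
	max_tokens: info.maxTokens ?? 16384,
	messages: [{ role: "user", content: prompt }],
})
```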
```typescript
const params = getModelParams({
	format: "anthropic",
	modelId: id,
	model: info,
	settings: this.options,
})

return {
	id,
	info,
	...params,
	temperature: this.options.modelTemperature ?? MINIMAX_DEFAULT_TEMPERATURE,
}
}
```
Temperature should be handled by passing `defaultTemperature` to `getModelParams()` rather than overriding after spreading params. This bypasses the centralized parameter handling logic and is inconsistent with other providers like `AnthropicHandler`.
Suggested change:

```typescript
const params = getModelParams({
	format: "anthropic",
	modelId: id,
	model: info,
	settings: this.options,
	defaultTemperature: MINIMAX_DEFAULT_TEMPERATURE,
})

return {
	id,
	info,
	...params,
}
```
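To make the ordering concern concrete, a small hypothetical illustration; the values are invented, not the codebase's actual logic:

```typescript
// Suppose the centralized helper deliberately pinned temperature for this model.
const params = { temperature: 1 }
const modelTemperature: number | undefined = undefined // user set nothing

const overridden = { ...params, temperature: modelTemperature ?? 0.7 } // => 0.7, pin silently lost
const centralized = { ...params } // => 1, the helper's decision survives
```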
Implementation is OpenAI-compatible, not Anthropic SDK
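The distinction shows up at client construction: an Anthropic-SDK integration points the SDK at the provider's Anthropic-compatible endpoint via `baseURL`, rather than wrapping an OpenAI-compatible client. A hedged sketch; the base URL is a placeholder, not taken from this PR:

```typescript
import { Anthropic } from "@anthropic-ai/sdk"

// Sketch: the Anthropic SDK accepts a custom baseURL, which is how a
// handler would target a third-party Anthropic-compatible API.
const client = new Anthropic({
	apiKey: process.env.MINIMAX_API_KEY,
	baseURL: "https://example.invalid/anthropic", // hypothetical; use the provider's documented URL
})
```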
Hi @daniel-lxs |
Description
This PR addresses Issue #9082 by migrating the Minimax provider from the OpenAI SDK to the Anthropic SDK, as requested. According to Minimax's documentation, their models achieve higher accuracy when integrated with the Anthropic SDK.
Changes
Technical Details
The new implementation:
Testing
References
Review Notes
The implementation follows the same patterns as the existing Anthropic provider in the codebase, ensuring consistency and maintainability.
Important
Migrates Minimax provider to Anthropic SDK, adding support for Anthropic-specific features and updating tests.
- Migrates the Minimax provider to the Anthropic SDK in `minimax.ts`.
- Updated tests in `minimax.spec.ts` to mock the Anthropic SDK.
- `MiniMaxHandler` class in `minimax.ts` now uses the Anthropic SDK for message creation and token counting.
- `createMessage` handles prompt caching and reasoning blocks.
- `completePrompt` and `countTokens` methods updated for the Anthropic SDK (see the sketch below).
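For the token-counting change, a hedged sketch of the SDK call involved, reusing a client constructed as above. `client.messages.countTokens` is a real Anthropic SDK method; whether the Minimax endpoint supports it is an assumption here:

```typescript
// Sketch: counting tokens through the Anthropic SDK instead of a local tokenizer.
const result = await client.messages.countTokens({
	model: "placeholder-model-id", // hypothetical
	messages: [{ role: "user", content: "Hello" }],
})
console.log(result.input_tokens)
```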