
feat: add MiniMax provider support #744

Open

octo-patch wants to merge 1 commit into google:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Description

This PR adds a new model/minimax package that implements the model.LLM interface for MiniMax models using the OpenAI-compatible API.

Motivation

MiniMax is a leading AI provider offering high-performance models. Adding native support enables ADK users to build agents powered by MiniMax models with the same clean interface used for Gemini.

Changes

  • New package google.golang.org/adk/model/minimax implementing model.LLM
  • Supports MiniMax-M2.7 (default) and MiniMax-M2.7-highspeed models
  • Converts genai.Content/genai.Part types to/from OpenAI message format (a minimal sketch follows this list), handling:
    • Text parts
    • Function calls → OpenAI tool_calls
    • Function responses → OpenAI tool messages
    • System instructions from GenerateContentConfig
  • Converts genai.Schema to OpenAI-compatible JSON Schema for function declarations
  • Clamps temperature to MiniMax's valid range (0.0, 1.0]
  • Reads API key from MINIMAX_API_KEY environment variable
  • No new external dependencies — uses only net/http and encoding/json
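
For orientation, here is a minimal sketch of the text-only leg of that conversion. The openAIMessage type and toOpenAIMessage name are stand-ins invented for illustration; the PR's internal types may differ.

import "google.golang.org/genai"

// openAIMessage is a hypothetical stand-in for the package's internal
// OpenAI-compatible message type; only the text fields are modeled here.
type openAIMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

// toOpenAIMessage sketches the text-part leg of the conversion: the genai
// "model" role maps to OpenAI's "assistant", and text parts are concatenated
// into the message content. Function calls and function responses would
// additionally map to tool_calls and tool messages, as the list above notes.
func toOpenAIMessage(c *genai.Content) openAIMessage {
    role := c.Role
    if role == "model" {
        role = "assistant"
    }
    var text string
    for _, p := range c.Parts {
        text += p.Text
    }
    return openAIMessage{Role: role, Content: text}
}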

Usage

import "google.golang.org/adk/model/minimax"

m, err := minimax.NewModel("MiniMax-M2.7")
// or with options:
m, err := minimax.NewModel("MiniMax-M2.7",
    minimax.WithAPIKey("your-api-key"),
    minimax.WithBaseURL("https://api.minimax.io"),
)

API Reference
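
(The body of this section did not survive the page capture. From the usage above, the exported surface appears to be roughly the following; treat it as a reconstruction, not the package's documented API.)

func NewModel(name string, opts ...Option) (model.LLM, error)
func WithAPIKey(key string) Option
func WithBaseURL(url string) Option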

Testing Plan

  • Unit tests cover: model creation, text generation, tool use, function call/response conversion, schema conversion, temperature clamping, error handling, and custom base URL (an example against a local stub server follows this list)
  • Integration test verified against the live MiniMax API: model returns correct text response with FinishReason: STOP
  • All 13 unit tests pass: go test ./model/minimax/... -v
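
A minimal sketch of the stub-server style of test the plan describes, standing up a fake endpoint with net/http/httptest and pointing the model at it via the PR's WithBaseURL option. The JSON fixture is an assumed OpenAI-style chat-completion payload, not the PR's actual test data.

package minimax_test

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "google.golang.org/adk/model/minimax"
)

func TestGenerateAgainstStub(t *testing.T) {
    // Fake OpenAI-compatible endpoint; the body is an assumed
    // chat-completion shape, not this PR's actual fixture.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"choices":[{"message":{"role":"assistant","content":"hi"},"finish_reason":"stop"}]}`))
    }))
    defer srv.Close()

    m, err := minimax.NewModel("MiniMax-M2.7",
        minimax.WithAPIKey("test-key"),
        minimax.WithBaseURL(srv.URL),
    )
    if err != nil {
        t.Fatal(err)
    }
    _ = m // call m.GenerateContent here and assert on the yielded response
}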

- Add MiniMax chat model provider using OpenAI-compatible API
- Support MiniMax-M2.7 and MiniMax-M2.7-highspeed models
- Convert genai Content/Part types to/from OpenAI message format
- Handle text, function call, and function response parts
- Convert genai.Schema to OpenAI-compatible JSON Schema
- Clamp temperature to valid MiniMax range (0.0, 1.0]
- Support system instructions via GenerateContentConfig
- Read API key from MINIMAX_API_KEY environment variable
- Add unit tests with mock HTTP transport

google-cla Bot commented Apr 20, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

gemini-code-assist Bot left a comment

Code Review

This pull request introduces a new MiniMax model implementation for the LLM interface, including support for OpenAI-compatible API requests, tool calling, and schema conversion. While the implementation provides a solid foundation, there are several critical issues to address. Specifically, the GenerateContent method currently lacks actual streaming support despite accepting a stream parameter, which will cause JSON parsing failures if streaming is enabled. Additionally, the temperature clamping logic incorrectly maps deterministic requests (0.0) to maximum randomness (1.0), and there is an unhandled error when unmarshaling tool call arguments in the response conversion logic.

Comment thread model/minimax/minimax.go
Comment on lines +136 to +141
func (m *minimaxModel) GenerateContent(ctx context.Context, req *model.LLMRequest, stream bool) iter.Seq2[*model.LLMResponse, error] {
    return func(yield func(*model.LLMResponse, error) bool) {
        resp, err := m.generate(ctx, req, stream)
        yield(resp, err)
    }
}

high

The current implementation of GenerateContent does not support streaming, although it accepts a stream parameter and returns an iterator. The internal generate method (line 219) performs a blocking io.ReadAll on the response body and returns a single *model.LLMResponse. For streaming to work, the response should be processed as a stream of Server-Sent Events (SSE), and each chunk should be yielded through the iterator. As it stands, if stream is set to true, the API will return an SSE stream which json.Unmarshal (line 254) will fail to parse.
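
A minimal sketch of the SSE loop the reviewer is asking for, assuming an OpenAI-style stream where each event line is "data: {json}" and the stream ends with "data: [DONE]". streamSSE and parseChunk are hypothetical names; parseChunk stands in for the JSON decoding the PR already performs on non-streaming responses.

import (
    "bufio"
    "io"
    "strings"

    "google.golang.org/adk/model"
)

// streamSSE reads the response body as Server-Sent Events and yields one
// decoded response per chunk, instead of a single blocking io.ReadAll.
func streamSSE(body io.Reader, parseChunk func(string) (*model.LLMResponse, error), yield func(*model.LLMResponse, error) bool) {
    scanner := bufio.NewScanner(body)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if !strings.HasPrefix(line, "data:") {
            continue // skip blank separators and keep-alive comments
        }
        payload := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
        if payload == "[DONE]" { // OpenAI-style end-of-stream sentinel
            return
        }
        resp, err := parseChunk(payload)
        if !yield(resp, err) || err != nil {
            return
        }
    }
    if err := scanner.Err(); err != nil {
        yield(nil, err)
    }
}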

Comment thread model/minimax/minimax.go
Comment on lines +287 to +288
if temp <= 0 {
    temp = 1.0

medium

Clamping a temperature of 0.0 to 1.0 is likely incorrect. In LLM APIs, 0.0 usually signifies a request for deterministic output. If MiniMax requires the temperature to be in the range (0.0, 1.0], mapping 0.0 to the maximum randomness (1.0) contradicts the typical user intent. It would be better to clamp 0.0 to a small positive value like 0.01.

Suggested change
  if temp <= 0 {
-     temp = 1.0
+     temp = 0.01
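
Spelled out as a full helper, the reviewer's suggestion amounts to something like the sketch below; clampTemperature is a name invented here, and the epsilon is the reviewer's suggested 0.01.

func clampTemperature(temp float32) float32 {
    if temp <= 0 {
        return 0.01 // approximate determinism with the smallest accepted value
    }
    if temp > 1 {
        return 1.0
    }
    return temp
}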

Comment thread model/minimax/minimax.go
for _, tc := range msg.ToolCalls {
    var args map[string]any
    if tc.Function.Arguments != "" {
        _ = json.Unmarshal([]byte(tc.Function.Arguments), &args)

medium

The error from json.Unmarshal when parsing tool call arguments is ignored. It is better to handle this error to ensure that malformed arguments from the API are caught and reported, rather than returning a FunctionCall with potentially nil or empty arguments.
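
Applied to the loop quoted above, the suggested handling looks roughly like this, assuming the enclosing conversion function returns an error alongside its result:

var args map[string]any
if tc.Function.Arguments != "" {
    if err := json.Unmarshal([]byte(tc.Function.Arguments), &args); err != nil {
        return nil, fmt.Errorf("parsing tool call arguments: %w", err)
    }
}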

kirankn8 added a commit to kirankn8/adk-go that referenced this pull request Apr 24, 2026
Rebuilt on upstream's Source-backed skill toolset (PRs google#744-google#747). Adds
features upstream lacks:

- code_executors/ package (BaseCodeExecutor interface, UnsafeLocalCodeExecutor)
- run_skill_script tool for executing scripts from a skill's scripts/ dir,
  wired via Config.Executor + Config.SkillsRoot (only exposed when both set)
- NewFileSystem helper for constructing a toolset from a directory on disk
- name/skill/skill_name aliases on load_skill
- path/resource_path aliases on load_skill_resource
- did_you_mean enrichment with Levenshtein ranking for mistyped skill names,
  resource paths, and script paths
- path normalization: collapse double prefixes (references/references/foo),
  wrong-prefix redirect (asset/ -> assets/), cross-bucket detection
- redirect codes: USE_LOAD_SKILL for SKILL.md, USE_LOAD_SKILL_RESOURCE for
  doc-like files passed to run_skill_script

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>