feat: add MiniMax as a model provider #2717

Open

octo-patch wants to merge 1 commit into ModelEngine-Group:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class model provider in nexent, following the established provider pattern.

What is MiniMax?

MiniMax is an AI company offering LLM and embedding models:

  • MiniMax-M2.7: Latest flagship model with 1M token context window
  • MiniMax-M2.5 / M2.5-highspeed: 204K context window, optimized for speed
  • embo-01: Embedding model with 1536 dimensions

All models are accessible via an OpenAI-compatible API at https://api.minimax.io/v1.
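Because the API is OpenAI-compatible, listing available models is a plain HTTP GET with a bearer token. A minimal sketch (the `/models` path follows the OpenAI API convention; the exact endpoint the provider uses is defined in backend/consts/provider.py):

```python
import json
import os
import urllib.request

BASE_URL = "https://api.minimax.io/v1"  # OpenAI-compatible base URL


def parse_model_ids(payload: dict) -> list:
    """Extract model ids from an OpenAI-compatible /models response,
    which wraps the model objects in a top-level "data" array."""
    return [m["id"] for m in payload.get("data", [])]


def list_models(api_key: str) -> list:
    """Fetch and return the ids of all models visible to this API key."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_model_ids(json.load(resp))


if __name__ == "__main__":
    print(list_models(os.environ["MINIMAX_API_KEY"]))
```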

Changes

Backend (5 files):

  • backend/services/providers/minimax_provider.py — New MiniMaxModelProvider with model type classification (LLM, embedding, TTS, STT, reranker, VLM) and known context window sizes
  • backend/consts/provider.py — Add MINIMAX enum, base URL, and models endpoint
  • backend/services/providers/__init__.py — Export MiniMaxModelProvider
  • backend/services/model_provider_service.py — Wire MiniMax into factory
  • backend/services/model_management_service.py — Add MiniMax base URL for batch create
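The provider's model type classification and context-window table might look like the following sketch. The heuristics and constant names here are illustrative assumptions, not the actual code in minimax_provider.py; the window sizes are the nominal figures stated above:

```python
# Hypothetical sketch of per-model metadata in the spirit of
# backend/services/providers/minimax_provider.py; real rules may differ.
KNOWN_CONTEXT_WINDOWS = {
    # Nominal sizes as stated in the PR description.
    "MiniMax-M2.7": 1_000_000,
    "MiniMax-M2.5": 204_000,
    "MiniMax-M2.5-highspeed": 204_000,
}


def classify_model(model_id: str) -> str:
    """Map a model id to a coarse type bucket (illustrative heuristics)."""
    lowered = model_id.lower()
    if lowered.startswith("embo"):
        return "embedding"
    if "tts" in lowered or "speech" in lowered:
        return "tts"
    if "stt" in lowered or "asr" in lowered:
        return "stt"
    if "rerank" in lowered:
        return "reranker"
    if "vision" in lowered:
        return "vlm"
    return "llm"  # default bucket for chat/completion models
```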

Frontend (5 files):

  • frontend/hooks/model/useMinimaxModelList.ts — Batch import hook (follows DashScope pattern)
  • frontend/app/[locale]/models/components/model/ModelAddDialog.tsx — Add MiniMax to provider dropdown
  • frontend/const/modelConfig.ts — Provider constants (icon, hint, link)
  • frontend/public/locales/{en,zh}/common.json — i18n translations
  • frontend/public/minimax.png — Provider icon

Tests (1 file, 16 tests):

  • test/backend/services/providers/test_minimax_provider.py — Unit tests covering all model types, known context windows, error handling (HTTP errors, connection errors, timeouts), authorization header, and mixed type classification
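The error-handling and auth-header tests can follow the usual mock-the-transport shape. A self-contained sketch (the `fetch_model_ids` helper here is a stand-in defined for illustration, not the provider's actual API):

```python
import json
import unittest
import urllib.error
import urllib.request
from unittest import mock


def fetch_model_ids(api_key: str) -> list:
    """Illustrative stand-in: return model ids, or [] on network errors."""
    req = urllib.request.Request(
        "https://api.minimax.io/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return [m["id"] for m in json.load(resp).get("data", [])]
    except (urllib.error.URLError, TimeoutError):
        return []


class FetchModelIdsTest(unittest.TestCase):
    def test_connection_error_returns_empty_list(self):
        # Simulate a refused connection at the transport layer.
        with mock.patch(
            "urllib.request.urlopen",
            side_effect=urllib.error.URLError("connection refused"),
        ):
            self.assertEqual(fetch_model_ids("key"), [])

    def test_authorization_header_is_sent(self):
        captured = {}

        def fake_urlopen(req, timeout=None):
            captured["auth"] = req.get_header("Authorization")
            raise urllib.error.URLError("stop after capturing header")

        with mock.patch("urllib.request.urlopen", side_effect=fake_urlopen):
            fetch_model_ids("secret")
        self.assertEqual(captured["auth"], "Bearer secret")


if __name__ == "__main__":
    unittest.main()
```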

Test Plan

  • All 16 new MiniMax provider unit tests pass
  • All 102 existing provider tests continue to pass (118 total)
  • Manual verification: batch import MiniMax models with API key
  • Manual verification: MiniMax models appear in model list with correct types

Add MiniMax (https://www.minimaxi.com/) as a first-class model provider
alongside SiliconFlow, DashScope, and TokenPony. MiniMax offers LLM models
(M2.7 with 1M context, M2.5/M2.5-highspeed with 204K context) and the
embo-01 embedding model.

Backend:
- Add MiniMaxModelProvider with model type classification (LLM, embedding,
  TTS, STT, reranker, VLM) and known context window sizes
- Register provider in factory, enum, and base URL constants
- Wire into model_management_service for batch create flow

Frontend:
- Add useMinimaxModelList hook for batch import
- Add MiniMax option in ModelAddDialog provider dropdown
- Add provider constants (icon, hint, link) and i18n translations (en/zh)

Tests:
- Add 16 unit tests covering all model types, error handling, context
  windows, and auth header verification
@sonarqubecloud

Quality Gate failed

Failed conditions
30.3% Duplication on New Code (required ≤ 3%)
D Security Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

