feat: add MiniMax LLM provider support #883
Conversation
Add full MiniMax provider support across the entire stack.

Backend:
- Add MINIMAX to LiteLLMProvider enum in db.py
- Add MINIMAX mapping to all provider_map dicts in llm_service.py, llm_router_service.py, and llm_config.py
- Add Alembic migration (rev 106) for the PostgreSQL enum
- Add MiniMax M2.5 example in global_llm_config.example.yaml

Frontend:
- Add MiniMax to LLM_PROVIDERS enum with apiBase
- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed to LLM_MODELS
- Add MINIMAX to the Zod validation schema
- Add MiniMax SVG icon and wire it up in provider-icons

Docs:
- Add MiniMax setup guide in chinese-llm-setup.md

MiniMax uses an OpenAI-compatible API (https://api.minimax.io/v1) with models supporting up to a 204K context window.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
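As context for the routing change, here is a minimal sketch of the provider_map pattern this PR extends. The enum value and the OpenAI-compatible base URL come from the PR description; the dict shape and the `build_model_string` helper are illustrative assumptions, not the repo's actual code.

```python
from enum import Enum


class LiteLLMProvider(str, Enum):
    # Existing members elided; MINIMAX is the value this PR adds in db.py.
    OPENAI = "OPENAI"
    MINIMAX = "MINIMAX"


# Hypothetical provider_map entry: since MiniMax exposes an OpenAI-compatible
# API, it can be routed through LiteLLM's "openai/" prefix with a custom
# api_base (the URL below is taken from the PR description).
PROVIDER_MAP = {
    LiteLLMProvider.OPENAI: {"prefix": "openai/", "api_base": None},
    LiteLLMProvider.MINIMAX: {
        "prefix": "openai/",
        "api_base": "https://api.minimax.io/v1",
    },
}


def build_model_string(provider: LiteLLMProvider, model_name: str) -> str:
    """Compose the LiteLLM model identifier for a given provider."""
    return PROVIDER_MAP[provider]["prefix"] + model_name


print(build_model_string(LiteLLMProvider.MINIMAX, "MiniMax-M2.5"))
# → openai/MiniMax-M2.5
```

The point of the pattern is that no MiniMax-specific client is needed: the provider only contributes a prefix and an api_base, and the existing LiteLLM call path does the rest.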
Review by RecurseML
🔍 Review performed on 49d8f41..760aa38
✨ No bugs found, your code is sparkling clean
✅ Files analyzed, no issues (13)
• docs/chinese-llm-setup.md
• surfsense_backend/alembic/versions/106_add_minimax_to_litellmprovider_enum.py
• surfsense_backend/app/agents/new_chat/llm_config.py
• surfsense_backend/app/config/global_llm_config.example.yaml
• surfsense_backend/app/db.py
• surfsense_backend/app/services/llm_router_service.py
• surfsense_backend/app/services/llm_service.py
• surfsense_web/components/icons/providers/index.ts
• surfsense_web/components/icons/providers/minimax.svg
• surfsense_web/contracts/enums/llm-models.ts
• surfsense_web/contracts/enums/llm-providers.ts
• surfsense_web/contracts/types/new-llm-config.types.ts
• surfsense_web/lib/provider-icons.tsx
Summary
This PR adds complete MiniMax LLM provider support to SurfSense, enabling users to use MiniMax's M2.5 series models (with 204K context window) through the existing LiteLLM integration.
Changes
Backend:
- Add MINIMAX to LiteLLMProvider enum
- Add models (MiniMax-M2.5, MiniMax-M2.5-highspeed)
Frontend:
High-level PR Summary
This PR adds MiniMax LLM provider support to SurfSense, enabling integration with MiniMax's M2.5 series models, which offer a 204K context window. The implementation follows the existing LiteLLM integration pattern: adding the MINIMAX provider to the backend enum, creating a database migration, configuring MiniMax-specific models (MiniMax-M2.5 and MiniMax-M2.5-highspeed) with OpenAI-compatible routing, and updating the frontend UI with provider options, model selections, and branding assets. The Chinese documentation is also updated with comprehensive MiniMax configuration instructions, including API key setup, available models, and usage recommendations.

⏱️ Estimated Review Time: 15-30 minutes
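Because PostgreSQL enum types can only grow in place, the migration mentioned above (rev 106) most likely reduces to a single `ALTER TYPE` statement executed via `op.execute`. A hedged sketch of that statement follows; the type name `litellmprovider` and the helper function are assumptions for illustration, not the migration's actual contents.

```python
def add_enum_value_sql(type_name: str, value: str) -> str:
    """Build the idempotent SQL an Alembic migration would run (via
    op.execute) to extend a PostgreSQL enum type in place."""
    return f"ALTER TYPE {type_name} ADD VALUE IF NOT EXISTS '{value}'"


# Assumed type name; the real enum type backs db.py's LiteLLMProvider.
print(add_enum_value_sql("litellmprovider", "MINIMAX"))
# → ALTER TYPE litellmprovider ADD VALUE IF NOT EXISTS 'MINIMAX'
```

Note that PostgreSQL has no `ALTER TYPE ... DROP VALUE`, so the migration's downgrade path is typically a no-op: removing an enum value would require rebuilding the type.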
💡 Review Order Suggestion
1. surfsense_backend/app/db.py
2. surfsense_backend/alembic/versions/106_add_minimax_to_litellmprovider_enum.py
3. surfsense_web/contracts/types/new-llm-config.types.ts
4. surfsense_web/contracts/enums/llm-providers.ts
5. surfsense_web/contracts/enums/llm-models.ts
6. surfsense_backend/app/agents/new_chat/llm_config.py
7. surfsense_backend/app/services/llm_router_service.py
8. surfsense_backend/app/services/llm_service.py
9. surfsense_web/components/icons/providers/minimax.svg
10. surfsense_web/components/icons/providers/index.ts
11. surfsense_web/lib/provider-icons.tsx
12. surfsense_backend/app/config/global_llm_config.example.yaml
13. docs/chinese-llm-setup.md