feat: add ModelScope as a new LLM provider#2150

Open
liyidd wants to merge 1 commit into stackblitz-labs:main from liyidd:add-provider

Conversation


@liyidd commented Mar 27, 2026

feat: add ModelScope as a new LLM provider

Summary

Adds ModelScope (魔搭社区) as an LLM provider for Bolt.diy, focused on the latest Qwen3 series, including MoE (Mixture of Experts) variants, with low-latency connectivity for developers in mainland China.

Changes

New Provider (1)

  • ModelScope - Leading open-source model hub in China, providing low-latency access to state-of-the-art Qwen3 models via an OpenAI-compatible API.

Model Enhancements

Integrated the latest Qwen3 flagship models via ModelScope's infrastructure:

  • Qwen3-235B-A22B (MoE): The current SOTA for coding and reasoning, featuring an efficient MoE architecture that balances deep intelligence with responsive inference.
  • Qwen3-32B: High-performance balanced model, ideal for rapid prototyping and medium-scale code analysis.
  • Qwen3-14B: Ultra-fast coding assistant, optimized for real-time auto-completion and simple refactoring tasks.

Total Models Added

  • New Provider: 3 core Qwen3 variants, expandable via dynamic model discovery.

Coding Model Performance

| Model | Performance | Provider |
| --- | --- | --- |
| Qwen3-235B-A22B | SOTA coding & MoE reasoning, 128K+ context | ModelScope |
| Qwen3-32B | Excellent logic-to-latency ratio | ModelScope |
| Qwen3-14B | Fastest response for localized dev loops | ModelScope |

Technical Details

  • Architecture: Implemented ModelScopeProvider using the standard OpenAI-compatible protocol at api.modelscope.cn/v1.
  • Environment Variables: Added MODELSCOPE_API_KEY to support authenticated requests.
  • Discovery: Supports both a static model list for core Qwen3 models and dynamic model discovery via the /v1/models endpoint.
  • Registry: Auto-registered through LLMManager following the project's existing provider pattern.
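To make the architecture above concrete, here is a minimal TypeScript sketch of what a provider following this pattern could look like. The base URL and `MODELSCOPE_API_KEY` come from this PR; the `ModelInfo` shape, field names, and the token limits on the smaller models are illustrative assumptions, not Bolt.diy's actual `BaseProvider` interface.

```typescript
// Hypothetical sketch of the provider described above; Bolt.diy's real
// BaseProvider interface and ModelInfo shape may differ.
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

class ModelScopeProvider {
  name = 'ModelScope';
  apiKeyEnvVar = 'MODELSCOPE_API_KEY'; // env var added in this PR
  baseUrl = 'https://api.modelscope.cn/v1'; // OpenAI-compatible endpoint

  // Static core list; dynamic discovery via /v1/models can extend it.
  // 128K context for the 235B model is from the PR; the other limits
  // are placeholder assumptions.
  staticModels: ModelInfo[] = [
    {
      name: 'Qwen/Qwen3-235B-A22B',
      label: 'Qwen3-235B-A22B (MoE)',
      provider: 'ModelScope',
      maxTokenAllowed: 131072,
    },
    {
      name: 'Qwen/Qwen3-32B',
      label: 'Qwen3-32B',
      provider: 'ModelScope',
      maxTokenAllowed: 32768,
    },
    {
      name: 'Qwen/Qwen3-14B',
      label: 'Qwen3-14B',
      provider: 'ModelScope',
      maxTokenAllowed: 32768,
    },
  ];

  // Build request headers for the OpenAI-compatible API.
  headers(apiKey: string): Record<string, string> {
    return {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    };
  }
}
```

A provider like this would then be picked up by LLMManager's auto-registration, per the project's existing pattern.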

Test Plan

  • Type checking passed: Verified against internal TS definitions (resolving prior build/server conflicts).
  • Linting passed: Fixed @blitz/lines-around-comment and Prettier indentation errors.
  • Provider Export: Successfully exported from the provider registry.
  • Documentation: Updated .env.example with the new environment variable.
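For reference, the `.env.example` entry mentioned above would look like the following (the variable name is from this PR; the value shown is a placeholder):

```shell
# ModelScope (https://modelscope.cn) — OpenAI-compatible API
MODELSCOPE_API_KEY=your_modelscope_api_key
```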

Coding Focus

This PR enhances Bolt.diy's accessibility for developers in regions with restricted global LLM access by adding:

  • Optimized Connectivity: Low-latency endpoints via ModelScope's China-based infrastructure.
  • Next-Gen Qwen Support: Direct access to Alibaba's latest Qwen3 open-source weights.
  • Large Context Handling: Fully supports Qwen3's extended context window for analyzing complex project structures.

Breaking Changes

  • None — This is a purely additive change.

Related Issues

  • None

Add ModelScope (modelscope.cn) as a provider for accessing open-source
models including Qwen/Qwen3-14B, Qwen/Qwen3-32B, and Qwen/Qwen3-235B-A22B
via an OpenAI-compatible API at api.modelscope.cn/v1.

Supports both static model list and dynamic model discovery via the
/v1/models endpoint.
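The dynamic discovery path can be sketched as a small fetch against the `/v1/models` endpoint. This assumes the response follows the OpenAI list convention of `{ data: [{ id: ... }] }`; ModelScope's actual payload may carry additional fields, and `parseModelIds`/`discoverModels` are hypothetical helper names, not functions from this PR.

```typescript
// Assumed OpenAI-style response shape for GET /v1/models.
interface ModelsResponse {
  data: { id: string }[];
}

// Extract model ids from a /v1/models response body.
function parseModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

// Fetch the live model list (requires a valid MODELSCOPE_API_KEY).
async function discoverModels(apiKey: string): Promise<string[]> {
  const res = await fetch('https://api.modelscope.cn/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`model discovery failed: ${res.status}`);
  }
  return parseModelIds((await res.json()) as ModelsResponse);
}
```

Discovered ids would be merged with the static Qwen3 list, so new ModelScope models appear without a code change.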
@liyidd marked this pull request as draft March 27, 2026 09:46
@liyidd marked this pull request as ready for review March 27, 2026 09:46