
Add Qwen3.5 MoE Modeling #2436

Closed

phaelon74 wants to merge 1 commit into vllm-project:main from phaelon74:Qwen3_5-Modeling

Conversation

phaelon74 (Contributor) commented Mar 4, 2026

SUMMARY:
Adding a modeling file for the Qwen3.5 MoE family of models, similar to the GLM4_MOE modeling file I added previously.

TEST PLAN:
Added a calibration test file and an example quantization script; a rough sketch of the one-shot flow follows below. Quantization verified on both my RTX 6000 and RTX 3090 rigs.
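For context, a minimal sketch of an AWQ one-shot flow with llm-compressor, assuming the current oneshot/AWQModifier API. The model ID, dataset choice, ignore list, and sample counts here are illustrative assumptions, not necessarily what qwen3_5_moe_example.py actually does.

# Hedged sketch of an AWQ one-shot flow for a Qwen3.5 MoE checkpoint.
# Model ID, dataset, ignore patterns, and sample counts are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "Qwen/Qwen3.5-MoE"  # hypothetical model ID
SAVE_DIR = "Qwen3.5-MoE-AWQ-W4A16"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Small chat-style calibration set, rendered through the chat template
# and tokenized before calibration.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:256]")
ds = ds.map(lambda x: {
    "text": tokenizer.apply_chat_template(x["messages"], tokenize=False)
})
ds = ds.map(
    lambda x: tokenizer(x["text"], max_length=2048, truncation=True),
    remove_columns=ds.column_names,
)

# W4A16 AWQ on Linear layers; the LM head stays unquantized.
recipe = AWQModifier(targets=["Linear"], scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)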

Signed-off-by: phaelon74 <kahlid74@hotmail.com>
github-actions bot commented Mar 4, 2026

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

gemini-code-assist bot (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the model compression capabilities by integrating full support for Qwen3.5 MoE models. It introduces specialized calibration logic to accurately quantize these complex architectures, ensuring that all experts are properly accounted for during the compression process. The changes also include necessary structural adaptations for efficient deployment, alongside practical examples and validation tests.

Highlights

  • Qwen3.5 MoE Modeling Support: Added comprehensive modeling support for the Qwen3.5 Mixture-of-Experts (MoE) family of models, enabling their integration and optimization within the llmcompressor framework.
  • Calibration Module for MoE Experts: Introduced CalibrationQwen3_5MoeSparseMoeBlock to ensure proper quantization statistics are collected for all experts during calibration by routing all tokens to every expert, addressing the unique challenges of MoE architectures (a sketch of this trick follows this list).
  • Expert Tensor Decomposition: Implemented SequentialQwen3_5MoeExperts to permanently decompose fused 3D expert tensors into individual Qwen3_5MoeMLP modules, enhancing compatibility with vLLM serving.
  • Quantization Example and Test: Provided an example script (qwen3_5_moe_example.py) demonstrating AWQ quantization for Qwen3.5 MoE models and added a dedicated test file (test_calib_qwen3_5_moe.py) to validate the new calibration module.
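A minimal sketch of the all-experts calibration idea. The module internals below are assumptions for illustration, not the PR's actual code; the point is that every expert runs on every token so quantization observers collect statistics, while the output still uses only the top-k routing weights.

import torch

class CalibrationMoeBlockSketch(torch.nn.Module):
    """Run every expert on every token for calibration; output still
    reflects standard top-k routing."""

    def __init__(self, gate, experts, top_k):
        super().__init__()
        self.gate = gate          # router: [tokens, hidden] -> [tokens, n_experts]
        self.experts = experts    # ModuleList of per-expert MLPs
        self.top_k = top_k

    def forward(self, hidden_states):
        batch, seq_len, hidden = hidden_states.shape
        flat = hidden_states.view(-1, hidden)
        routing_weights = torch.softmax(self.gate(flat), dim=-1)
        topk_weights, topk_idx = torch.topk(routing_weights, self.top_k, dim=-1)
        # Dense weight matrix that is zero outside each token's top-k experts.
        dense = torch.zeros_like(routing_weights)
        dense.scatter_(-1, topk_idx, topk_weights)
        out = torch.zeros_like(flat)
        for i, expert in enumerate(self.experts):
            expert_out = expert(flat)  # all tokens, not just the routed ones
            out += expert_out * dense[:, i].unsqueeze(-1)
        return out.view(batch, seq_len, hidden)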


Changelog
  • examples/quantizing_moe/qwen3_5_moe_example.py
    • Added a new example script demonstrating AWQ quantization for Qwen3.5 MoE models, including dataset loading, preprocessing, tokenization, and model saving.
  • src/llmcompressor/modeling/__init__.py
    • Imported CalibrationQwen3_5MoeSparseMoeBlock to integrate Qwen3.5 MoE support into the modeling package.
  • src/llmcompressor/modeling/qwen3_5_moe.py
    • Introduced CalibrationQwen3_5MoeSparseMoeBlock to enable proper expert calibration for Qwen3.5 MoE models, ensuring all tokens are sent to all experts during calibration.
    • Implemented SequentialQwen3_5MoeExperts to decompose fused 3D expert tensors into individual MLP modules for vLLM serving compatibility (sketched after this changelog).
  • tests/llmcompressor/modeling/test_calib_qwen3_5_moe.py
    • Added a unit test for CalibrationQwen3_5MoeSparseMoeBlock to verify its functionality and ensure correct output during calibration.
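The decomposition idea, sketched under an assumed fused layout of [num_experts, hidden, 2 * intermediate] for the gate/up projection and [num_experts, intermediate, hidden] for the down projection; the real Qwen3_5MoeMLP structure and tensor layouts may differ.

import torch
from torch import nn

class ExpertMLPSketch(nn.Module):
    # Stand-in for a per-expert MLP: three 2D Linear layers that
    # quantizers and vLLM can treat as ordinary Linear modules.
    def __init__(self, hidden, intermediate):
        super().__init__()
        self.gate_proj = nn.Linear(hidden, intermediate, bias=False)
        self.up_proj = nn.Linear(hidden, intermediate, bias=False)
        self.down_proj = nn.Linear(intermediate, hidden, bias=False)

    def forward(self, x):
        return self.down_proj(nn.functional.silu(self.gate_proj(x)) * self.up_proj(x))

def split_fused_experts(gate_up_3d, down_3d):
    num_experts, hidden, two_inter = gate_up_3d.shape
    intermediate = two_inter // 2
    experts = nn.ModuleList()
    for e in range(num_experts):
        mlp = ExpertMLPSketch(hidden, intermediate)
        gate_w, up_w = gate_up_3d[e].chunk(2, dim=-1)  # each [hidden, intermediate]
        # nn.Linear stores weights as [out_features, in_features], hence the .T
        mlp.gate_proj.weight.data.copy_(gate_w.T)
        mlp.up_proj.weight.data.copy_(up_w.T)
        mlp.down_proj.weight.data.copy_(down_3d[e].T)
        experts.append(mlp)
    return experts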
Activity
  • No human activity (comments, reviews) has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature                 Command                Description
Code Review             /gemini review         Performs a code review for the current pull request in its current state.
Pull Request Summary    /gemini summary        Provides a summary of the current pull request in its current state.
Comment                 @gemini-code-assist    Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                    /gemini help           Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

mergify bot added the documentation (Improvements or additions to documentation) label on Mar 4, 2026
gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds modeling support for the Qwen3.5 MoE family of models, including a calibration module, a usage example, and unit tests. The implementation is well-structured and follows existing patterns in the codebase. The new calibration module correctly handles expert decomposition and routing for calibration. The example and tests are comprehensive. I have one suggestion to improve code maintainability by removing a redundant method override. Overall, this is a great addition.

Comment on lines +105 to +106
def restore(self, original: torch.nn.Module) -> torch.nn.Module:
    return original
Severity: medium

The restore method is redundant and can be removed. The base class MoECalibrationModule already provides a correct implementation for permanent modules (where is_permanent=True). Since this module is permanent, its restore method is never called, making this override dead code. Removing it will improve maintainability by relying on the base class implementation and reducing code duplication.
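To illustrate the pattern the reviewer is describing, here is a hedged sketch of the caller-side logic; the actual MoECalibrationModule API in llm-compressor may differ.

# Hypothetical caller-side logic: permanent calibration modules are kept
# after calibration, so their restore() is never invoked.
def finalize_calibration(replacement, original):
    if getattr(replacement, "is_permanent", False):
        return replacement  # decomposed experts stay in place for serving
    return replacement.restore(original)  # non-permanent modules swap back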

dsikka (Collaborator) commented Mar 4, 2026

Hi @phaelon74 - this is already being worked on in #2383

phaelon74 (Contributor, Author) commented:

> Hi @phaelon74 - this is already being worked on in #2383

Very cool, I was not aware of that. I shall close this down and add comments to that one. Thanks @dsikka

phaelon74 closed this on Mar 4, 2026