
[Qwen3VLMoe] Add linearized definition and FP8 Quantization Example#1874

Merged
dsikka merged 11 commits into main from qwen3VLMoE_lineared
Oct 1, 2025

Conversation

@dsikka
Collaborator

@dsikka dsikka commented Sep 27, 2025

SUMMARY:

  • Updates the MoE layer to use a linearized definition so that we can quantize and run the model in vLLM
  • Wraps the gate layer so that it is properly ignored; this is a hack for now, and we will need to do this properly in ct
  • Not adding a forward pass for now; a forward pass will come in a follow-up, but this change is needed in the release to enable FP8 quantization
  • Note: requires the latest transformers
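
The data-free FP8 flow can be written as an llmcompressor-style recipe. The sketch below is an illustration, not the recipe shipped in this PR: the `FP8_DYNAMIC` scheme name follows compressed-tensors conventions, and the ignore patterns for `lm_head` and the gate/router layers are assumptions about what the wrapped gate would need:

```yaml
# Illustrative data-free FP8 recipe (not the exact recipe from this PR).
# FP8_DYNAMIC uses dynamic per-token activation scales, so no calibration
# data (and therefore no MoE forward pass) is required.
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      scheme: FP8_DYNAMIC
      targets: ["Linear"]
      ignore: ["lm_head", "re:.*mlp.gate$"]  # gate/router pattern is an assumption
```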

TEST PLAN:

  • Produces /proving-grounds/engine/hub_cache/Qwen3-VL-235B-A22B-Instruct-FP8_DYNAMIC, which yields coherent generations:
```python
if __name__ == '__main__':
    from vllm import LLM, SamplingParams

    prompts = [
        "The Swiss Alps are",
        "Brad Marchand is",
        "The Toronto Maple Leafs are"
    ]

    # Sampling params for short, lightly randomized completions
    sampling_params = SamplingParams(temperature=0.80, top_p=0.95, max_tokens=40, min_tokens=10)
    llm = LLM("/proving-grounds/engine/hub_cache/Qwen3-VL-235B-A22B-Instruct-FP8_DYNAMIC", tensor_parallel_size=2, max_model_len=4096, enforce_eager=True)
    output = llm.generate(prompts, sampling_params)
    for out in output:
        print(out.outputs[0].text)
```

Generations:

 a true paradise for nature lovers and outdoor enthusiasts. With their snow-capped peaks, lush green valleys, and crystal-clear lakes, the Alps offer a stunning backdrop for a wide range of activities. Whether
 a prominent figure in the NHL, known for his exceptional performance and leadership. He has won the Art Ross Trophy as the NHL's leading scorer, with 110 points (32 goals and
 a professional ice hockey team based in Toronto, Ontario, Canada. They are members of the Atlantic Division in the Eastern Conference of the National Hockey League (NHL). The team was established in 1

@dsikka dsikka marked this pull request as draft September 27, 2025 20:33
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @dsikka, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to extend the model compression framework by adding support for linearizing the Qwen3VLMoe model's text sparse Mixture-of-Experts blocks. This enhancement allows for specialized handling and potential optimization of this specific model architecture within the llmcompressor library.

Highlights

  • Qwen3VLMoe Linearization: Introduced new classes LinearQwen3VLMoeTextSparseMoeBlock and SequentialQwen3VLMoeTextExperts to provide a linearized definition for the Qwen3VLMoe model's text sparse Mixture-of-Experts (MoE) block.
  • Integration into Model Preparation: The prepare.py module was updated to import the new Qwen3VLMoe replacement function and register Qwen3VLMoeTextSparseMoeBlock for linearization during model calibration.
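
The highlights above can be illustrated with a standalone sketch. This is not the PR's actual SequentialQwen3VLMoeTextExperts implementation; it assumes a fused expert weight of shape [num_experts, in_features, out_features] (the sequentialize_experts helper and the shapes are hypothetical) and unpacks it into per-expert nn.Linear modules that Linear-targeting quantization schemes can see:

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the PR's actual code): fused MoE definitions often
# keep all expert weights in one 3D parameter, which hides the individual
# experts from quantization tooling that targets nn.Linear. "Linearizing"
# unpacks the fused tensor into one nn.Linear per expert.

def sequentialize_experts(fused_weight: torch.Tensor) -> nn.ModuleList:
    """Split a [num_experts, in_features, out_features] tensor into Linears."""
    num_experts, in_features, out_features = fused_weight.shape
    experts = nn.ModuleList()
    for e in range(num_experts):
        linear = nn.Linear(in_features, out_features, bias=False)
        # nn.Linear stores weight as [out_features, in_features]
        linear.weight.data = fused_weight[e].t().contiguous()
        experts.append(linear)
    return experts
```

Each expert then appears as an ordinary Linear, so a scheme such as FP8_DYNAMIC can target it, while the router/gate can be excluded via the ignore list.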

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request aims to add a linearized module definition for Qwen3VLMoe, likely to aid in model calibration. While the intent is clear, the implementation has a few critical issues that will prevent it from working correctly. There's an incorrect import path for the new module, and the new module itself is incomplete, with an incorrect function signature for its replacement function and a missing forward method in the main module class. My review includes specific suggestions to fix these issues.

@dsikka dsikka changed the title [Qwen3VLMoe] Add linearized definition [Qwen3VLMoe] Add linearized definition and FP8 Quantization Example Oct 1, 2025
@dsikka dsikka added the ready When a PR is ready for review label Oct 1, 2025
@dsikka dsikka marked this pull request as ready for review October 1, 2025 00:30
@dsikka dsikka requested a review from kylesayrs October 1, 2025 00:32
rahul-tuli
rahul-tuli previously approved these changes Oct 1, 2025
@dsikka
Collaborator Author

dsikka commented Oct 1, 2025

Can you add a test in this style? https://github.com/vllm-project/llm-compressor/blob/main/tests/llmcompressor/modeling/test_calib_qwen3.py

No, because I haven't written the forward pass yet to validate that all experts are calibrated.
So far this only lands data-free support so that we can enable FP8 support.
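
The coverage concern above is a property of top-k routing: only the experts the router selects see any tokens, so a naive calibration pass can leave some experts with no data at all. A small illustration (the topk_expert_hits helper is hypothetical, not project code):

```python
import torch

def topk_expert_hits(router_logits: torch.Tensor, k: int, num_experts: int) -> torch.Tensor:
    """Count how many tokens each expert receives under top-k routing."""
    # Indices of the k highest-scoring experts per token: [num_tokens, k]
    topk = router_logits.topk(k, dim=-1).indices
    hits = torch.zeros(num_experts, dtype=torch.long)
    for e in range(num_experts):
        hits[e] = (topk == e).sum()
    return hits

# Two tokens whose router scores always favor experts 0 and 1: experts 2 and 3
# are never selected, which is why data-free schemes (no activation
# calibration) are the safe first step until a calibration-aware forward
# pass exists.
logits = torch.tensor([[5.0, 4.0, 3.0, 0.0],
                       [5.0, 4.0, 3.0, 0.0]])
print(topk_expert_hits(logits, k=2, num_experts=4))
```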

kylesayrs
kylesayrs previously approved these changes Oct 1, 2025
Collaborator

@kylesayrs kylesayrs left a comment

Let's add a test in a follow-up

@dsikka dsikka enabled auto-merge (squash) October 1, 2025 16:22
Collaborator

@brian-dellabetta brian-dellabetta left a comment

LGTM! I think it's fair to say users will only hit the Qwen3 VL MoE import if they are using the model, so we don't need to wrap the import in a try/except

@dsikka dsikka disabled auto-merge October 1, 2025 17:01
@dsikka dsikka merged commit be99dc3 into main Oct 1, 2025
8 checks passed
@dsikka dsikka deleted the qwen3VLMoE_lineared branch October 1, 2025 17:02
dsikka added a commit that referenced this pull request Oct 1, 2025
SUMMARY:
- Need to update links when the following PRs land:

1. #1886
2. #1874
3. #1889
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 2, 2025
…llm-project#1874)

Signed-off-by: Cassie Jeon <cajeon@redhat.com>
cajeonrh pushed a commit to cajeonrh/llm-compressor that referenced this pull request Oct 2, 2025

Signed-off-by: Cassie Jeon <cajeon@redhat.com>

Labels

ready When a PR is ready for review


6 participants