
Conversation


@meenchen meenchen commented Oct 9, 2025

What does this PR do?

Type of change: ?

Overview:

This PR and NVIDIA/TensorRT-LLM#8698 enable NVFP4 AWQ deployment for TRT-LLM. Specifically, this PR fuses pre_quant_scale in the following two cases:

  • For MLP, the pre_quant_scale of the gate_proj layer is fused into up_proj's weights, so downstream fused MoE kernels don't need extra handling.
  • For attention, the pre_quant_scale of o_proj is fused into v_proj when their dimensions match; fusion is skipped for MQA/GQA models, where the dimensions differ (see the sketch after this list).
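
The fusion relies on the identity that an elementwise scale on a consumer layer's input channels can be folded into the output rows of the layer producing those channels. Below is a minimal sketch of the attention case using a hypothetical helper (not the PR's actual code):

```python
import torch

@torch.no_grad()
def fuse_o_proj_scale_into_v_proj(v_proj: torch.nn.Linear, pre_quant_scale: torch.Tensor) -> bool:
    """Fold o_proj's AWQ pre_quant_scale into v_proj, so o_proj no longer
    needs to scale its input activations at runtime.

    Attention mixes value vectors per channel, so scaling v_proj's output
    row j by s[j] scales o_proj's input channel j by s[j]. Returns False
    (fusion skipped) when shapes differ, e.g. MQA/GQA models where
    v_proj.out_features < o_proj.in_features.
    """
    if v_proj.weight.shape[0] != pre_quant_scale.numel():
        return False
    v_proj.weight.mul_(pre_quant_scale.unsqueeze(-1))  # scale each output row
    if v_proj.bias is not None:
        v_proj.bias.mul_(pre_quant_scale)
    return True
```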

Usage

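A minimal sketch of the end-to-end flow, assuming ModelOpt's mtq.NVFP4_AWQ_LITE_CFG config and the export_hf_checkpoint API (names may vary across ModelOpt versions):

```python
import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B").cuda()

def forward_loop(model):
    # Run a handful of calibration batches through the model here.
    ...

# Quantize with NVFP4 AWQ (config name assumed; check your ModelOpt version).
model = mtq.quantize(model, mtq.NVFP4_AWQ_LITE_CFG, forward_loop)

# Export a unified HF checkpoint for TRT-LLM deployment.
export_hf_checkpoint(model, export_dir="qwen3-nvfp4-awq")
```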

Testing

Unit tests, plus e2e tests for Qwen3 dense and MoE models.

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@copy-pr-bot

copy-pr-bot bot commented Oct 9, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.


coderabbitai bot commented Oct 9, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.




codecov bot commented Oct 9, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.44%. Comparing base (32d168c) to head (a591330).
⚠️ Report is 39 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #421      +/-   ##
==========================================
+ Coverage   73.38%   73.44%   +0.06%     
==========================================
  Files         180      180              
  Lines       17934    18147     +213     
==========================================
+ Hits        13160    13328     +168     
- Misses       4774     4819      +45     

☔ View full report in Codecov by Sentry.

@meenchen meenchen self-assigned this Oct 14, 2025
Signed-off-by: weimingc <[email protected]>
@meenchen meenchen changed the title Pattern-based fusion for pre_quant_scale Fusing pre_quant_scale for NVFP4 AWQ Nov 3, 2025
@meenchen meenchen changed the title Fusing pre_quant_scale for NVFP4 AWQ [OMNIML-2932] Fusing pre_quant_scale for NVFP4 AWQ Nov 3, 2025
@meenchen meenchen marked this pull request as ready for review November 3, 2025 23:39
@meenchen meenchen requested a review from a team as a code owner November 3, 2025 23:39
Signed-off-by: weimingc <[email protected]>
