
Conversation

@s-jiayang (Contributor) commented Aug 14, 2025

What this PR does / why we need it?

This PR enables the npu_add_rms_norm_quant fused operator for Qwen-MoE models on Ascend NPUs.

Does this PR introduce any user-facing change?

How was this patch tested?


@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces an optimization for Qwen-MoE models on Ascend NPUs by utilizing the npu_add_rms_norm_quant fused operator. The changes are controlled by a new environment variable USE_ADD_RMSNORM_QUANT. The implementation correctly adds a conditional path to use this new operator when applicable.
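
Such an opt-in gate is conventionally read from the environment once at import time. A minimal sketch of that convention; where exactly vllm-ascend reads the variable is not shown in this thread and is assumed here:

```python
import os

# Enable the fused npu_add_rms_norm_quant path only when explicitly requested.
USE_ADD_RMSNORM_QUANT: bool = os.environ.get("USE_ADD_RMSNORM_QUANT", "0") == "1"
```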

However, I've identified a critical issue in the implementation of AddRMSNormW8A8Quant where the eps value from the model configuration is ignored due to an incorrect call to the superclass constructor. This will cause the layer to use a default epsilon, potentially leading to numerical inconsistencies and incorrect model outputs. Please see the detailed comment for the fix.
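
For illustration, a minimal sketch of the kind of fix being suggested, assuming the layer subclasses vLLM's RMSNorm; this is not the PR's actual code, and names beyond those quoted in this thread are assumptions:

```python
from vllm.model_executor.layers.layernorm import RMSNorm  # assumed base class


class AddRMSNormW8A8Quant(RMSNorm):
    """Fuses residual-add + RMSNorm + W8A8 quantization on Ascend NPUs."""

    def __init__(self, hidden_size: int, eps: float = 1e-6) -> None:
        # Reported bug: the superclass was constructed without `eps`
        # (e.g. `super().__init__(hidden_size)`), so the layer silently
        # fell back to the default epsilon. Forwarding `eps` keeps
        # normalization consistent with the model config's rms_norm_eps:
        super().__init__(hidden_size, eps=eps)
        # (quantized forward pass omitted in this sketch)
```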


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.


This pull request has conflicts; please resolve them before we can evaluate the pull request.

"Expected quant_config to be an instance of AscendQuantConfig"
if isinstance(self.self_attn.qkv_proj.quant_method.quant_method,
AscendW8A8LinearMethod):
self.input_layernorm = AddRMSNormW8A8Quant(
A collaborator commented:

We will use the torch.fx rewriter to solve this kind of problem.

Please refer to #2389.
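
For context, a torch.fx rewrite matches the unfused add + RMSNorm + quantize subgraph in the traced model and swaps in the fused kernel, so model code needs no backend-specific branches. A minimal, generic sketch of the idea; the toy layer, the quantization arithmetic, and the fused-op signature are illustrative assumptions, not the actual #2389 pass, and tracing the replacement requires torch_npu (i.e. an Ascend environment):

```python
import torch
from torch.fx import symbolic_trace, subgraph_rewriter


class ToyBlock(torch.nn.Module):
    """Stand-in layer with the unfused add -> RMSNorm -> int8-quant sequence."""

    def __init__(self, hidden: int = 8) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden))

    def forward(self, x, residual, scale, offset):
        s = x + residual
        normed = s * torch.rsqrt(s.pow(2).mean(dim=-1, keepdim=True) + 1e-6) * self.weight
        return torch.clamp(torch.round(normed / scale + offset), -128, 127).to(torch.int8)


def pattern(x, residual, weight, scale, offset):
    # Same computation as ToyBlock.forward; placeholders act as wildcards,
    # so `weight` also matches the module's parameter node.
    s = x + residual
    normed = s * torch.rsqrt(s.pow(2).mean(dim=-1, keepdim=True) + 1e-6) * weight
    return torch.clamp(torch.round(normed / scale + offset), -128, 127).to(torch.int8)


def replacement(x, residual, weight, scale, offset):
    # One fused NPU kernel; the signature and the position of the quantized
    # output are assumptions made for illustration.
    import torch_npu
    return torch_npu.npu_add_rms_norm_quant(x, residual, weight, scale, offset)[0]


gm = symbolic_trace(ToyBlock())
subgraph_rewriter.replace_pattern(gm, pattern, replacement)
print(gm.graph)  # the matched subgraph is now a single fused call
```

Compared with the USE_ADD_RMSNORM_QUANT branch in this PR, the rewriter approach keeps the fusion decision in one compilation pass rather than scattering checks through each layer.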

@s-jiayang closed this Aug 22, 2025