What this PR does / why we need it?
This pull request introduces a DeviceOperator class to centralize device-specific operations, particularly for Mixture-of-Experts (MoE) gating and MXFP8 quantization scale normalization.
Key changes include:
- Addition of `normalize_mxfp8_scale_layout` and `moe_gating_top_k` static methods to `DeviceOperator`, encapsulating the logic for MXFP8 scale handling and MoE gating, respectively (see the sketch after this list).
- Integration of these new `DeviceOperator` methods into `npu_dynamic_quant`, `npu_grouped_matmul_swiglu_quant`, `experts_selector.py`, and `moe_mlp.py`, so these operations are handled consistently in one place.
- Refinement of the `enable_force_load_balance` mechanism in `w8a8_mxfp8.py` to use random expert selection, improving load balancing during profile runs (see the second sketch below).
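For orientation, here is a minimal sketch of what such a class could look like. Everything in it is an illustrative assumption (the signatures and the `contiguous()` normalization stand in for the real device-specific logic in vLLM Ascend):

```python
import torch

class DeviceOperator:
    """Illustrative sketch only; the real class encapsulates NPU specifics."""

    @staticmethod
    def normalize_mxfp8_scale_layout(scale: torch.Tensor) -> torch.Tensor:
        # Hypothetical: bring per-block MXFP8 scales into the memory layout
        # the device kernels expect before dispatching the quantized matmul.
        return scale.contiguous()

    @staticmethod
    def moe_gating_top_k(router_logits: torch.Tensor, top_k: int):
        # Hypothetical: plain softmax-then-top-k MoE gating.
        scores = torch.softmax(router_logits, dim=-1)
        topk_weights, topk_ids = torch.topk(scores, k=top_k, dim=-1)
        return topk_weights, topk_ids
```

With a single entry point like this, call sites such as `experts_selector.py` and `moe_mlp.py` can delegate to `DeviceOperator.moe_gating_top_k(...)` instead of re-implementing gating per quantization path.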
This refactoring aims to improve code organization, maintainability, and consistency across the different quantization- and MoE-related operations.
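The profile-run load balancing mentioned above could, under the same caveat, look roughly like the following (the helper name and signature are assumptions for this example, not the actual `w8a8_mxfp8.py` code):

```python
import torch

def force_load_balance_topk_ids(topk_ids: torch.Tensor,
                                global_num_experts: int) -> torch.Tensor:
    # Hypothetical helper: during profile runs, replace the router's expert
    # choices with uniformly random expert ids so every expert receives
    # roughly equal traffic and peak memory is measured under balanced load.
    return torch.randint_like(topk_ids, low=0, high=global_num_experts)
```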
Fixes #
Does this PR introduce any user-facing change?
No, this PR primarily involves internal refactoring and optimization of device operations and quantization logic. The external API and user-facing behavior should remain unchanged.
How was this patch tested?
CI passed with existing tests. No new tests were added as the changes are internal refactoring and optimization, and existing tests should cover the functionality.