
[Refactor] Use forward mapping instead of reverse mapping in AscendModelSlimConfig #7596

Open
Feng-xiaosuo wants to merge 17 commits into vllm-project:main from Feng-xiaosuo:main

Conversation


@Feng-xiaosuo Feng-xiaosuo commented Mar 24, 2026

What this PR does / why we need it?

This PR refactors the AscendModelSlimConfig class to use forward mapping instead of reverse mapping for quantization config key transformation.

Changes:

  1. Modified apply_vllm_mapper() to directly apply hf_to_vllm_mapper.apply_dict() to transform quant_description keys from HF format to vLLM format
  2. Simplified quant_prefix_mapper() to return the prefix directly (no longer needs mapping since keys are already in vLLM format)
  3. Removed QUANT_MODEL_PREFIX_MAPPINGS dictionary (~50 lines) - no longer needed
  4. Removed get_prefix_mapping() function - no longer needed
  5. Removed vllm_to_hf_mapper attribute - no longer needed
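The refactored flow can be sketched with a minimal, self-contained stand-in. The class and method names follow the PR description, but this `WeightsMapper` is a simplified illustration of a forward (HF → vLLM) prefix rewriter, not vLLM's actual implementation:

```python
class WeightsMapper:
    """Simplified stand-in for vLLM's forward (HF -> vLLM) weights mapper."""

    def __init__(self, orig_to_new_prefix):
        self.orig_to_new_prefix = orig_to_new_prefix

    def _map_name(self, name):
        # Rewrite the first matching prefix; leave unmatched names untouched.
        for old, new in self.orig_to_new_prefix.items():
            if name.startswith(old):
                return new + name[len(old):]
        return name

    def apply_dict(self, d):
        # Transform every key from HF naming to vLLM naming, in place.
        for key in list(d):
            new_key = self._map_name(key)
            if new_key != key:
                d[new_key] = d.pop(key)


class AscendModelSlimConfig:
    def __init__(self, quant_description):
        self.quant_description = quant_description
        self.hf_to_vllm_mapper = None
        self._mapper_applied = False

    def apply_vllm_mapper(self, hf_to_vllm_mapper):
        # Apply the forward mapping once; the keys then stay in vLLM format.
        if self._mapper_applied:
            return
        hf_to_vllm_mapper.apply_dict(self.quant_description)
        self.hf_to_vllm_mapper = hf_to_vllm_mapper
        self._mapper_applied = True

    def quant_prefix_mapper(self, prefix):
        # Keys are already in vLLM format, so no reverse lookup is needed.
        return prefix
```

With the keys transformed once up front, later lookups by vLLM-format prefix need no per-call translation.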

Why this change is needed:

The previous implementation used reverse mapping (vLLM → HF) which had several issues:

  • Some keys might not be used in the forward direction but would be incorrectly used in reverse
  • Empty values in the mapping would cause issues when reversed
  • Required maintaining a separate QUANT_MODEL_PREFIX_MAPPINGS dict that duplicated information already available in vLLM's model-specific WeightsMapper

The new approach:

  • Uses the forward mapping (HF → vLLM) directly from vLLM's WeightsMapper
  • Eliminates the need for duplicate mapping definitions
  • Avoids issues with reverse mapping (unused keys, empty values)
  • Aligns with how compressed_tensors_config.py handles the same scenario
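The fragility of naive reversal can be shown with a toy prefix table (illustrative only, not the actual vLLM mapping data): entries with empty replacement values become a match-everything key when inverted, and many-to-one forward entries silently collapse.

```python
# Problem 1: empty replacement values. "" is a legitimate forward entry
# ("strip this prefix"), but after naive inversion it becomes a key that
# matches *every* name via startswith("").
forward = {
    "model.": "language_model.model.",
    "lm_head.": "language_model.lm_head.",
    "prefix_to_strip.": "",
}
reverse = {v: k for k, v in forward.items()}
assert reverse[""] == "prefix_to_strip."
# A reverse lookup falling through to "" would wrongly prepend
# "prefix_to_strip." to names that never carried it.

# Problem 2: many-to-one forward entries collapse when inverted,
# silently losing one of the mappings.
forward2 = {"visual.": "vision_model.", "vision_tower.": "vision_model."}
reverse2 = {v: k for k, v in forward2.items()}
assert len(reverse2) == 1  # one mapping was dropped
```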

Does this PR introduce any user-facing change?

No. This is an internal refactoring that does not change any user-facing API or behavior.

How was this patch tested?

  • The changes were validated using an offline deployment script to launch and verify multiple multimodal models, including Qwen2.5-VL, Qwen3-VL, Kimi2.5, and Qwen-Omni.
  • vLLM version: v0.18.0
  • vLLM main: vllm-project/vllm@ed359c4

Matrix_K and others added 16 commits March 6, 2026 13:52
…he quantization layer name

Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
Signed-off-by: Matrix_K <zhangke144@huawei.com>
Co-authored-by: Wang Kunpeng <1289706727@qq.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the AscendModelSlimConfig class to enhance the handling of quantization configuration keys. It transitions from an error-prone reverse mapping approach (vLLM to HF) to a more direct and robust forward mapping strategy (HF to vLLM). This change streamlines the codebase by eliminating redundant mapping definitions and resolves issues associated with unused or empty keys in the previous reverse mapping, aligning the process with existing vLLM mechanisms.

Highlights

  • Forward Mapping for Quantization Config: Modified apply_vllm_mapper() to directly apply hf_to_vllm_mapper.apply_dict() to transform quant_description keys from HF format to vLLM format, eliminating the need for reverse mapping.
  • Simplified Prefix Mapper: Simplified quant_prefix_mapper() to return the prefix directly, as keys are now consistently in vLLM format and no longer require complex mapping logic.
  • Removed Redundant Mapping Data: Removed the QUANT_MODEL_PREFIX_MAPPINGS dictionary, which was approximately 50 lines of code, as it is no longer needed with the new forward mapping approach.
  • Removed Unused Function: Removed the get_prefix_mapping() function, which was previously used in conjunction with the now-removed QUANT_MODEL_PREFIX_MAPPINGS.
  • Removed Reverse Mapper Attribute: Removed the vllm_to_hf_mapper attribute, as the system no longer relies on reverse mapping.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a nice refactoring of AscendModelSlimConfig that simplifies the quantization key mapping logic. It correctly switches from a complex reverse mapping approach to a simpler forward mapping, which is applied once to transform the quantization description keys. This eliminates a significant amount of special-cased mapping code and makes the implementation cleaner and less error-prone.

I've found one potential issue in the new implementation of apply_vllm_mapper where calling it with different mappers could lead to incorrect behavior due to in-place modification of the configuration. I've suggested a change to make this more robust.

Comment on lines +420 to 421

```python
if self._mapper_applied and self.hf_to_vllm_mapper is hf_to_vllm_mapper:
    return
```


Severity: high

The current logic for checking if the mapper has been applied is not fully robust. If apply_vllm_mapper is called a second time with a different mapper instance, the condition self.hf_to_vllm_mapper is hf_to_vllm_mapper will be false. The code would then proceed to apply the new mapping on the quant_description keys which have already been transformed, leading to incorrect behavior.

Since apply_dict modifies the quant_description state, re-applying a mapping is not safe. To prevent this potential bug, you should add a more robust check to error out if a different mapper is provided after the first application.

Suggested change

```diff
-if self._mapper_applied and self.hf_to_vllm_mapper is hf_to_vllm_mapper:
-    return
+if self._mapper_applied:
+    if self.hf_to_vllm_mapper is not hf_to_vllm_mapper:
+        raise RuntimeError(
+            "apply_vllm_mapper() has already been called with a different "
+            "mapper. Re-applying the mapping is not supported as it "
+            "modifies the quantization description in-place."
+        )
+    return
```
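The hazard the suggestion guards against can be reproduced with a toy in-place key rewriter (names are illustrative, not the PR's actual code): once the keys have been transformed, applying a second mapper keeps rewriting the already-transformed names.

```python
def apply_dict_inplace(d, orig_to_new_prefix):
    # Toy stand-in for mapper.apply_dict(): rewrite dict keys in place.
    for key in list(d):
        for old, new in orig_to_new_prefix.items():
            if key.startswith(old):
                d[new + key[len(old):]] = d.pop(key)
                break

quant_description = {"model.layers.0.proj": "W8A8"}

# First application: HF -> vLLM, as intended.
apply_dict_inplace(quant_description, {"model.": "language_model.model."})
assert "language_model.model.layers.0.proj" in quant_description

# Second application with a *different* mapper mangles the keys further --
# which is exactly why the suggested guard raises instead of re-applying.
apply_dict_inplace(quant_description, {"language_model.": "mm.language_model."})
assert "mm.language_model.model.layers.0.proj" in quant_description
```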

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@Feng-xiaosuo Feng-xiaosuo force-pushed the main branch 4 times, most recently from f01eab1 to 5199150 on March 25, 2026 01:48
Signed-off-by: Matrix_K <zhangke144@huawei.com>
@MengqingCao MengqingCao added this to the v0.18.0rc1 milestone Mar 25, 2026
@MengqingCao MengqingCao added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 25, 2026

Labels

module:quantization, ready (read for review), ready-for-test (start test by label for PR)


2 participants