
Conversation

@shengliangxu
Contributor

What does this PR do?

Type of change:

Bug fix.

Overview:

This is change set 2 from working on OMNIML-2917.

Two related changes:

  1. When we quantize only the language_model submodule, correctly disable quantization of all other modules so that nothing needs to be hard-coded (see the sketch after this list).

  2. When we export a quantized model to the HF unified format, we currently hard-code the exclusion of "lm_head". With change 1, where the full model is used for export config generation, lm_head is naturally excluded when it is not quantized. Therefore, remove the hard-coded lm_head entry from the exclusion list.
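
For reference, a minimal sketch of the idea behind change 1, written as plain PyTorch module traversal. The attribute names (`weight_quantizer`, `input_quantizer`, `output_quantizer`), the `disable()` method, and the helper names are assumptions for illustration, not the actual ModelOpt implementation:

```python
import torch.nn as nn


def _is_under(name: str, component: str) -> bool:
    """Return True if ``component`` appears as a path segment of ``name``."""
    return component in name.split(".")


def disable_quantization_outside(model: nn.Module, keep: str = "language_model") -> None:
    """Disable quantizers on every module that is not under ``keep``.

    Hypothetical helper for illustration only: it assumes quantized modules
    carry quantizer submodules that expose a ``disable()`` method
    (TensorQuantizer-style); the real ModelOpt API may differ.
    """
    for name, module in model.named_modules():
        if _is_under(name, keep):
            continue  # keep quantization enabled inside the language model
        for attr in ("weight_quantizer", "input_quantizer", "output_quantizer"):
            quantizer = getattr(module, attr, None)
            if quantizer is not None and hasattr(quantizer, "disable"):
                quantizer.disable()  # turn quantization off everywhere else
```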

Testing

Correctly exported Llama 3.1 70B, Qwen3 VL MoE, Nemotron Super, and Llama 4 Scout.

Signed-off-by: Shengliang Xu <[email protected]>

copy-pr-bot bot commented Nov 4, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.



codecov bot commented Nov 4, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.52%. Comparing base (ae915ea) to head (0584694).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #504   +/-   ##
=======================================
  Coverage   73.52%   73.52%           
=======================================
  Files         181      181           
  Lines       18207    18207           
=======================================
  Hits        13387    13387           
  Misses       4820     4820           


@shengliangxu shengliangxu marked this pull request as ready for review November 4, 2025 17:52
@shengliangxu shengliangxu requested review from a team as code owners November 4, 2025 17:52
quantized_state_dict, kv_cache_max_bound, kv_cache_format
)

# Check if any layers are quantized
Collaborator

Just double checking: Has this been handled by the quant config so we don't need to hardcode?

Contributor Author

Yes. If lm_head is not quantized, it ends up in exclude_modules naturally. In addition, models that have language_model as a submodule may have lm_head under language_model instead of under the root model. In those cases, the exclude list should contain xxx.language_model.lm_head rather than the hard-coded "lm_head", which was not correct to begin with.
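
To illustrate the point, a minimal sketch of deriving the exclusion list by walking the full model; the function name `collect_exclude_modules` and the `weight_quantizer` / `is_enabled` checks are assumptions for illustration, not the actual export code:

```python
import torch.nn as nn


def collect_exclude_modules(model: nn.Module) -> list[str]:
    """Collect the names of linear layers that were left unquantized.

    Hypothetical helper for illustration only: a layer counts as quantized
    when it carries an enabled ``weight_quantizer`` attribute; the real
    export code may use different checks.
    """
    excluded = []
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        quantizer = getattr(module, "weight_quantizer", None)
        if quantizer is None or not getattr(quantizer, "is_enabled", False):
            # An unquantized lm_head nested in a submodule is recorded as
            # e.g. "language_model.lm_head", not a hard-coded "lm_head".
            excluded.append(name)
    return excluded
```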
