[OMNIML-2917] handle lm_head and other un-quantized modules correctly #504
Conversation
This is change set 2 from working on OMNIML-2917. Two correlated changes: 1. When we quantize only the language_model submodule, correctly disable quantization of all other modules; nothing needs to be hard-coded. 2. When we export a quantized model to the HF unified format, we currently hard-code the exclusion of "lm_head". With change set 1, where the full model is used for export config generation, lm_head is naturally excluded whenever it is not quantized, so the hard-coded lm_head entry in the exclusion list can be removed. Signed-off-by: Shengliang Xu <[email protected]>
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##             main     #504   +/-   ##
=======================================
  Coverage   73.52%   73.52%
=======================================
  Files         181      181
  Lines       18207    18207
=======================================
  Hits        13387    13387
  Misses       4820     4820
Diff context for the review comment below:

```python
        quantized_state_dict, kv_cache_max_bound, kv_cache_format
    )

    # Check if any layers are quantized
```
Just double checking: Has this been handled by the quant config so we don't need to hardcode?
Yes. If lm_head is not quantized, it will end up in exclude_modules naturally. In addition, models that have the language_model as a submodule may have lm_head under language_model instead of under the root model. In those cases, exclude_modules will contain xxx.language_model.lm_head rather than the hard-coded "lm_head", which was not correct to begin with.
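To illustrate the naming point with a minimal sketch (not the actual export code: the `is_quantized` marker, the `collect_unquantized_modules` helper, and the `ToyVLM` wrapper are hypothetical stand-ins for ModelOpt's quantizer state), when the decoder is nested under `language_model`, the un-quantized head is reported under its full dotted name, so a bare "lm_head" exclusion entry would never match it:

```python
import torch.nn as nn


def collect_unquantized_modules(model: nn.Module) -> list[str]:
    """Return full dotted names of Linear modules with no active quantizer.

    Hypothetical check: real export code inspects ModelOpt quantizer state;
    a plain `is_quantized` attribute stands in for that here.
    """
    return [
        name
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear) and not getattr(module, "is_quantized", False)
    ]


class ToyVLM(nn.Module):
    """Toy wrapper whose decoder (and its lm_head) sits under `language_model`."""

    def __init__(self) -> None:
        super().__init__()
        self.vision_tower = nn.Linear(8, 8)
        self.language_model = nn.Module()
        self.language_model.layers = nn.Linear(8, 8)
        self.language_model.lm_head = nn.Linear(8, 8)


model = ToyVLM()
model.language_model.layers.is_quantized = True  # pretend only the decoder layer was quantized

print(collect_unquantized_modules(model))
# ['vision_tower', 'language_model.lm_head']
# The head appears under its full dotted path, so a hard-coded "lm_head"
# entry in exclude_modules would not match it.
```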
What does this PR do?
Type of change:
Bug fix.
Overview:
This is change set 2 from working on OMNIML-2917.
Two correlated changes:
1. When we quantize only the language_model submodule, correctly disable quantization of all other modules; nothing needs to be hard-coded (see the sketch after this list).
2. When we export a quantized model to the HF unified format, we currently hard-code the exclusion of "lm_head". With change set 1, where the full model is used for export config generation, lm_head is naturally excluded whenever it is not quantized. Therefore, remove the hard-coded lm_head entry from the exclusion list.
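Roughly how change set 1 can be pictured (a sketch under assumptions, not this PR's implementation: `restrict_to_language_model` is a hypothetical helper, and the wildcard-pattern config format with `{"enable": False}` is assumed from ModelOpt-style quant configs): derive the disabled subtrees from the model structure instead of maintaining a hard-coded list.

```python
import copy

import torch.nn as nn


def restrict_to_language_model(quant_cfg: dict, model: nn.Module) -> dict:
    """Disable quantization for every top-level submodule except `language_model`.

    Hypothetical sketch: assumes a ModelOpt-style config whose "quant_cfg"
    entry maps wildcard patterns to quantizer settings and where
    {"enable": False} switches a matching subtree off.
    """
    cfg = copy.deepcopy(quant_cfg)
    rules = cfg.setdefault("quant_cfg", {})
    for name, _ in model.named_children():
        if name != "language_model":
            # e.g. a VLM wrapper with vision_tower and multi_modal_projector
            # children gets "*vision_tower*" and "*multi_modal_projector*"
            # disabled automatically, with no per-model hard-coded list.
            rules[f"*{name}*"] = {"enable": False}
    return cfg
```

Because the export config is then generated from the full model's actual quantization state, anything left un-quantized this way (including a nested lm_head) lands in the exclusion list on its own.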
Testing
Correctly exported Llama 3.1 70B, Qwen3 VL MoE, Nemotron Super, and Llama4 Scout.