Add file to linearize and quantize the gpt-oss models #1831
base: main
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes
Hello @shubhra, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new module designed to enable the linearization and quantization of GPT-OSS models, particularly addressing their Mixture of Experts (MoE) architecture. It provides a mechanism to convert the model's MoE layers into a format suitable for quantization, defines the necessary custom MLP components, and integrates a full workflow for applying FP8 dynamic quantization using a specified recipe and calibration dataset. The overall goal is to facilitate the compression of GPT-OSS models for improved efficiency.
Highlights
- GPT-OSS MoE Linearization: Introduces `convert_model_for_quantization_gptoss` to transform GPT-OSS's fused-expert Mixture of Experts (MoE) layers into a sequential structure of individual `GPTOSSMLP` modules, making them compatible with quantization.
- Custom MLP Implementation: Adds `GPTOSSMLP` to represent individual expert MLPs, including specific activation functions with `clamp` and `sigmoid` operations.
- Sequential MoE Handling: Implements `SequentialGPTOSSMoE` to manage the individual `GPTOSSMLP` experts, copy weights from the original fused MoE, and integrate with the existing router for expert selection.
- FP8 Dynamic Quantization: Demonstrates the application of an `FP8_DYNAMIC` quantization scheme using llmcompressor's `QuantizationModifier`, with specific layers like `lm_head`, `self_attn`, `attn`, `attention`, and `router` ignored.
- Calibration Data Pipeline: Includes a complete pipeline for loading and preprocessing calibration data from the `HuggingFaceH4/ultrachat_200k` dataset for the quantization process; a sketch of this end-to-end flow follows below.
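For orientation, here is a minimal sketch of how that end-to-end flow could look with llm-compressor's published oneshot / `QuantizationModifier` APIs. The model id, output directory, sample counts, and ignore patterns are illustrative assumptions rather than the PR's exact values, and the call to the PR's `convert_model_for_quantization_gptoss` helper is left commented out because its import path lives in the new file.

```python
# Sketch only: assumes standard llm-compressor APIs; values are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "openai/gpt-oss-20b"        # placeholder model id
SAVE_DIR = "gpt-oss-20b-FP8-DYNAMIC"   # placeholder output directory
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Replace the fused MoE blocks with sequential per-expert GPTOSSMLP modules so
# each expert's Linear layers are visible to the quantizer (the PR's helper).
# convert_model_for_quantization_gptoss(model)

# Calibration data from ultrachat_200k, rendered through the chat template
# and tokenized, mirroring llm-compressor's calibration examples.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)
ds = ds.map(
    lambda sample: tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    ),
    remove_columns=ds.column_names,
)

# FP8 dynamic quantization of Linear layers, skipping the head, attention, and router.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*self_attn.*", "re:.*attention.*", "re:.*router.*"],
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```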
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands on the current page.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces a script to linearize and quantize GPT-OSS models by replacing the fused MoE implementation with a sequential one. The approach is sound, but the script has some issues that affect its reusability and performance. Specifically, it contains hardcoded file paths that should be parameterized. The main execution logic should also be placed within an `if __name__ == "__main__":` block. Additionally, there's an opportunity to optimize a loop in the MoE forward pass for better performance during calibration. I've included suggestions to address these points.
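A minimal sketch of the first two suggestions, using argparse to replace the hardcoded paths and guarding the entrypoint so the module can be imported without side effects. Argument names here are illustrative, not the PR's.

```python
# Sketch: parameterized entrypoint with a __main__ guard (argument names are illustrative).
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(description="Linearize and quantize a GPT-OSS model")
    parser.add_argument("--model-id", required=True, help="Model path or Hugging Face hub id")
    parser.add_argument("--save-dir", required=True, help="Where to write the quantized model")
    parser.add_argument("--num-calibration-samples", type=int, default=512)
    args = parser.parse_args()

    # load the model, call convert_model_for_quantization_gptoss, run oneshot, save ...


if __name__ == "__main__":
    main()
```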
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Thanks for your contribution. FYI,
self.intermediate_size = intermediate_size
self.alpha = 1.702
self.limit = 7.0
self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=True)
add dtype preservation here as well
self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=True, dtype=dtype)
self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=True, dtype=dtype)
self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=True, dtype=dtype)
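For context, a sketch of how that dtype suggestion could be threaded through the per-expert MLP. The constructor signature, default dtype, and the clamp/sigmoid activation are illustrative reconstructions from the snippets in this thread, not the PR's exact code.

```python
# Sketch of a GPTOSSMLP expert with dtype preservation (illustrative, not the PR's code).
import torch
import torch.nn as nn


class GPTOSSMLP(nn.Module):
    """Single-expert MLP used to replace one slice of the fused GPT-OSS experts."""

    def __init__(self, hidden_size: int, intermediate_size: int,
                 dtype: torch.dtype = torch.bfloat16):
        super().__init__()
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.alpha = 1.702
        self.limit = 7.0
        # dtype is forwarded to every projection so the replacement layers
        # match the source model instead of defaulting to float32
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=True, dtype=dtype)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=True, dtype=dtype)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=True, dtype=dtype)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # clamp/sigmoid gating as described in the PR highlights (assumed ordering)
        gate = self.gate_proj(x).clamp(max=self.limit)
        up = self.up_proj(x).clamp(min=-self.limit, max=self.limit)
        glu = gate * torch.sigmoid(gate * self.alpha)
        return self.down_proj((up + 1) * glu)
```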
Thanks so much for adding this!!
down_w = dwn[i]  # [I, H]

mlp = self.experts[i]
mlp.gate_proj.weight.data.copy_(gate_w.T)  # [I, H]
Can we use `update_offload_parameter` here?
with align_module_device(experts):
    for expert_index, expert in enumerate(self.experts):
        update_offload_parameter(
            expert.gate_proj,
            "weight",
            experts.gate_up_proj[expert_index, ..., ::2].T,
        )
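Extending that suggestion to all of the expert projections might look like the sketch below. It assumes `align_module_device` and `update_offload_parameter` are importable from `compressed_tensors.utils`, and that the fused module exposes `gate_up_proj` / `gate_up_proj_bias` / `down_proj` / `down_proj_bias` tensors with gate in the even columns and up in the odd columns, as in the slicing above; the attribute names may differ in the PR.

```python
# Sketch: copy fused GPT-OSS expert weights into sequential experts via offload-safe updates.
from compressed_tensors.utils import align_module_device, update_offload_parameter


def copy_expert_weights(sequential_moe, fused_experts) -> None:
    """Copy per-expert weights and biases from the fused experts module into
    the sequential GPTOSSMLP experts without breaking offloaded tensors."""
    with align_module_device(fused_experts):
        for expert_index, expert in enumerate(sequential_moe.experts):
            # gate_up_proj is assumed [num_experts, hidden, 2 * intermediate],
            # gate in even columns and up in odd columns
            update_offload_parameter(
                expert.gate_proj, "weight",
                fused_experts.gate_up_proj[expert_index, ..., ::2].T,
            )
            update_offload_parameter(
                expert.up_proj, "weight",
                fused_experts.gate_up_proj[expert_index, ..., 1::2].T,
            )
            update_offload_parameter(
                expert.down_proj, "weight",
                fused_experts.down_proj[expert_index].T,
            )
            # biases follow the same interleaved layout
            update_offload_parameter(
                expert.gate_proj, "bias",
                fused_experts.gate_up_proj_bias[expert_index, ::2],
            )
            update_offload_parameter(
                expert.up_proj, "bias",
                fused_experts.gate_up_proj_bias[expert_index, 1::2],
            )
            update_offload_parameter(
                expert.down_proj, "bias",
                fused_experts.down_proj_bias[expert_index],
            )
```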
SUMMARY:
Code to linearize and quantize the gpt-oss models