
[examples][awq] Update AWQ examples to stacked recipe pattern #2461

Open
dzhengAP wants to merge 1 commit into vllm-project:main from dzhengAP:dongqi/awq-stacked-examples

Conversation

@dzhengAP
Contributor

@dzhengAP dzhengAP commented Mar 10, 2026

Summary

Updates AWQ examples and README to use the canonical stacked recipe pattern per #2327.
Depends on: #2327 (AWQModifier restructuring to decouple smoothing from quantization)

AWQModifier is a smoothing pre-pass (like SmoothQuantModifier). The correct usage is:

```python
recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
    QuantizationModifier(scheme="W4A16_ASYM", targets=["Linear"], ignore=["lm_head"]),
]
```
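
For context, the examples apply such a recipe through llm-compressor's oneshot entry point. The following is a minimal sketch, assuming current import paths and a placeholder model ID and calibration settings (none of these values are taken from this PR):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder, not the ID used in this PR

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# AWQ smoothing pre-pass, then plain (RTN) weight quantization.
recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
    QuantizationModifier(scheme="W4A16_ASYM", targets=["Linear"], ignore=["lm_head"]),
]

# Calibration settings are illustrative; argument names follow the public examples.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
)
```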

Changes

  • llama_example.py: updated to the explicit [AWQModifier, QuantizationModifier] stack
  • llama_gptq_example.py: new example showing [AWQModifier, GPTQModifier] (recipe sketched below)
  • README.md: documents both stacking patterns
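
A minimal sketch of the stacked recipe the new GPTQ example is built around; the constructor arguments mirror the RTN recipe above and may differ from the merged file:

```python
from llmcompressor.modifiers.awq import AWQModifier
from llmcompressor.modifiers.quantization import GPTQModifier

# AWQ smoothing pre-pass followed by GPTQ weight quantization (higher accuracy than RTN).
recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
    GPTQModifier(scheme="W4A16_ASYM", targets=["Linear"], ignore=["lm_head"]),
]
```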

Related

Part of #2327

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

@mergify mergify bot added the "documentation" label (Improvements or additions to documentation) on Mar 10, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the AWQ (Activation-aware Weight Quantization) examples and documentation to align with a new, explicit stacked recipe pattern. The core change clarifies that AWQModifier functions as a smoothing pre-pass, preparing the model for subsequent weight quantization by another modifier like QuantizationModifier or GPTQModifier. This ensures a more robust and understandable approach to applying AWQ, providing distinct examples for different quantization strategies and improving overall clarity for users.

Highlights

  • Updated AWQ Examples: Existing AWQ examples were updated to follow a new, canonical stacked recipe pattern, clarifying the role of AWQModifier as a pre-pass.
  • New GPTQ Example: A new example (llama_gptq_example.py) was introduced to demonstrate stacking AWQModifier with GPTQModifier for higher accuracy quantization.
  • Documentation Clarity: The README.md was enhanced to clearly explain AWQModifier as a smoothing pre-pass and document both AWQ + QuantizationModifier and AWQ + GPTQModifier stacking patterns.


Changelog
  • examples/awq/README.md
    • Updated the explanation of AWQModifier as a pre-pass and added code examples for stacking it with QuantizationModifier (RTN) and GPTQModifier.
  • examples/awq/llama_example.py
    • Modified the recipe to explicitly include QuantizationModifier after AWQModifier and added the necessary import.
  • examples/awq/llama_gptq_example.py
    • Added a new file that provides a complete example of applying AWQModifier followed by GPTQModifier for model quantization (the closing save/export steps are sketched below).
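
For reference, a complete example of this kind usually finishes by saving the quantized model in compressed form. A minimal sketch, continuing from the loading/oneshot sketches above (the output path is a placeholder, not taken from this PR):

```python
# Save in compressed-tensors form; `save_compressed=True` is the flag the
# llm-compressor examples use for this (treat the exact call as a sketch).
SAVE_DIR = "llama-awq-gptq-w4a16-asym"  # placeholder output directory
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```

The saved directory can then be loaded directly by vLLM, e.g. `LLM(model=SAVE_DIR)` from the `vllm` package.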


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the AWQ examples to use the canonical stacked recipe pattern, clarifying that AWQModifier is a pre-pass combined with a quantization modifier. The llama_example.py and new llama_gptq_example.py provide clear references, and the README.md is more informative. A security audit found no high-severity vulnerabilities; the use of trust_remote_code=True is noted but poses no immediate risk due to the hardcoded, reputable model ID. Minor suggestions were made to improve consistency between the README examples and the actual example files.


```python
recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
```

medium

The AWQModifier in this example recipe is missing the duo_scaling="both" argument, which is present in the corresponding llama_example.py file. For consistency and to showcase a more complete example, it would be beneficial to include it here.

Suggested change
- AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
+ AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"], duo_scaling="both"),


```python
recipe = [
    AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
```

medium

Similar to the previous example, the AWQModifier here is missing the duo_scaling="both" argument which is present in llama_gptq_example.py. Adding it would improve consistency between the documentation and the example code.

Suggested change
- AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"]),
+ AWQModifier(ignore=["lm_head"], scheme="W4A16_ASYM", targets=["Linear"], duo_scaling="both"),

@HDCharles
Collaborator

I thought there was still a lot of work needed to enable this stacked flow, since AWQ does quantization by default. If that's the case, can you mark this as dependent on those PRs?

@dzhengAP
Contributor Author

> I thought there was still a lot of work needed to enable this stacked flow, since AWQ does quantization by default. If that's the case, can you mark this as dependent on those PRs?

Yeah, added Depends on #2327. This PR is intended as the example/doc update that follows the restructuring work.


@kylesayrs kylesayrs left a comment


Great examples!

@brian-dellabetta
Collaborator

Examples should not be modified until the AWQ refactor lands. I'm assuming we will want to update the examples as part of that PR rather than splitting it out into two. A community user is looking into the refactor.


Labels

documentation Improvements or additions to documentation
