
add fp8 block example #2489

Merged

HDCharles merged 4 commits into vllm-project:main from mengniwang95:fp8_block on Mar 25, 2026

add fp8 block example#2489
HDCharles merged 4 commits intovllm-project:mainfrom
mengniwang95:fp8_block

Conversation

@mengniwang95
Contributor

SUMMARY:
add fp8 block example

Depends on auto-round's main branch.

TEST PLAN:

output of the quantized model:

<|begin_of_text|>Hello my name is Ashley and I'm a 21 year old university student. I'm studying a Bachelor of Education (Primary) with a focus on special education. I'm passionate about helping others and making a positive impact on the world.
I have experience working with children and young people, including volunteering at a local school, working as a youth leader at my church, and participating in a community outreach program. I'm confident in my ability to work with children of all ages and abilities, and I'm excited to start

Signed-off-by: Mengni Wang <mengni.wang@intel.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new example that showcases the application of FP8 block quantization to the Llama 3.1 8B Instruct model. This addition provides a practical demonstration of how to configure and execute this specific quantization scheme using the AutoRound modifier within the llmcompressor framework, thereby expanding the available examples for users interested in advanced quantization techniques.

Highlights

  • New FP8 Block Quantization Example: A new example script has been added to demonstrate FP8 block quantization for the Llama 3.1 8B Instruct model using the AutoRound modifier.
  • Documentation Update: The README.md file has been updated to include the newly added FP8 block quantization example in the list of available quantization methods.
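To make the scheme concrete, the sketch below is my own toy illustration of the arithmetic behind FP8 block quantization, not code from this PR (the actual example drives llmcompressor's AutoRound modifier on the full model). Each fixed-size tile of the weight matrix gets one scale that maps its max-abs value onto the FP8 E4M3 dynamic range of ±448; real FP8 additionally rounds values onto the E4M3 grid, which this sketch skips, so the round trip is exact up to float rounding.

```python
# Toy illustration of FP8 block-quantization arithmetic (my own sketch,
# not code from this PR). One scale per block_size x block_size tile maps
# that tile's max-abs weight onto the FP8 E4M3 range of +/-448. Rounding
# onto the actual E4M3 grid is omitted here.

FP8_E4M3_MAX = 448.0

def quantize_blockwise(weight, block_size):
    """Return (quantized, scales) for a 2-D weight matrix (list of lists).

    scales maps a (block_row, block_col) index to that tile's scale;
    quantized values land within +/-448 by construction.
    """
    rows, cols = len(weight), len(weight[0])
    q = [[0.0] * cols for _ in range(rows)]
    scales = {}
    for br in range(0, rows, block_size):
        for bc in range(0, cols, block_size):
            block_max = max(
                abs(weight[r][c])
                for r in range(br, min(br + block_size, rows))
                for c in range(bc, min(bc + block_size, cols))
            )
            scale = block_max / FP8_E4M3_MAX if block_max else 1.0
            scales[(br // block_size, bc // block_size)] = scale
            for r in range(br, min(br + block_size, rows)):
                for c in range(bc, min(bc + block_size, cols)):
                    q[r][c] = weight[r][c] / scale
    return q, scales

def dequantize_blockwise(q, scales, block_size):
    """Invert quantize_blockwise by multiplying each tile by its scale."""
    rows, cols = len(q), len(q[0])
    return [
        [q[r][c] * scales[(r // block_size, c // block_size)] for c in range(cols)]
        for r in range(rows)
    ]

# a 4x4 weight matrix quantized in 2x2 tiles (the real scheme uses 128x128)
w = [[1.0, -2.0, 3.0, 4.0],
     [0.5, 0.25, -1.0, 2.0],
     [8.0, 16.0, 0.1, 0.2],
     [-4.0, 2.0, 0.3, 0.4]]
q, scales = quantize_blockwise(w, 2)
deq = dequantize_blockwise(q, scales, 2)
assert len(scales) == 4  # one scale per 2x2 tile
assert all(abs(deq[r][c] - w[r][c]) < 1e-9 for r in range(4) for c in range(4))
```

The block size (128×128 in the shipped example) trades scale-storage overhead against how tightly each scale fits its local weights.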


@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@mergify mergify bot added the documentation Improvements or additions to documentation label Mar 19, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new example for FP8 block quantization using the AutoRound modifier, and updates the README.md to reflect this addition. The new example script demonstrates the end-to-end process of loading a model, preparing a calibration dataset, applying the quantization algorithm, and saving the resulting model. This is a valuable addition for showcasing the FP8 block quantization capability.

Collaborator

@brian-dellabetta brian-dellabetta left a comment


Hi @mengniwang95 , have you run any benchmarks against an autoround FP8_BLOCK compared to round-to-nearest? We noticed AWQ FP8_BLOCK doesn't perform as well as FP8_DYNAMIC, see discussion starting here. Perhaps this is a place where we can note autoround is a better default choice, at least vs. AWQ?
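For intuition on why scale granularity matters here, a small self-contained sketch (my own toy illustration, not benchmark code from this thread): with a single outlier weight, one per-tensor scale must stretch to cover the outlier and costs resolution everywhere, while per-block scales confine the damage to the outlier's own block. Round-to-nearest on an integer grid stands in for FP8's limited mantissa.

```python
# Toy illustration (not from this PR) of scale granularity with an outlier.
# Round-to-nearest on an integer grid stands in for FP8's limited precision.

FP8_E4M3_MAX = 448.0  # max representable magnitude in FP8 E4M3

def quant_error(values, scale):
    """Sum of squared errors after scaling, rounding, and dequantizing."""
    err = 0.0
    for x in values:
        q = round(x / scale)          # rounding loses the fractional part
        err += (q * scale - x) ** 2
    return err

# seven small weights and one large outlier
flat = [0.01, -0.02, 0.015, 0.01, 400.0, 0.01, -0.01, 0.02]

# per-tensor: a single scale must cover the 400.0 outlier
tensor_scale = max(abs(x) for x in flat) / FP8_E4M3_MAX
tensor_err = quant_error(flat, tensor_scale)

# block-wise: one scale per group of 4 (a stand-in for 128x128 tiles)
block_err = 0.0
for i in range(0, len(flat), 4):
    block = flat[i:i + 4]
    scale = max(abs(x) for x in block) / FP8_E4M3_MAX
    block_err += quant_error(block, scale)

# the outlier-free block keeps its resolution, so total error drops
assert block_err < tensor_err
```

Note that the outlier's own block still quantizes poorly either way; finer scales only protect the rest of the tensor, which is one reason eval differences between these schemes tend to be subtle.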

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs kylesayrs self-assigned this Mar 23, 2026
@mengniwang95
Contributor Author

Hi @brian-dellabetta @kylesayrs,
I ran some benchmarks locally; the results are below (the accuracy numbers fluctuate a bit from run to run):

LLMC AWQ FP8 DYNAMIC:
vllm ({'pretrained': 'Meta-Llama-3-8B-Instruct-awq-fp8-dynamic/', 'dtype': 'auto'}), gen_kwargs: ({}), limit: None, num_fewshot: None, batch_size: 16

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.7544 | ± 0.0119 |
| gsm8k | 3       | strict-match     | 5      | exact_match | 0.7559 | ± 0.0118 |

LLMC AWQ FP8 BLOCK:
vllm ({'pretrained': 'Meta-Llama-3-8B-Instruct-awq-fp8-block/', 'dtype': 'auto'}), gen_kwargs: ({}), limit: None, num_fewshot: None, batch_size: 16

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.7491 | ± 0.0119 |
| gsm8k | 3       | strict-match     | 5      | exact_match | 0.7483 | ± 0.0120 |

AutoRound (AR) RTN:
vllm ({'pretrained': 'Meta-Llama-3-8B-Instruct-FP8-BLOCK-AutoRound/', 'dtype': 'auto'}), gen_kwargs: ({}), limit: None, num_fewshot: None, batch_size: 16

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.7483 | ± 0.0120 |
| gsm8k | 3       | strict-match     | 5      | exact_match | 0.7521 | ± 0.0119 |

AR tuning, iters=200:
vllm ({'pretrained': 'Meta-Llama-3-8B-Instruct-FP8-BLOCK-AutoRound/', 'dtype': 'auto'}), gen_kwargs: ({}), limit: None, num_fewshot: None, batch_size: 16

| Tasks | Version | Filter           | n-shot | Metric      | Value  | Stderr   |
|-------|---------|------------------|--------|-------------|--------|----------|
| gsm8k | 3       | flexible-extract | 5      | exact_match | 0.7589 | ± 0.0118 |
| gsm8k | 3       | strict-match     | 5      | exact_match | 0.7619 | ± 0.0117 |

BTW, the AR results depend on a bug-fix PR; if you want to try them, please install AR from this branch: https://github.com/intel/auto-round/tree/mengni/fp8_fix

@brian-dellabetta
Collaborator

@mengniwang95 thanks for the benchmarks. The differences in these evals are always subtle, but it does seem fair to tell users that AWQ is not a great algorithm for FP8_BLOCK and that AutoRound would be a better default choice? (FP8_BLOCK is also currently unsupported in GPTQ.)

Collaborator

@HDCharles HDCharles left a comment


See comment; otherwise this is good to land imo.

@HDCharles HDCharles added the ready When a PR is ready for review label Mar 25, 2026
@HDCharles HDCharles enabled auto-merge (squash) March 25, 2026 20:24
@HDCharles HDCharles merged commit 891863b into vllm-project:main Mar 25, 2026
13 of 16 checks passed