Conversation
Signed-off-by: Mengni Wang <mengni.wang@intel.com>
Summary of Changes (Gemini Code Assist): This pull request introduces a new example that showcases the application of FP8 block quantization to the Llama 3.1 8B Instruct model. It provides a practical demonstration of how to configure and execute this quantization scheme using the AutoRound modifier.
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this label is required to run the full testing suite, so please add it only once the PR is code complete and local testing has been performed.
Code Review
This pull request introduces a new example for FP8 block quantization using the AutoRound modifier, and updates the README.md to reflect this addition. The new example script demonstrates the end-to-end process of loading a model, preparing a calibration dataset, applying the quantization algorithm, and saving the resulting model. This is a valuable addition for showcasing the FP8 block quantization capability.
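To make concrete what "FP8 block quantization" means here, the round-to-nearest baseline the reviewers compare against can be sketched in a few lines of NumPy. This is an illustrative sketch, not llm-compressor's or AutoRound's implementation; the function names, the crude E4M3 rounding, and the default block size are all assumptions for demonstration.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3


def fp8_e4m3_round(x):
    """Crude E4M3 rounding: keep 3 mantissa bits and clamp to the finite
    range. Subnormals and NaN handling are ignored, which is fine for a
    sketch."""
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    exp = np.floor(np.log2(np.abs(x[nz])))
    quantum = 2.0 ** (exp - 3)  # spacing of representable values near x
    out[nz] = np.round(x[nz] / quantum) * quantum
    return np.clip(out, -FP8_E4M3_MAX, FP8_E4M3_MAX)


def quantize_fp8_block(w, block=128):
    """Round-to-nearest FP8 block quantization: each (block x block) tile
    of the weight matrix gets its own scale so that the tile's max
    magnitude maps onto the E4M3 range. Returns the dequantized weights
    (the values the model would effectively compute with)."""
    out = np.empty_like(w, dtype=np.float64)
    rows, cols = w.shape
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            tile = w[i:i + block, j:j + block]
            scale = np.abs(tile).max() / FP8_E4M3_MAX
            if scale == 0:  # all-zero tile: nothing to quantize
                out[i:i + block, j:j + block] = 0.0
                continue
            out[i:i + block, j:j + block] = fp8_e4m3_round(tile / scale) * scale
    return out
```

AutoRound improves on this baseline by tuning the rounding decisions (the `iters` parameter discussed below) rather than always rounding to nearest; the per-block scaling structure is the same.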
examples/autoround/quantization_w8a8_fp8/llama3.1_block_quant_example.py
brian-dellabetta left a comment:
Hi @mengniwang95, have you run any benchmarks comparing autoround FP8_BLOCK against round-to-nearest? We noticed AWQ FP8_BLOCK doesn't perform as well as FP8_DYNAMIC, see discussion starting here. Perhaps this is a place where we can note that autoround is a better default choice, at least vs. AWQ?
Hi @brian-dellabetta @kylesayrs,
LLMC AWQ FP8 DYNAMIC:
LLMC AWQ FP8 BLOCK:
AR RTN:
AR tuning, iters=200:
BTW, the AR results depend on a bug-fix PR. If you want to try them, please install AR from this branch: https://github.com/intel/auto-round/tree/mengni/fp8_fix
@mengniwang95 thanks for the benchmarks. The differences in these evals are always subtle, but it does seem fair to tell users that AWQ is not a great algorithm for FP8_BLOCK, and that autoround would be a better default choice? (FP8_BLOCK is also currently unsupported in GPTQ.)
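For readers wondering what "choosing autoround over AWQ" looks like in practice, here is a hypothetical recipe fragment in llm-compressor's YAML recipe style. The modifier and field names are assumptions pieced together from this discussion (`AutoRoundModifier`, `iters`) and compressed-tensors quantization arguments (`strategy: block`, `block_structure`); check the merged example script for the exact schema.

```yaml
# Hypothetical sketch, not the exact schema of the merged example:
# AutoRound selected (instead of AWQ) for an FP8 block weight scheme.
quant_stage:
  quant_modifiers:
    AutoRoundModifier:
      iters: 200                  # tuning iterations, as in the benchmarks above
      config_groups:
        group_0:
          targets: ["Linear"]
          weights:
            num_bits: 8
            type: float           # FP8 (E4M3) weights
            strategy: block
            block_structure: [128, 128]
```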
SUMMARY:
Add an FP8 block quantization example; depends on auto-round's main branch.
TEST PLAN:
Output of the quantized model: