Conversation
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Summary of Changes

This pull request introduces support for the DeepSeek V3.2 large language model, enabling its integration and optimization within the framework. It refactors the internal weight-reindexing mechanism by moving it to an external library, yielding a more modular and maintainable codebase. The logic for handling global scales during quantization has also been extended to support diverse model architectures, including the multi-latent attention layers found in DeepSeek V3.2, ensuring accurate and efficient compression.
The quality checks have failed.
Code Review
This pull request introduces support for DeepSeek V3.2, including new model and kernel definitions, and refactors checkpoint reindexing logic by moving it to compressed_tensors. It also updates the global scale fusion logic for quantization. However, there are a few critical issues that need to be addressed, such as a syntax error in ModelConfig's constructor, an example script being repurposed without renaming, and several unused parameters in the DeepseekV32ForCausalLM's forward pass.
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: the ready label is required to run the full testing suite; add it only once the PR is code complete and local testing has been performed.
This pull request has merge conflicts that must be resolved before it can be merged.
SUMMARY:

Prerequisite:

Along with the partner compressed-tensors PR, this PR adds support for compression of DeepSeek V3.2:

- `reindex_fused_weights`, which has been moved to CT (compressed-tensors) in the accompanying PR
- `update_fused_layer_weight_global_scales`, updated to account for multi-latent attention layers and MLP/expert layers by loosening the condition for fusing. The class name is no longer used in the check.

Note that the vast majority of the ~1300 new lines of code introduced in this PR are the DeepSeek V3.2 model definition.
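The idea behind fusing global scales can be sketched as follows. This is an illustrative toy, not the actual `update_fused_layer_weight_global_scales` implementation: layers whose weights are fused into a single kernel at inference time (e.g. q/k/v projections in an attention block, or gate/up projections in an MLP/expert block) must share one global scale, and the convention assumed here is to broadcast the group minimum to every member.

```python
def fuse_global_scales(scales: dict[str, float]) -> dict[str, float]:
    """Return per-module global scales after fusion.

    All modules in a fused group are assigned the minimum scale
    of the group, so the fused kernel can apply a single scale.
    """
    fused = min(scales.values())
    return {name: fused for name in scales}


# Example with a hypothetical multi-latent attention group
# (module names are illustrative only):
fused = fuse_global_scales({"q_a_proj": 2.0, "kv_a_proj_with_mqa": 0.5})
# every member of the group now carries the same scale, 0.5
```

The point of the loosened condition in this PR is that grouping is decided by the modules present, not by the name of the class that owns them, so architectures like DeepSeek V3.2's multi-latent attention are handled without special-casing.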
Resolves #2156
TEST PLAN:

- `update_fused_layer_weight_global_scales`
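The "class name is no longer used in check" change can be illustrated with a hypothetical predicate (names below are illustrative, not the actual llm-compressor API): instead of matching on specific attention-class names, the check only asks whether every candidate module carries a global scale.

```python
class Linear:
    """Stand-in for a quantized linear layer; real code would use
    torch.nn.Linear with an attached quantization parameter."""

    def __init__(self, weight_global_scale=None):
        if weight_global_scale is not None:
            self.weight_global_scale = weight_global_scale


def should_fuse(modules) -> bool:
    # Loosened condition: fuse whenever every module in the group
    # has a global scale, regardless of the owning class's name
    # (DeepseekV32Attention, LlamaAttention, ...).
    return bool(modules) and all(
        hasattr(m, "weight_global_scale") for m in modules
    )
```

A check like this is what lets the same fusion path cover standard attention, multi-latent attention, and MLP/expert groups alike.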