
DeepSeek V3.2 support#2491

Draft
brian-dellabetta wants to merge 82 commits into main from bdellabe/example-ds32-nvfp4

Conversation


@brian-dellabetta brian-dellabetta commented Mar 19, 2026

SUMMARY:
Prerequisite:

Together with the partner compressed-tensors PR, this PR adds support for compressing DeepSeek V3.2:

  • ports the DeepSeek V3.2 model definition, which originated in the model checkpoint on the HF Hub and is not currently available in transformers, and updates it to be compatible with bfloat16 weights, replacing all kernels/caches specific to FP8_BLOCK with bfloat16 versions
  • prunes logic from reindex_fused_weights, which has been moved to compressed-tensors in the accompanying PR
  • updates update_fused_layer_weight_global_scales to account for multi-latent attention layers and MLP/Expert layers by loosening the condition for fusing; the class name is no longer used in the check

Note that the vast majority of the ~1300 new lines of code introduced in this PR are the DeepSeek V3.2 model definition.
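For reviewers unfamiliar with global-scale fusion: when projections that are fused at inference time (e.g. q/k/v in attention, or gate/up in an MLP) carry per-tensor global scales, the fused group must share a single scale so no member overflows the quantized range. A minimal sketch of that idea follows; fuse_global_scales and the dict layout are illustrative, not the actual llm-compressor API:

```python
# Illustrative sketch: projections fused at inference time must share
# one global scale. The function name and dict layout are hypothetical,
# not the actual llm-compressor implementation.

def fuse_global_scales(global_scales: dict[str, float]) -> dict[str, float]:
    """Assign the minimum global scale to every member of a fused group,
    the most conservative choice, so no weight overflows after fusion."""
    fused = min(global_scales.values())
    return {name: fused for name in global_scales}

# Example: three attention projections that are fused at inference time.
scales = {"q_proj": 0.5, "k_proj": 0.25, "v_proj": 0.75}
print(fuse_global_scales(scales))
# Every projection now shares the smallest scale (0.25).
```

Taking the minimum is the conservative direction: a larger scale applied to a tensor calibrated for a smaller one could clip, while the reverse only costs a little precision.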

Resolves #2156

TEST PLAN:

  • unit tests for update_fused_layer_weight_global_scales
  • nightly test for CT reindex_checkpoint
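The loosened fusing condition mentioned above can be illustrated as follows: fusable groups are identified from submodule names rather than from the owning attention module's class name, which is what lets multi-latent attention and MLP/Expert layers participate. This is a hypothetical sketch; the group tuples and find_fusable_groups are illustrative, not the code in this PR:

```python
# Hypothetical sketch of identifying fusable projection groups by
# submodule name rather than by attention-class name, so that DeepSeek's
# multi-latent attention (MLA) and MLP/Expert layers are also covered.

FUSABLE_GROUPS = [
    ("q_proj", "k_proj", "v_proj"),        # standard attention
    ("q_a_proj", "kv_a_proj_with_mqa"),    # DeepSeek multi-latent attention
    ("gate_proj", "up_proj"),              # MLP / expert blocks
]

def find_fusable_groups(submodule_names: list[str]) -> list[tuple[str, ...]]:
    """Return every fusable group whose members are all present,
    with no reference to the owning module's class name."""
    present = set(submodule_names)
    return [g for g in FUSABLE_GROUPS if present.issuperset(g)]

# An MLA layer exposes q_a/kv_a projections but no q/k/v trio:
print(find_fusable_groups(["q_a_proj", "kv_a_proj_with_mqa", "o_proj"]))
# → [('q_a_proj', 'kv_a_proj_with_mqa')]
```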

brian-dellabetta and others added 30 commits January 13, 2026 21:40
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces full support for the DeepSeek V3.2 large language model, enabling its integration and optimization within the framework. It significantly refactors internal weight reindexing mechanisms by leveraging an external library, leading to a more modular and maintainable codebase. Furthermore, the logic for handling global scales during quantization has been enhanced to support diverse model architectures, including the complex multi-latent attention layers found in DeepSeek V3.2, ensuring accurate and efficient compression.

Highlights

  • DeepSeek V3.2 Model Support: Added comprehensive support for the DeepSeek V3.2 model, including its specific architecture, configuration, and custom kernel implementations for quantization. This involves porting model code compatible with bfloat16 weights and handling multi-latent attention layers.
  • Refactored Weight Reindexing Logic: The reindex_fused_weights functionality has been significantly streamlined by offloading core logic to the compressed-tensors library. This change prunes redundant code and centralizes checkpoint reindexing operations.
  • Generalized Global Scale Fusion for Quantization: The utility for updating fused layer weight global scales (update_fused_layer_weight_global_scales) has been generalized. It now dynamically identifies and fuses global scales for various layer types, including MLP/Expert layers, standard attention layers, and DeepSeek's multi-latent attention layers, ensuring consistent quantization behavior.
  • New DeepSeek V3.2 Example: Introduced a new example demonstrating disk offloading and quantization for the DeepSeek V3.2 model, showcasing how to apply NVFP4 and FP8_BLOCK quantization schemes to its MLP and self-attention weights.
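The example described above pairs two schemes in one recipe. A hypothetical sketch of such a recipe is shown below; the stage name, target regexes, and config-group layout are illustrative and may not match the example script in this PR:

```python
# Hypothetical sketch of a mixed-scheme quantization recipe for
# DeepSeek V3.2: NVFP4 on MLP/expert weights, FP8_BLOCK on
# self-attention weights. Regexes and layout are illustrative only.

recipe = {
    "quant_stage": {
        "QuantizationModifier": {
            "ignore": ["lm_head"],
            "config_groups": {
                "group_mlp": {
                    "targets": [r"re:.*(gate|up|down)_proj$"],
                    "scheme": "NVFP4",
                },
                "group_attn": {
                    "targets": [r"re:.*self_attn.*proj$"],
                    "scheme": "FP8_BLOCK",
                },
            },
        }
    }
}

# Sanity checks on the recipe structure:
groups = recipe["quant_stage"]["QuantizationModifier"]["config_groups"]
assert groups["group_mlp"]["scheme"] == "NVFP4"
assert groups["group_attn"]["scheme"] == "FP8_BLOCK"
```

Splitting the schemes this way lets the bandwidth-heavy MLP/expert weights take the more aggressive 4-bit format while attention weights keep 8-bit block precision.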


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              Command              Description
Code Review          /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary      Provides a summary of the current pull request in its current state.
Comment              @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the documentation Improvements or additions to documentation label Mar 19, 2026

mergify bot commented Mar 19, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for DeepSeek V3.2, including new model and kernel definitions, and refactors checkpoint reindexing logic by moving it to compressed_tensors. It also updates the global scale fusion logic for quantization. However, there are a few critical issues that need to be addressed, such as a syntax error in ModelConfig's constructor, an example script being repurposed without renaming, and several unused parameters in the DeepseekV32ForCausalLM's forward pass.

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@mergify mergify bot removed the quality-failed label Mar 19, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.


mergify bot commented Mar 20, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
@brian-dellabetta brian-dellabetta changed the title from [WIP] DeepSeek V3.2 support to DeepSeek V3.2 support Mar 20, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>

mergify bot commented Mar 24, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @brian-dellabetta.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 24, 2026
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>

Labels

documentation (Improvements or additions to documentation), needs-rebase


Development

Successfully merging this pull request may close these issues:

[Bug]: DeepSeek-v3.2 support

1 participant