
[not for land] 92 ddp ct api #2355

Closed
HDCharles wants to merge 9 commits into main from 92_ddp_ct_api

Conversation

@HDCharles (Collaborator)

testing against CT

some specifics to work through as APIs are updated in compressed-tensors

HDCharles marked this pull request as draft February 11, 2026 20:55
@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @HDCharles, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the llmcompressor library by integrating Distributed Data Parallel (DDP) capabilities into the GPTQ quantization process. The changes enable efficient quantization of large language models across multiple devices by distributing the workload, accumulating necessary statistics, and synchronizing the quantized parameters. This allows for improved scalability and performance when applying GPTQ quantization.

Highlights

  • Distributed Quantization Support: Implemented Distributed Data Parallel (DDP) support for GPTQ quantization within the GPTQModifier, allowing quantization to be performed across multiple GPUs.
  • Distributed Hessian Accumulation and Parameter Broadcasting: Refactored the compress_modules method to assign modules to specific ranks, accumulate Hessians in a distributed manner, and broadcast the final quantized parameters to all ranks (see the sketch after this list).
  • New DDP Testing Infrastructure: Introduced run.sh and test_ddp.py scripts to facilitate comprehensive testing of DDP quantization across different model configurations, device maps, and GPU counts, including metric collection and model saving.


Changelog
  • run.sh
    • Added a new shell script to orchestrate DDP quantization experiments, iterating through different model IDs, device maps, and GPU counts.
    • Included logic to execute test_ddp.py with specified parameters and append evaluation metrics to a JSON output file.
  • src/llmcompressor/modifiers/quantization/gptq/base.py
    • Imported torch.distributed to enable DDP functionalities.
    • Refactored the compress_modules method to handle both non-distributed and distributed quantization workflows.
    • Implemented logic for assigning modules to specific ranks, accumulating hessian information across ranks, and broadcasting quantized weights, scales, and zero points.
    • Extracted the core logic for quantizing a single module into a new compress_single_module method for better modularity.
  • test_ddp.py
    • Added a new Python script designed to perform DDP GPTQ quantization using llmcompressor.
    • Included argument parsing for model ID, device map, save directory, and output file for metrics.
    • Integrated model loading with compressed_tensors.offload, dataset loading, tokenization, and the oneshot quantization pipeline.
    • Implemented metric collection for elapsed time and peak GPU memory, gathering these metrics across all DDP ranks.
    • Added functionality for sample generation post-quantization and saving the quantized model and tokenizer.
    • Included logic to export detailed run metrics to a JSON file, handling existing data so new runs are appended rather than overwritten (a minimal sketch of this pattern follows the list).
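
The JSON append behavior mentioned in the last bullet typically looks like the following hedged sketch; the helper name and arguments are illustrative, not taken from the PR.

    import json
    import os

    def append_run_metrics(path, run_metrics):
        # Load previously recorded runs so new results are appended,
        # not overwritten.
        runs = []
        if os.path.exists(path):
            with open(path) as f:
                runs = json.load(f)
        runs.append(run_metrics)
        with open(path, "w") as f:
            json.dump(runs, f, indent=2)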
Activity
  • The pull request is marked as '[not for land]', indicating it is likely for testing or experimental purposes and not intended for immediate merge.
  • The description 'testing against CT' suggests that the changes are being validated against the 'compressed_tensors' library, which is integral to the DDP quantization implementation.
  • A new run.sh script was added to automate the execution of DDP quantization tests with various configurations.
  • A new test_ddp.py script was introduced to perform the actual DDP quantization, collect performance metrics, and save the results.

@mergify bot commented Feb 11, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@gemini-code-assist bot left a comment (Contributor)

Code Review

This pull request introduces support for distributed data parallel (DDP) execution within the GPTQ quantization modifier. The core changes are in src/llmcompressor/modifiers/quantization/gptq/base.py, where logic is added to handle Hessian matrix accumulation and quantized parameter broadcasting across multiple ranks. To support this, a new test script test_ddp.py and a runner script run.sh are added to facilitate experimentation and validation of the DDP implementation. My review identifies a potential correctness issue in the distributed Hessian calculation, along with several opportunities to improve the robustness and clarity of the new test scripts.

Comment on lines +295 to +296
# dist.reduce(n, op=dist.ReduceOp.SUM, dst=target_rank) # REMOVE?
# H/=n # REMOVE?

high

The distributed Hessian accumulation logic appears to be incomplete. The commented-out lines for reducing the number of samples (n) and normalizing the Hessian (H) are crucial for correctly calculating the global average Hessian. Without them, the Hessian on the target_rank is a sum of scaled Hessians (sum(H_i * n_i)) rather than a proper average. This will likely lead to incorrect results from quantize_weight. These lines should be re-enabled to ensure correctness.
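
For concreteness, a minimal sketch of the re-enabled normalization, assuming n is a tensor holding the local sample count and the Hessian sum has already been reduced onto target_rank; names mirror the snippet above, not the exact code in base.py.

    import torch.distributed as dist

    # Also reduce the sample counts so the summed Hessian can be
    # normalized into a global average on the owning rank.
    dist.reduce(n, dst=target_rank, op=dist.ReduceOp.SUM)
    if dist.get_rank() == target_rank:
        H /= n  # H now holds sum(H_i * n_i) / sum(n_i)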

)
elapsed_time = time.time() - start
print(f"\nPipeline took {elapsed_time} seconds, rank={dist.get_rank()}")
peak_memory_gb = torch.cuda.max_memory_allocated() / (1024**3)

high

The call to torch.cuda.max_memory_allocated() will raise an error if the script is executed on a machine without a CUDA-enabled GPU or if PyTorch was built without CUDA support. Since run.sh includes a "cpu" device map configuration, this will cause a crash. You should guard this call with a torch.cuda.is_available() check to ensure the script can run in CPU-only environments.

Suggested change
peak_memory_gb = torch.cuda.max_memory_allocated() / (1024**3)
peak_memory_gb = torch.cuda.max_memory_allocated() / (1024**3) if torch.cuda.is_available() else 0.0
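
A device-agnostic variant of that guard, as a sketch (the helper name is illustrative):

    import torch

    def peak_gpu_memory_gb():
        # Fall back to 0.0 on CPU-only machines, per the suggestion above.
        if not torch.cuda.is_available():
            return 0.0
        return torch.cuda.max_memory_allocated() / (1024 ** 3)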

for MODEL_ID in "${MODEL_IDS[@]}"; do
for DEVICE_MAP in "${DEVICE_MAPS[@]}"; do
for NUM_GPUS in "${GPU_COUNTS[@]}"; do
export SAVE_DIR=$MODEL_ID-$NUM_GPUS-$DEVICE_MAP

medium

The MODEL_ID variable can contain slashes (e.g., "TinyLlama/TinyLlama-1.1B-Chat-v1.0"), which will be included in the SAVE_DIR. This results in the creation of nested directories, which might be unintended. It would be cleaner and safer to sanitize the MODEL_ID by replacing slashes with a different character, like a hyphen, to ensure a flat directory structure for saved models.

Suggested change
export SAVE_DIR=$MODEL_ID-$NUM_GPUS-$DEVICE_MAP
export SAVE_DIR="$(echo "$MODEL_ID" | tr '/' '-')-$NUM_GPUS-$DEVICE_MAP"

self._hessians[module] = H

# delete unneeded info
self._num_samples.pop(module, None)

medium

This line redundantly pops module from self._num_samples. The key was already popped on line 292. Please remove this line.

test_ddp.py Outdated
args = parser.parse_args()

### USER API: torchrun --nproc_per_node=2 test_ddp.py --<args or just leave defaults>
args = parser.parse_args()

medium

This line is a redundant call to parser.parse_args(), which was already called on line 50. Please remove it.

test_ddp.py Outdated
### USER API: torchrun --nproc_per_node=2 test_ddp.py --<args or just leave defaults>
args = parser.parse_args()

from compressed_tensors.offload import offload_model

medium

This is a duplicate import of offload_model, which was already imported on line 6. Please remove this line.

@mergify bot removed the quality-failed label Feb 13, 2026
@mergify bot commented Feb 13, 2026

The quality checks have failed. Please run make style and make quality under
the root directory to address the lint failures. You will need to install the
dev optional dependencies to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md
