
[Feature] Allow targeting multiples of sequential targets #2493

Open
aayush7511 wants to merge 2 commits into vllm-project:main from aayush7511:feat/targets-per-subgraph

Conversation

@aayush7511

SUMMARY:
Closes #2481

TEST PLAN:
"please outline how the changes were tested"

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the model tracing functionality by enabling the grouping of multiple sequential targets into a single subgraph. This new capability provides users with finer control over the granularity of subgraphs, allowing for optimization of memory usage and execution speed based on specific model architectures and hardware constraints.

Highlights

  • New Configuration Option: Introduced a new configuration option, sequential_targets_per_subgraph, to control the number of sequential targets grouped into a single subgraph during model tracing.
  • Core Tracing Logic Update: Modified core tracing utilities (trace_subgraphs, topological_partition) to accept and utilize this new parameter, enabling more flexible subgraph generation based on user-defined grouping.
  • Pipeline and Debug Script Integration: Integrated the new subgraph grouping capability into the sequential pipeline and the tracing debug script, allowing users to leverage this feature for optimized model processing.
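Taken together, the highlights describe a simple counting scheme: walk the target nodes in order and start a new partition after every `targets_per_subgraph`-th target. Below is a minimal, self-contained sketch of that scheme with illustrative names only — it is not the actual `topological_partition` implementation, which also has to respect topological order and assign non-target nodes:

```python
def group_targets(target_names, targets_per_subgraph=1):
    """Group consecutive targets so each partition holds up to
    `targets_per_subgraph` of them (illustrative sketch only)."""
    if targets_per_subgraph <= 0:
        raise ValueError("targets_per_subgraph must be a positive integer")
    partitions = [[]]
    targets_seen = 0  # number of targets encountered so far
    for name in target_names:
        partitions[-1].append(name)
        targets_seen += 1
        # close out the current partition once it holds enough targets
        if targets_seen % targets_per_subgraph == 0:
            partitions.append([])
    if not partitions[-1]:
        partitions.pop()  # avoid a trailing empty partition
    return partitions

print(group_targets(["layer0", "layer1", "layer2"], targets_per_subgraph=2))
# [['layer0', 'layer1'], ['layer2']]
```

Note the trailing remainder: when the target count is not a multiple of `targets_per_subgraph`, the last partition is simply smaller.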


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new sequential_targets_per_subgraph parameter, allowing users to control the number of sequential targets grouped into each subgraph during model tracing. The changes involve adding this parameter to DatasetArguments, trace_subgraphs, and topological_partition functions, and modifying the partitioning logic accordingly. Review comments highlight a critical TypeError bug where the DatasetArguments field object is passed instead of its value, a potential ZeroDivisionError if targets_per_subgraph is non-positive, and several style and formatting inconsistencies (docstring, comments, function call alignment, line length) that need to be addressed for improved code quality and readability.

trust_remote_code=args.trust_remote_code,
skip_weights=args.skip_weights,
device_map=args.device_map,
targets_per_subgraph=DatasetArguments.sequential_targets_per_subgraph

critical

This is passing the field object from the DatasetArguments dataclass instead of the integer value from the parsed command-line arguments. This will cause a TypeError at runtime. You should use args.targets_per_subgraph to get the value provided by the user.

Suggested change
targets_per_subgraph=DatasetArguments.sequential_targets_per_subgraph
targets_per_subgraph=args.targets_per_subgraph
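The distinction the reviewer is drawing can be reproduced in a few lines. This is a minimal illustration, not the actual llm-compressor dataclass: attribute access on the class returns the class-level default rather than the value parsed for the current run (and, depending on how the field is declared — e.g. with a `default_factory` — class access can fail or expose a `Field` object instead):

```python
from dataclasses import dataclass

# Illustrative stand-in for the real DatasetArguments dataclass.
@dataclass
class DatasetArguments:
    sequential_targets_per_subgraph: int = 1

# Simulates arguments parsed for one run.
args = DatasetArguments(sequential_targets_per_subgraph=4)

print(DatasetArguments.sequential_targets_per_subgraph)  # 1 (class default)
print(args.sequential_targets_per_subgraph)              # 4 (per-run value)
```

Reading through the instance (`args.…`) is the only way to get what the user actually passed.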

Comment on lines +102 to +103
:param targets_per_subgraph: number of targets to include per subgraph


medium

The docstring has an extra blank line after the new parameter, which is inconsistent with the rest of the docstring formatting in this file.

Suggested change (keep the parameter line and drop the blank line that follows it):
:param targets_per_subgraph: number of targets to include per subgraph

for node in graph.graph.nodes
}
partition_index = 0 # global counter
targets_seen = 0 #global counter

medium

This comment doesn't follow PEP 8 spacing guidelines (a space is required after '#'). Also, the comment could be more descriptive of what targets_seen tracks.

Suggested change
targets_seen = 0 #global counter
targets_seen = 0 # number of targets encountered so far

partitions.append([])
targets_seen += 1

if(targets_seen % targets_per_subgraph == 0):

medium

This if statement has a style issue: the condition is unnecessarily wrapped in parentheses. According to PEP 8, parentheses are not needed for if conditions.

Suggested change
if(targets_seen % targets_per_subgraph == 0):
if targets_seen % targets_per_subgraph == 0:

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

… per subgraph

Signed-off-by: Ayush <aayush7511@gmail.com>
Signed-off-by: Ayush <aayush7511@gmail.com>
@aayush7511 aayush7511 force-pushed the feat/targets-per-subgraph branch from ecb7723 to 50e8b9a Compare March 20, 2026 03:46
@aayush7511 aayush7511 marked this pull request as ready for review March 20, 2026 05:43
Collaborator

@kylesayrs kylesayrs left a comment


Looks great! Please consider adding some tests to verify that each subgraph has the submodules you expect (see Subgraph.submodules).

}
partition_index = 0 # global counter

targets_seen = 0 # number of targets encountered so far

Suggested change
targets_seen = 0 # number of targets encountered so far
targets_seen = 0  # number of targets encountered so far

assert graph_is_well_formed(graph.graph)
target_nodes = find_target_nodes(graph, targets)

if(targets_per_subgraph <= 0):

Suggested change
if(targets_per_subgraph <= 0):
if targets_per_subgraph <= 0:

@aayush7511
Author

Hi @kylesayrs,

To test whether each subgraph has the necessary targets, I'm generating subgraphs using trace_subgraphs. Currently trace_subgraphs accesses model.config, so it requires a model with a config to run:

src/llmcompressor/pipelines/sequential/helpers.py

with contextlib.ExitStack() as stack:
    # calibration context
    stack.enter_context(calibration_forward_context(model))
    stack.enter_context(HooksMixin.disable_hooks())

    # flags useful for tracing
    stack.enter_context(patch_attr(model.config, "_attn_implementation", "eager"))
    stack.enter_context(patch_attr(torch.compiler, "_is_compiling_flag", True))

    # autowrap forwards
    stack.enter_context(autowrap_forwards(ancestors, ignore))

With the way the current tests are set up, the models are simple torch.nn.Module layers with no config. What model would you recommend using, or should I create a model with a dummy config?

tests/llmcompressor/pipelines/sequential/test_helpers.py

class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU())
        self.fc = torch.nn.Linear(20, 5)

    def forward(self, x):
        x = self.seq(x)
        return self.fc(x)

class DummyModelMultipleSequentialLayers(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(10, 10)
        self.layer2 = torch.nn.Linear(10, 10)
        self.layer3 = torch.nn.Linear(10, 10)
        self.layer4 = torch.nn.Linear(10, 10)
        self.layer5 = torch.nn.Linear(10, 10)
        self.layer6 = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.layer6(x)
        return x
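The test kylesayrs requested could assert on expected groupings directly. A hedged sketch of that shape — in the real test the left-hand side would come from `Subgraph.submodules` as produced by `trace_subgraphs`, whereas here the expected partitions are computed by hand, so everything below is illustrative only (layer names mirror the toy model above):

```python
# Names of the six sequential targets in DummyModelMultipleSequentialLayers.
layer_names = [f"layer{i}" for i in range(1, 7)]

def expected_partitions(names, targets_per_subgraph):
    """Hand-computed expectation: consecutive chunks of the target list."""
    return [
        names[i:i + targets_per_subgraph]
        for i in range(0, len(names), targets_per_subgraph)
    ]

# targets_per_subgraph=2 -> three subgraphs of two layers each
assert expected_partitions(layer_names, 2) == [
    ["layer1", "layer2"], ["layer3", "layer4"], ["layer5", "layer6"],
]
# targets_per_subgraph=4 -> one full subgraph plus a remainder subgraph
assert expected_partitions(layer_names, 4) == [
    ["layer1", "layer2", "layer3", "layer4"], ["layer5", "layer6"],
]
```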

@kylesayrs
Collaborator

@aayush7511 I recommend using Qwen/Qwen3-0.6B on the meta device, similar to these tests.

Collaborator

@brian-dellabetta brian-dellabetta left a comment


Thanks for the contribution! lgtm pending open comments

@mergify
Contributor

mergify bot commented Mar 24, 2026

The quality checks have failed. Please run make style and make quality under the root directory to address the lint failures. You will need to install the dev optional install to get the required linting packages:
https://github.com/vllm-project/llm-compressor/blob/main/CONTRIBUTING.md

@kylesayrs kylesayrs added the ready (When a PR is ready for review) label Mar 24, 2026

Labels

quality-failed, ready (When a PR is ready for review)


Development

Successfully merging this pull request may close these issues.

[Feature] Allow targeting multiples of sequential targets

3 participants