[Feature] Allow targeting multiples of sequential targets #2493
aayush7511 wants to merge 2 commits into vllm-project:main
Conversation
Summary of Changes (Gemini Code Assist)

This pull request introduces a significant enhancement to the model tracing functionality by enabling the grouping of multiple sequential targets into a single subgraph. This new capability gives users finer control over subgraph granularity, allowing memory usage and execution speed to be tuned for specific model architectures and hardware constraints.
Code Review
This pull request introduces a new sequential_targets_per_subgraph parameter, allowing users to control the number of sequential targets grouped into each subgraph during model tracing. The changes involve adding this parameter to DatasetArguments, trace_subgraphs, and topological_partition functions, and modifying the partitioning logic accordingly. Review comments highlight a critical TypeError bug where the DatasetArguments field object is passed instead of its value, a potential ZeroDivisionError if targets_per_subgraph is non-positive, and several style and formatting inconsistencies (docstring, comments, function call alignment, line length) that need to be addressed for improved code quality and readability.
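The grouping behavior described above can be illustrated with a plain-Python sketch. The helper name `group_targets` is hypothetical (the PR implements this inside `topological_partition`), but it captures the intended semantics: with a value of 1 you get the previous one-target-per-subgraph behavior, and larger values produce fewer, larger subgraphs.

```python
def group_targets(targets, targets_per_subgraph=1):
    """Group an ordered list of sequential targets into subgraph-sized chunks."""
    if targets_per_subgraph <= 0:
        raise ValueError("targets_per_subgraph must be a positive integer")

    groups, current = [], []
    for target in targets:
        current.append(target)
        # close the current subgraph once it holds enough targets
        if len(current) % targets_per_subgraph == 0:
            groups.append(current)
            current = []
    if current:  # trailing partial group
        groups.append(current)
    return groups
```

For example, `group_targets(["a", "b", "c", "d", "e"], 2)` yields `[["a", "b"], ["c", "d"], ["e"]]`, with the last subgraph holding the remainder.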
    trust_remote_code=args.trust_remote_code,
    skip_weights=args.skip_weights,
    device_map=args.device_map,
    targets_per_subgraph=DatasetArguments.sequential_targets_per_subgraph
This is passing the field object from the DatasetArguments dataclass instead of the integer value from the parsed command-line arguments. This will cause a TypeError at runtime. You should use args.targets_per_subgraph to get the value provided by the user.
Suggested change:

- targets_per_subgraph=DatasetArguments.sequential_targets_per_subgraph
+ targets_per_subgraph=args.targets_per_subgraph
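The class-vs-instance mixup flagged here is easy to reproduce in isolation. This minimal sketch (where `Args` is a made-up stand-in, not the real `DatasetArguments`) shows that attribute access on the dataclass itself can only ever return the declared default (or a `Field` object, depending on how the field is declared), never the value parsed from the command line:

```python
from dataclasses import dataclass, field

@dataclass
class Args:  # hypothetical stand-in for DatasetArguments
    targets_per_subgraph: int = field(default=1)

parsed = Args(targets_per_subgraph=4)  # as if produced by the argument parser

# Class access yields the declared default; only the parsed instance
# carries the user-supplied value.
assert Args.targets_per_subgraph == 1
assert parsed.targets_per_subgraph == 4
```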
    :param targets_per_subgraph: number of targets to include per subgraph
        for node in graph.graph.nodes
    }
    partition_index = 0  # global counter
    targets_seen = 0  # global counter
            partitions.append([])
            targets_seen += 1

            if(targets_seen % targets_per_subgraph == 0):
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
… per subgraph Signed-off-by: Ayush <aayush7511@gmail.com>
Signed-off-by: Ayush <aayush7511@gmail.com>
aayush7511 force-pushed from ecb7723 to 50e8b9a (Compare)
kylesayrs left a comment
Looks great! Please consider adding some tests to verify that each subgraph has the submodules you expect (see Subgraph.submodules).
    }
    partition_index = 0  # global counter
    targets_seen = 0  # number of targets encountered so far
    assert graph_is_well_formed(graph.graph)
    target_nodes = find_target_nodes(graph, targets)

    if(targets_per_subgraph <= 0):
Suggested change:

- if(targets_per_subgraph <= 0):
+ if targets_per_subgraph <= 0:
Hi @kylesayrs, to test whether each subgraph contains the necessary submodules, I'm generating a subgraph using trace_subgraph. Currently trace_subgraph requires model.config in one of its lines to run:
with contextlib.ExitStack() as stack:
    # calibration context
    stack.enter_context(calibration_forward_context(model))
    stack.enter_context(HooksMixin.disable_hooks())

    # flags useful for tracing
    stack.enter_context(patch_attr(model.config, "_attn_implementation", "eager"))
    stack.enter_context(patch_attr(torch.compiler, "_is_compiling_flag", True))

    # autowrap forwards
    stack.enter_context(autowrap_forwards(ancestors, ignore))

With the way the current tests are set up, the models are simple torch.nn.Module layers with no config. What model would you recommend using, or should I create a model with a dummy config?
class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU())
        self.fc = torch.nn.Linear(20, 5)

    def forward(self, x):
        x = self.seq(x)
        return self.fc(x)


class DummyModelMultipleSequentialLayers(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(10, 10)
        self.layer2 = torch.nn.Linear(10, 10)
        self.layer3 = torch.nn.Linear(10, 10)
        self.layer4 = torch.nn.Linear(10, 10)
        self.layer5 = torch.nn.Linear(10, 10)
        self.layer6 = torch.nn.Linear(10, 10)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.layer6(x)
        return x
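One lightweight possibility, assuming trace_subgraph only reads attributes off `model.config` (such as `_attn_implementation`) rather than requiring a full transformers model, would be to attach a stand-in config object to the dummy module. A minimal sketch of the idea, with `PlainModule` standing in for the torch modules above:

```python
from types import SimpleNamespace

class PlainModule:
    """Stand-in for a torch.nn.Module that has no Hugging Face config."""
    pass

model = PlainModule()
# Attach a minimal object exposing just the attribute the tracer patches,
# so patch_attr(model.config, "_attn_implementation", "eager") has a target.
model.config = SimpleNamespace(_attn_implementation="sdpa")
```

Whether this is enough depends on which other config attributes the tracing path touches; a real `transformers.PretrainedConfig` instance would be the heavier alternative.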
@aayush7511 I recommend using
brian-dellabetta left a comment
Thanks for the contribution! lgtm pending open comments
The quality checks have failed. Please run
SUMMARY:
Closes #2481
TEST PLAN:
"please outline how the changes were tested"