[Tracing] Decouple vision tower from first layer #1710


Open · wants to merge 1 commit into base: main

Conversation

kylesayrs (Collaborator)

Purpose

  • Some models, such as CommandA, have very large vision towers. Until tensor parallelism is implemented or tracing expands to support vision towers, the entire vision tower must be onloaded as one sequential target, which can use a lot of memory. This PR slightly reduces that memory requirement by onloading the first decoder layer separately from the vision tower (see the onloading sketch below).
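
A minimal sketch of why partition size drives peak memory during sequential onloading; `run_partitions` and its structure are hypothetical simplifications, not llm-compressor's actual pipeline:

```python
from typing import Dict, List

from torch import nn

def run_partitions(partitions: List[List[str]], modules: Dict[str, nn.Module], device: str = "cuda"):
    # Peak device memory is bounded by the largest single partition, so
    # splitting the vision tower and the first decoder layer into separate
    # partitions lowers the requirement versus onloading them together.
    for partition in partitions:
        for name in partition:
            modules[name].to(device)  # onload only this partition
        # ... run / calibrate this partition's subgraph here ...
        for name in partition:
            modules[name].to("cpu")   # offload before the next partition
```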

Changes

  • Assign all operations which precede the first sequential target (such as the vision tower) to their own graph partition (see the sketch below)
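
A minimal sketch of the resulting partitioning behavior, assuming a simplified `topological_partition` over node names (the real function operates on a traced torch.fx graph):

```python
from typing import List, Set

def topological_partition(nodes: List[str], targets: Set[str]) -> List[List[str]]:
    """Group topologically ordered nodes into partitions, starting a new
    partition when a sequential target is reached."""
    partitions: List[List[str]] = [[]]
    for node in nodes:
        # split *before* the target so its predecessors (e.g. the vision
        # tower) stay in the previous partition; the emptiness check avoids
        # creating an empty initial partition
        if node in targets and partitions[-1]:
            partitions.append([])
        partitions[-1].append(node)
    return partitions

nodes = ["vision_tower", "multimodal_projector", "decoder_0", "decoder_1", "lm_head"]
targets = {"decoder_0", "decoder_1"}
print(topological_partition(nodes, targets))
# [['vision_tower', 'multimodal_projector'], ['decoder_0'], ['decoder_1', 'lm_head']]
```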

Signed-off-by: Kyle Sayers <[email protected]>
@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented changes to reduce the memory footprint for models with large vision towers, such as CommandA. The core idea is to decouple the first decoder layer from the main vision tower during the tracing process. This is achieved by refining how operations are assigned to graph partitions, ensuring that all operations preceding a sequential target are correctly grouped within their respective partitions before a new partition is initiated. This adjustment helps manage memory more efficiently, especially when dealing with very large model components.

Highlights

  • Memory Optimization for Vision Towers: Modified the graph partitioning logic to allow for better memory management, specifically for large vision towers in models, by ensuring preceding operations are grouped correctly.
  • Refined Topological Partitioning: Adjusted the topological_partition function to reorder how nodes are assigned to partitions and when new partitions are created, preventing empty initial partitions and correctly assigning pre-target operations.

github-actions bot commented Aug 6, 2025

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request aims to reduce memory usage for models with large vision towers by decoupling the vision tower from the first decoder layer during tracing. The core change is in topological_partition where the logic is modified to create a new graph partition before a sequential target node is processed, rather than after. This effectively separates the target node from its predecessors. The logic appears sound and correctly implements the intended behavior. I have one suggestion to improve code readability.
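
For illustration, the reordering described above over a minimal, self-contained loop (a sketch, not the actual llm-compressor source):

```python
nodes = ["vision_tower", "decoder_0", "decoder_1"]
targets = {"decoder_0", "decoder_1"}

# before: a new partition is created *after* the target is assigned, so the
# vision tower and the first decoder layer share partition 0
partitions = [[]]
for node in nodes:
    partitions[-1].append(node)
    if node in targets:
        partitions.append([])
print(partitions)  # [['vision_tower', 'decoder_0'], ['decoder_1'], []]

# after: a new partition is created *before* the target is assigned, so the
# vision tower gets its own partition
partitions = [[]]
for node in nodes:
    if node in targets and partitions[-1]:
        partitions.append([])
    partitions[-1].append(node)
print(partitions)  # [['vision_tower'], ['decoder_0'], ['decoder_1']]
```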

@kylesayrs added the ready (When a PR is ready for review) label on Aug 7, 2025
@brian-dellabetta (Collaborator) left a comment

Neat! Nice and simple. Basically, if it's not in the list of partitions, it will stay on the original device? Do we need to validate anything for this?
