dart,fatev,vispruner for llava1.6,update sparsevlm #426
Conversation
Summary of Changes
Hello @SmudgedWings, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request primarily focuses on updating the token reduction mechanisms (DART, FastV, VisPruner, and SparseVLM) to support the LLaVA 1.6 model architecture. The changes involve adapting the token handling logic to LLaVA 1.6's specific input processing, particularly for vision tokens and multi-resolution images, and refining the overall implementation for improved flexibility and maintainability.
Highlights
- LLaVA 1.6 Compatibility: The core token reduction methods (DART, FastV, and VisPruner) have been updated to ensure full compatibility with LLaVA 1.6 models. This involves adapting how vision token lengths are determined and how image features are processed within these models.
- Dynamic Vision Token Handling: The configuration for SparseVLM now uses a `reduction_ratio` instead of a fixed number of `retained_tokens`. Furthermore, the `vision_token_length` for LLaVA 1.6 models is now dynamically extracted by hooking into the model's `prepare_inputs_labels_for_multimodal` function, providing more flexible and accurate token reduction.
- Enhanced Image Processing for VisPruner: VisPruner has been significantly enhanced to support advanced image merging types (e.g. `spatial_unpad`, `anyres`) and various image aspect ratios. New utility functions have been introduced to handle complex tasks like selecting the best resolution, calculating image patch grid shapes, and unpadding images, which are crucial for LLaVA 1.6's diverse image inputs.
- Code Refinements and Cleanup: General code quality improvements have been made across the codebase, including standardizing parameter naming from `pruning_pars` to `pruning_paras`, removing unused imports, and streamlining the registration of various model hooks for cleaner and more maintainable code.
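The dynamic `vision_token_length` extraction described above boils down to monkey-patching a bound method so that each call records how many tokens the vision encoder contributed. The following is a minimal illustrative sketch, not the PR's actual code: `DummyVLM`, the wrapper signature, and the token counts are assumptions chosen to make the pattern self-contained.

```python
from types import MethodType

def vtoken_length_hook(orig_fn, pruning_paras):
    # Hypothetical sketch: wrap an already-bound method so that, each time
    # it runs, the number of vision tokens it inserted is written into the
    # shared pruning_paras dict for the token-reduction modules to read.
    def wrapper(self, text_len, vision_len):
        result = orig_fn(text_len, vision_len)
        # record how many tokens the vision encoder contributed
        pruning_paras['vision_token_length'] = vision_len
        return result
    return wrapper

class DummyVLM:
    """Stand-in for the LLaVA model, used only to illustrate the patch."""
    def prepare_inputs_labels_for_multimodal(self, text_len, vision_len):
        # pretend the multimodal sequence is text tokens + vision tokens
        return text_len + vision_len

paras = {}
model = DummyVLM()
model.prepare_inputs_labels_for_multimodal = MethodType(
    vtoken_length_hook(model.prepare_inputs_labels_for_multimodal, paras),
    model,
)
total = model.prepare_inputs_labels_for_multimodal(32, 576)
# paras['vision_token_length'] is now 576, total is 608
```

Because the wrapper is installed with `MethodType` on the instance, only this model object is affected; other instances of the class keep the original method.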
Code Review
This pull request updates several token reduction methods (DART, FastV, VisPruner, SparseVLM) to support LLaVA 1.6, particularly its "any-resolution" image feature. This involves significant changes to handle dynamic vision token lengths and new image processing logic. The changes are mostly well-implemented, but there are opportunities for improvement in code reuse, style adherence, and error handling.
```python
            num_patch_width, num_patch_height = get_anyres_image_grid_shape(
                image_sizes[image_idx],
                model_config.image_grid_pinpoints,
                vision_tower_image_size
            )
        except Exception:
            num_patch_width, num_patch_height = 2, 2
        image_feature = image_feature.view(
```
Using a broad `except Exception:` can hide unexpected errors and make debugging more difficult. It's better to catch only the specific exceptions that you anticipate `get_anyres_image_grid_shape` might raise.
By inspecting `get_anyres_image_grid_shape`, potential exceptions include `AssertionError`, `IndexError`, and `ValueError`. Catching these specific exceptions will make the code more robust and prevent masking of other potential bugs.
```python
try:
    num_patch_width, num_patch_height = get_anyres_image_grid_shape(
        image_sizes[image_idx],
        model_config.image_grid_pinpoints,
        vision_tower_image_size
    )
except (AssertionError, IndexError, ValueError):
    num_patch_width, num_patch_height = 2, 2
```

```python
if self.model.__class__.__name__ == 'Llava':
    self.model.vlm_model.prepare_inputs_labels_for_multimodal = MethodType(
        self.vtoken_length_for_llava_hook(
            self.model.vlm_model.prepare_inputs_labels_for_multimodal,
            self.pruning_paras
        ), self.model.vlm_model
    )
```
The logic for handling LLaVA models by monkey-patching prepare_inputs_labels_for_multimodal is also present in llmc/compression/token_reduction/fastv.py. This code duplication can lead to maintenance issues if this logic needs to be updated in the future.
To improve maintainability, consider refactoring this logic into a shared method in the `TokenReductionModule` base class. This would likely involve:
- Moving the `vtoken_length_hook` function (which is also duplicated) to `token_reduction_module.py` or a shared utility file.
- Creating a new method in `TokenReductionModule`, for example `_register_vtoken_length_hook`, that contains this `if/else` block.
- Calling this new method from the `register_reduction_modules` method of both `DART` and `FastV` subclasses.
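The refactor suggested above could look roughly like the sketch below. This is illustrative only: the hook body, class shapes, and method names other than those quoted from the review (`TokenReductionModule`, `_register_vtoken_length_hook`, `register_reduction_modules`) are assumptions, not the repository's actual code.

```python
from types import MethodType

class TokenReductionModule:
    """Illustrative sketch of the suggested base-class refactor."""

    @staticmethod
    def vtoken_length_hook(orig_fn, pruning_paras):
        # Shared wrapper (placeholder logic): record the wrapped call's
        # result as the vision token length on every invocation.
        def wrapper(self, *args, **kwargs):
            out = orig_fn(*args, **kwargs)
            pruning_paras['vision_token_length'] = out
            return out
        return wrapper

    def _register_vtoken_length_hook(self):
        # The if-block currently duplicated in dart.py and fastv.py,
        # now living in exactly one place.
        if self.model.__class__.__name__ == 'Llava':
            vlm = self.model.vlm_model
            vlm.prepare_inputs_labels_for_multimodal = MethodType(
                self.vtoken_length_hook(
                    vlm.prepare_inputs_labels_for_multimodal,
                    self.pruning_paras,
                ),
                vlm,
            )

class DART(TokenReductionModule):
    def register_reduction_modules(self):
        self._register_vtoken_length_hook()  # shared logic, no duplication
        # ... DART-specific reduction hooks would follow ...

class FastV(TokenReductionModule):
    def register_reduction_modules(self):
        self._register_vtoken_length_hook()
        # ... FastV-specific reduction hooks would follow ...
```

With this shape, a future change to the LLaVA detection or the hook installation touches one method instead of two copies that can silently drift apart.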
```python
# Multiply all elements by patch_size
grid_pinpoints = [[dim * patch_size for dim in pair] for pair in grid_pinpoints]
if type(grid_pinpoints) is list:
    possible_resolutions = grid_pinpoints
```
For type checking, it is more idiomatic and robust to use isinstance() rather than type(). isinstance() correctly handles subclasses, which makes the code more flexible.
As per PEP 8: "Object type comparisons should always use isinstance() instead of direct type comparisons."
```python
if isinstance(grid_pinpoints, list):
    possible_resolutions = grid_pinpoints
```
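The practical difference is easy to demonstrate: an exact `type()` comparison rejects subclasses of `list`, while `isinstance()` accepts them. The `GridPinpoints` subclass below is hypothetical, invented only to show the behavior.

```python
class GridPinpoints(list):
    """Hypothetical list subclass, e.g. one carrying extra metadata."""

pinpoints = GridPinpoints([[336, 672], [672, 336]])

# Exact-type check rejects the subclass, so the anyres resolutions
# would silently be skipped for such an input:
print(type(pinpoints) is list)      # False

# isinstance() correctly treats the subclass as a list:
print(isinstance(pinpoints, list))  # True
```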