
Conversation

@zhangbilang (Collaborator)

No description provided.

gemini-code-assist bot left a comment

Summary of Changes

Hello @SmudgedWings, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on updating the token reduction mechanisms (DART, FastV, VisPruner, and SparseVLM) to support the LLaVA 1.6 model architecture. The changes involve adapting the token handling logic to LLaVA 1.6's specific input processing, particularly for vision tokens and multi-resolution images, and refining the overall implementation for improved flexibility and maintainability.

Highlights

  • LLaVA 1.6 Compatibility: The core token reduction methods (DART, FastV, and VisPruner) have been updated to ensure full compatibility with LLaVA 1.6 models. This involves adapting how vision token lengths are determined and how image features are processed within these models.
  • Dynamic Vision Token Handling: The configuration for SparseVLM now uses a reduction_ratio instead of a fixed number of retained_tokens. Furthermore, the vision_token_length for LLaVA 1.6 models is now dynamically extracted by hooking into the model's prepare_inputs_labels_for_multimodal function, providing more flexible and accurate token reduction (a simplified sketch of this hook pattern follows this list).
  • Enhanced Image Processing for VisPruner: VisPruner has been significantly enhanced to support advanced image merging types (e.g., 'spatial_unpad', 'anyres') and various image aspect ratios. New utility functions have been introduced to handle complex tasks like selecting the best resolution, calculating image patch grid shapes, and unpadding images, which are crucial for LLaVA 1.6's diverse image inputs.
  • Code Refinements and Cleanup: General code quality improvements have been made across the codebase, including standardizing parameter naming from pruning_pars to pruning_paras, removing unused imports, and streamlining the registration of various model hooks for cleaner and more maintainable code.
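
Note: the "Dynamic Vision Token Handling" highlight above relies on wrapping LLaVA's prepare_inputs_labels_for_multimodal. The sketch below is a simplified, hypothetical illustration of that hook pattern: it records how many vision tokens were spliced into the sequence so a pruning method can read the count later. The PR's actual hook is vtoken_length_for_llava_hook; the return-tuple layout and the single-image offset arithmetic used here are assumptions, not code from this PR.

    from types import MethodType  # used in the registration pattern shown at the bottom


    def vtoken_length_hook(original_fn, pruning_paras):
        # Wrap a bound prepare_inputs_labels_for_multimodal so the number of
        # vision tokens actually inserted into the sequence is recorded per call.
        def wrapper(self, input_ids, position_ids, attention_mask,
                    past_key_values, labels, images, image_sizes=None):
            text_len = input_ids.shape[1] if input_ids is not None else 0
            outputs = original_fn(input_ids, position_ids, attention_mask,
                                  past_key_values, labels, images, image_sizes)
            inputs_embeds = outputs[4]  # assumed position of the new embeddings
            # Assumption: one <image> placeholder token was expanded, so the extra
            # sequence length relative to the text tokens is the vision token count.
            pruning_paras['vision_token_length'] = inputs_embeds.shape[1] - (text_len - 1)
            return outputs
        return wrapper


    # Hypothetical registration, mirroring the dart.py pattern quoted in the review below:
    # vlm_model.prepare_inputs_labels_for_multimodal = MethodType(
    #     vtoken_length_hook(vlm_model.prepare_inputs_labels_for_multimodal, pruning_paras),
    #     vlm_model,
    # )
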
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot left a comment

Code Review

This pull request updates several token reduction methods (DART, FastV, VisPruner, SparseVLM) to support LLaVA 1.6, particularly its "any-resolution" image feature. This involves significant changes to handle dynamic vision token lengths and new image processing logic. The changes are mostly well-implemented, but there are opportunities for improvement in code reuse, style adherence, and error handling.

Comment on lines +172 to +179
                                num_patch_width, num_patch_height = get_anyres_image_grid_shape(
                                    image_sizes[image_idx],
                                    model_config.image_grid_pinpoints,
                                    vision_tower_image_size
                                )
                            except Exception:
                                num_patch_width, num_patch_height = 2, 2
                            image_feature = image_feature.view(

high

Using a broad except Exception: can hide unexpected errors and make debugging more difficult. It's better to catch only the specific exceptions that you anticipate get_anyres_image_grid_shape might raise.

By inspecting get_anyres_image_grid_shape, potential exceptions include AssertionError, IndexError, and ValueError. Catching these specific exceptions will make the code more robust and prevent masking of other potential bugs.

                            try:
                                num_patch_width, num_patch_height = get_anyres_image_grid_shape(
                                    image_sizes[image_idx],
                                    model_config.image_grid_pinpoints,
                                    vision_tower_image_size
                                )
                            except (AssertionError, IndexError, ValueError):
                                num_patch_width, num_patch_height = 2, 2
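
As context for the exceptions discussed above: an any-resolution helper of this kind chooses the candidate resolution from image_grid_pinpoints that best fits the image and converts it into a patch grid, and malformed inputs (e.g. an empty or mis-shaped pinpoint list) are exactly where it can fail. The sketch below is an illustrative approximation under those assumptions, not the actual get_anyres_image_grid_shape implementation; the fitting and tie-breaking rules shown are simplified.

    def select_grid_shape(image_size, grid_pinpoints, patch_size):
        # Pick the candidate resolution that preserves the most of the image when
        # downscaled (breaking ties by least wasted area), then convert to patches.
        orig_w, orig_h = image_size
        best, best_fit, best_waste = None, -1, float('inf')
        for res_w, res_h in grid_pinpoints:
            scale = min(res_w / orig_w, res_h / orig_h)
            fit = min(int(orig_w * scale), res_w) * min(int(orig_h * scale), res_h)
            waste = res_w * res_h - fit
            if fit > best_fit or (fit == best_fit and waste < best_waste):
                best, best_fit, best_waste = (res_w, res_h), fit, waste
        return best[0] // patch_size, best[1] // patch_size


    print(select_grid_shape((1000, 400), [[336, 672], [672, 336], [672, 672]], 336))
    # (2, 1): a wide 1000x400 image maps best onto a 672x336 canvas, i.e. a 2x1 patch grid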

Comment on lines +92 to +98
        if self.model.__class__.__name__ == 'Llava':
            self.model.vlm_model.prepare_inputs_labels_for_multimodal = MethodType(
                self.vtoken_length_for_llava_hook(
                    self.model.vlm_model.prepare_inputs_labels_for_multimodal,
                    self.pruning_paras
                ), self.model.vlm_model
            )

medium

The logic for handling LLaVA models by monkey-patching prepare_inputs_labels_for_multimodal is also present in llmc/compression/token_reduction/fastv.py. This code duplication can lead to maintenance issues if this logic needs to be updated in the future.

To improve maintainability, consider refactoring this logic into a shared method in the TokenReductionModule base class. This would likely involve:

  1. Moving the vtoken_length_hook function (which is also duplicated) to token_reduction_module.py or a shared utility file.
  2. Creating a new method in TokenReductionModule, for example _register_vtoken_length_hook, that contains this if/else block.
  3. Calling this new method from the register_reduction_modules method of both DART and FastV subclasses (a sketch of this shape follows below).
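
To make the proposed refactor concrete, one possible shape for the shared helper is sketched below. The method name _register_vtoken_length_hook and its placement on the base class come from this review comment, not from the PR; the class body is reduced to a stub and assumes the attributes (model, vlm_model, pruning_paras, vtoken_length_for_llava_hook) seen in the quoted snippet.

    from types import MethodType


    class TokenReductionModule:  # stub standing in for the real base class
        def _register_vtoken_length_hook(self):
            # Shared LLaVA handling: patch prepare_inputs_labels_for_multimodal so
            # the dynamic vision token length lands in self.pruning_paras.
            if self.model.__class__.__name__ != 'Llava':
                return
            vlm = self.model.vlm_model
            vlm.prepare_inputs_labels_for_multimodal = MethodType(
                self.vtoken_length_for_llava_hook(
                    vlm.prepare_inputs_labels_for_multimodal,
                    self.pruning_paras,
                ),
                vlm,
            )


    # DART.register_reduction_modules and FastV.register_reduction_modules would then
    # each call self._register_vtoken_length_hook() instead of repeating the block above.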

    # Multiply all elements by patch_size
    grid_pinpoints = [[dim * patch_size for dim in pair] for pair in grid_pinpoints]
    if type(grid_pinpoints) is list:
        possible_resolutions = grid_pinpoints

medium

For type checking, it is more idiomatic and robust to use isinstance() rather than type(). isinstance() correctly handles subclasses, which makes the code more flexible.

As per PEP 8: "Object type comparisons should always use isinstance() instead of direct type comparisons."

Suggested change
-    if type(grid_pinpoints) is list:
-        possible_resolutions = grid_pinpoints
+    if isinstance(grid_pinpoints, list):
+        possible_resolutions = grid_pinpoints
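
A tiny illustration of the behavioural difference (GridPinpoints is a made-up list subclass used only for this example):

    class GridPinpoints(list):
        # Hypothetical subclass, e.g. carrying metadata about candidate resolutions.
        pass


    pinpoints = GridPinpoints([[336, 672], [672, 336]])
    print(type(pinpoints) is list)      # False: exact type comparison rejects subclasses
    print(isinstance(pinpoints, list))  # True: isinstance accepts list subclasses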

@llmc-reviewer merged commit 774e2c5 into ModelTC:main on Jul 27, 2025. 2 checks passed.