
Conversation

@zhangbilang (Collaborator)

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @SmudgedWings, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request primarily focuses on adapting the sparsevlm token reduction module to be fully compatible with LLaVA-1.6 models. It enhances the mechanism for handling vision token lengths specific to LLaVA-1.6 and introduces a more dynamic, ratio-based token reduction strategy to optimize performance and efficiency.

Highlights

  • LLaVA-1.6 Compatibility: I've updated the input_hook_llava function to correctly extract the vision_token_length from LLaVA-1.6's prepare_inputs_labels_for_multimodal output, ensuring proper integration with the newer model version. This includes passing a llava_next flag to enable specific LLaVA-1.6 handling.
  • Dynamic Token Reduction: New sparse token lists (640, 320, 160) and corresponding logic have been introduced to sparse_token_dict. The attn_postprocess_topk function now accepts a reduction_ratio, allowing for more flexible and dynamic pruning strategies where sparse token values can be interpreted as ratios of the total vision tokens.
  • Code Refinement: The function register_module_pars has been consistently renamed to register_module_paras across the codebase for improved clarity and adherence to naming conventions.
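
The ratio-based lookup described in the highlights can be sketched as follows. This is a minimal, hypothetical sketch: the helper name `select_sparse_token_list`, the absolute counts shown for the 192/128/64 entries, and the exact resolution rule are assumptions, not the actual LLMC code; only the 640/320/160 keys and their float values appear in this diff.

```python
# Sketch of a ratio-aware sparse-token lookup (hypothetical names).
# Keys 192/128/64 mirror the pre-existing LLaVA-1.5 entries (counts
# below are illustrative, not from the PR); 640/320/160 are the new
# LLaVA-1.6 entries, whose values are ratios of the total vision
# tokens rather than absolute counts.
sparse_token_dict = {
    192: [66, 30, 17],   # absolute per-stage token counts (illustrative)
    128: [44, 20, 11],
    64:  [22, 10, 6],
    640: [0.1979],       # ratios of the total vision tokens (from the PR)
    320: [0.0833],
    160: [0.0261],
}

def select_sparse_token_list(retained_tokens, vision_token_length):
    """Return per-stage token budgets, resolving ratio entries to counts."""
    resolved = []
    for entry in sparse_token_dict[retained_tokens]:
        if isinstance(entry, float):
            # Ratio entry (LLaVA-1.6 style): scale by the dynamic
            # vision token length reported by the input hook.
            resolved.append(round(entry * vision_token_length))
        else:
            # Absolute count (LLaVA-1.5 style).
            resolved.append(entry)
    return resolved
```

Because LLaVA-1.6 produces a variable number of vision tokens per image, resolving ratios against the hooked `vision_token_length` at call time is what makes the pruning schedule adapt per input.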

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates sparsevlm to support LLaVA-1.6. The changes introduce new logic to handle dynamic vision token lengths and new configurations for token reduction. The review identified a potential KeyError in attn_postprocess_topk caused by a hard dependency on specific reduction_ratio values. It also flags magic numbers for the token reduction ratios, introduced without explanation, which hurts code clarity and maintainability.

Comment on lines +494 to +495
else:
sparse_token_list = sparse_token_dict[round((1 - reduction_ratio) * 2880)]


Severity: high

This logic is brittle and can lead to a KeyError at runtime. The number 2880 is a magic number, which hurts readability and maintainability.

I suggest refactoring this block to be more robust by finding the closest available key.

    if retained_tokens in [192, 128, 64]:
        sparse_token_list = sparse_token_dict[retained_tokens]
    else:
        # TODO: Replace 2880 with a named constant e.g. LLAVA_1_6_MAX_VISION_TOKENS
        target_key = round((1 - reduction_ratio) * 2880)
        available_new_keys = [160, 320, 640]
        # Find the closest key to handle any reduction_ratio and prevent KeyErrors
        closest_key = min(available_new_keys, key=lambda k: abs(k - target_key))
        sparse_token_list = sparse_token_dict[closest_key]
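
Pulled out on its own, the suggested closest-key fallback can be exercised like this (a standalone sketch; the function and constant names are invented here, while 2880 and the key set come from the suggestion above):

```python
# Standalone demonstration of the closest-key fallback suggested above.
# 2880 is the LLaVA-1.6 maximum vision token count used in the diff;
# resolve_sparse_key and LLAVA_1_6_MAX_VISION_TOKENS are hypothetical names.
LLAVA_1_6_MAX_VISION_TOKENS = 2880
AVAILABLE_NEW_KEYS = [160, 320, 640]

def resolve_sparse_key(reduction_ratio):
    """Map an arbitrary reduction_ratio to the nearest configured key."""
    target_key = round((1 - reduction_ratio) * LLAVA_1_6_MAX_VISION_TOKENS)
    # Taking the closest key prevents a KeyError for off-grid ratios.
    return min(AVAILABLE_NEW_KEYS, key=lambda k: abs(k - target_key))
```

An exact ratio such as 1 - 640/2880 resolves to 640, while an off-grid value like 0.9 (target key 288) falls back to the nearest configured key, 320, instead of raising a KeyError.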

Comment on lines +440 to +442
sparse_token_list_640 = [0.1979]
sparse_token_list_320 = [0.0833]
sparse_token_list_160 = [0.0261]


Severity: medium

The newly added floating-point values for sparse_token_list_* are magic numbers. Their purpose and origin are not clear from the code, which harms readability and maintainability.

Please add comments to explain what these ratios represent and how they were determined. For example:

# Ratios for prune_flag=True, merge_flag=True, derived from internal experiments
sparse_token_list_640 = [0.1979]
# ...

This will help future developers understand and modify this code.
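
A quick arithmetic check illustrates why such a comment is needed. Assuming the floats are fractions of the 2880-token LLaVA-1.6 vision sequence used elsewhere in the diff (an assumption — the PR does not state the base), multiplying each ratio by 2880 does not reproduce the dict key it is stored under:

```python
# Sanity check (assumption: the floats are fractions of the 2880-token
# LLaVA-1.6 vision sequence used elsewhere in the diff).
TOTAL_VISION_TOKENS = 2880
ratios = {640: 0.1979, 320: 0.0833, 160: 0.0261}

for key, ratio in ratios.items():
    kept = round(ratio * TOTAL_VISION_TOKENS)
    print(f"key={key}: {ratio} * {TOTAL_VISION_TOKENS} -> {kept} tokens")
```

The products (570, 240, and 75 tokens respectively) do not match the keys, so the base these ratios apply to is evidently something other than the dict key itself — exactly the kind of ambiguity an explanatory comment would resolve.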

@llmc-reviewer llmc-reviewer merged commit 93afb24 into ModelTC:main Jul 27, 2025
2 checks passed