update sparsevlm for llava1.6 #427
Conversation
Summary of Changes
Hello @SmudgedWings, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request primarily focuses on adapting the sparsevlm token reduction module to be fully compatible with LLaVA-1.6 models. It enhances the mechanism for handling vision token lengths specific to LLaVA-1.6 and introduces a more dynamic, ratio-based token reduction strategy to optimize performance and efficiency.
Highlights
- **LLaVA-1.6 Compatibility:** I've updated the `input_hook_llava` function to correctly extract the `vision_token_length` from LLaVA-1.6's `prepare_inputs_labels_for_multimodal` output, ensuring proper integration with the newer model version. This includes passing a `llava_next` flag to enable specific LLaVA-1.6 handling.
- **Dynamic Token Reduction:** New sparse token lists (640, 320, 160) and corresponding logic have been introduced to `sparse_token_dict`. The `attn_postprocess_topk` function now accepts a `reduction_ratio`, allowing for more flexible and dynamic pruning strategies where sparse token values can be interpreted as ratios of the total vision tokens (see the sketch after this list).
- **Code Refinement:** The function `register_module_pars` has been consistently renamed to `register_module_paras` across the codebase for improved clarity and adherence to naming conventions.
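To make the ratio-based lookup concrete, here is a minimal sketch of how entries in `sparse_token_dict` might be resolved into per-layer token budgets. The names `sparse_token_dict` and `vision_token_length` come from this PR; the helper function, the dictionary values for key 192, and the exact semantics are assumptions for illustration only:

```python
# Minimal sketch (assumed semantics, not the PR's actual code): entries < 1
# are read as ratios of the total vision tokens, entries >= 1 as absolute counts.
sparse_token_dict = {
    192: [96, 48, 24],   # absolute per-layer budgets (values illustrative)
    640: [0.1979],       # ratio-style entries added in this PR
    320: [0.0833],
    160: [0.0261],
}

def resolve_budget(key: int, layer_idx: int, vision_token_length: int) -> int:
    """Turn a sparse-token entry into an absolute per-layer token budget."""
    entry = sparse_token_dict[key][layer_idx]
    if entry < 1:  # interpret as a ratio of the total vision tokens
        return max(1, round(entry * vision_token_length))
    return int(entry)

# e.g. resolve_budget(640, 0, 2880) -> 570 tokens kept
```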
Code Review
This pull request updates `sparsevlm` to support LLaVA 1.6. The changes introduce new logic to handle dynamic vision token lengths and new configurations for token reduction. A potential `KeyError` in `attn_postprocess_topk` due to a hard dependency on specific `reduction_ratio` values was identified. Also, magic numbers for token reduction ratios were introduced without explanation, which impacts code clarity and maintainability.
```python
else:
    sparse_token_list = sparse_token_dict[round((1 - reduction_ratio) * 2880)]
```
This logic is brittle and can lead to a `KeyError` at runtime. The number `2880` is a magic number, which hurts readability and maintainability.
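As a concrete illustration (assuming `sparse_token_dict` defines only the keys mentioned in this PR: 192, 128, 64, 640, 320, and 160):

```python
# Hypothetical repro: an in-between reduction_ratio maps to a missing key.
sparse_token_dict = {k: [0.5] for k in (192, 128, 64, 640, 320, 160)}  # stub values
reduction_ratio = 0.75
key = round((1 - reduction_ratio) * 2880)   # -> 720
sparse_token_list = sparse_token_dict[key]  # KeyError: 720
```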
I suggest refactoring this block to be more robust by finding the closest available key.
```python
if retained_tokens in [192, 128, 64]:
    sparse_token_list = sparse_token_dict[retained_tokens]
else:
    # TODO: Replace 2880 with a named constant, e.g. LLAVA_1_6_MAX_VISION_TOKENS
    target_key = round((1 - reduction_ratio) * 2880)
    available_new_keys = [160, 320, 640]
    # Find the closest key to handle any reduction_ratio and prevent KeyErrors
    closest_key = min(available_new_keys, key=lambda k: abs(k - target_key))
    sparse_token_list = sparse_token_dict[closest_key]
```
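For example, with `reduction_ratio = 0.75` the computed target key is 720; the closest of `[160, 320, 640]` is 640, so the lookup succeeds instead of raising `KeyError`.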
```python
sparse_token_list_640 = [0.1979]
sparse_token_list_320 = [0.0833]
sparse_token_list_160 = [0.0261]
```
The newly added floating-point values for `sparse_token_list_*` are magic numbers. Their purpose and origin are not clear from the code, which harms readability and maintainability.
Please add comments to explain what these ratios represent and how they were determined. For example:

```python
# Ratios for prune_flag=True, merge_flag=True, derived from internal experiments
sparse_token_list_640 = [0.1979]
# ...
```

This will help future developers understand and modify this code.