
Conversation

@chengtao-lv (Collaborator)

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @chengtao-lv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the llmc library's token reduction capabilities by introducing two new methods, DivPrune and MustDrop. These methods are designed to optimize Vision-Language Models by intelligently reducing the number of visual tokens, which can lead to improved inference efficiency and reduced computational overhead.

Highlights

  • New Token Reduction Methods: Introduced two novel token reduction algorithms, DivPrune and MustDrop, expanding the capabilities for optimizing Vision-Language Models (VLMs) within the llmc framework.
  • DivPrune Implementation: DivPrune is a diversity-preserving pruning method for visual tokens. It identifies and removes redundant visual features based on cosine similarity and is integrated as a post-hook for Llava models, modifying inputs after multimodal preparation (a rough sketch of the selection idea follows this list).
  • MustDrop Implementation: MustDrop is a spatial merging technique for visual tokens. It groups and averages similar tokens within defined windows, integrated as a pre-hook into the forward pass of vision model blocks to reduce token count early in the processing pipeline.
  • Module Integration: Both DivPrune and MustDrop are registered as TokenReductionModules within the llmc framework, making them discoverable and configurable for VLM compression tasks.
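
The DivPrune highlight above promises diversity-preserving selection; the following is a hedged sketch of that idea only (greedy max-min selection over cosine distances, assuming that is the underlying strategy; not the PR's actual code, and the function name is illustrative):

import torch

def divprune_select(visual_tokens: torch.Tensor, keep: int) -> torch.Tensor:
    # Greedily keep the token farthest (in cosine distance) from the set
    # already kept, so near-duplicate tokens are dropped first.
    normed = visual_tokens / (visual_tokens.norm(dim=1, keepdim=True) + 1e-8)
    dist = 1.0 - normed @ normed.T  # (N, N) pairwise cosine distance
    selected = [0]                  # arbitrary seed token
    min_dist = dist[0].clone()      # each token's distance to the kept set
    for _ in range(keep - 1):
        idx = int(min_dist.argmax())
        selected.append(idx)
        min_dist = torch.minimum(min_dist, dist[idx])
    return visual_tokens[torch.tensor(selected, device=visual_tokens.device)]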

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces the DivPrune and MustDrop token reduction methods. The review focuses on correctness, maintainability, and robustness; key issues flagged include a potential division by zero, incorrect batch handling, a type-hint mismatch, hardcoded values, and code clarity.

SYS_TOKEN_LEN = pruning_paras['image_token_start_index']
img_feature_len = pruning_paras['image_token_length']
device = inputs_embeds.device
visual_tokens = inputs_embeds[0][SYS_TOKEN_LEN: SYS_TOKEN_LEN + img_feature_len]


critical

The implementation hardcodes the batch index [0], processing only the first sample in a batch. Refactor to correctly handle batches, iterating over the batch dimension or using vectorized operations. This issue is also present in input_hook_llava on line 129.
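
A minimal sketch of the vectorized fix (assuming the image tokens start at the same index for every sample, e.g. a fixed prompt template; the function name is illustrative):

import torch

def extract_visual_tokens(inputs_embeds: torch.Tensor, pruning_paras: dict) -> torch.Tensor:
    # Slice every sample in the batch at once instead of only inputs_embeds[0].
    start = pruning_paras['image_token_start_index']
    length = pruning_paras['image_token_length']
    return inputs_embeds[:, start: start + length]  # (batch, length, hidden)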



def pairwise_cosine_similarity(matrix):
norm_matrix = matrix / matrix.norm(dim=1, keepdim=True)


high

The normalization can lead to division by zero if a row in the matrix is all zeros, resulting in NaN values. Adding a small epsilon to the denominator prevents this.

Suggested change
norm_matrix = matrix / matrix.norm(dim=1, keepdim=True)
norm_matrix = matrix / (matrix.norm(dim=1, keepdim=True) + 1e-8)
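
Alternatively, torch.nn.functional.normalize clamps the norm at eps (1e-12 by default), which avoids NaNs without a manual epsilon; a sketch of the guarded function, with the similarity product shown for completeness:

import torch
import torch.nn.functional as F

def pairwise_cosine_similarity(matrix: torch.Tensor) -> torch.Tensor:
    # F.normalize divides by max(norm, eps), so all-zero rows stay zero
    # instead of becoming NaN.
    norm_matrix = F.normalize(matrix, p=2, dim=1)
    return norm_matrix @ norm_matrix.T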

if len(args) == 0:
return fn(*args, **kwargs)
input_args = args[0]
if hasattr(input_args[0], 'shape') and input_args[0].shape[0] == 1:


high

The condition input_args[0].shape[0] == 1 checks sequence length, not batch size. Check the batch dimension of input_args (which is input_ids) instead.

Suggested change
if hasattr(input_args[0], 'shape') and input_args[0].shape[0] == 1:
if input_args.shape[0] == 1:

# --- adaptive section ---#

n_B, n_H = similarity_map.shape
node_mean = torch.tensor(threshold).cuda(sims.device)


high

Replace hardcoded .cuda() with a device-agnostic approach using the device argument in the tensor constructor.

Suggested change
node_mean = torch.tensor(threshold).cuda(sims.device)
node_mean = torch.tensor(threshold, device=sims.device)

)

def merge(x: torch.Tensor, mode='mean') -> torch.Tensor:
# TODO: num_token_window can be undefined


high

The TODO comment indicates a potential bug where num_token_window could be undefined. Resolve the underlying issue or remove the comment if it's no longer relevant.
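
One generic way to resolve it (a hypothetical sketch; the surrounding code is not quoted here): bind the variable unconditionally before any branch, so the merge closure can never see an unbound name.

def conditional_pooling(feat, threshold, window_size):
    ws_h, ws_w = int(window_size[0]), int(window_size[1])
    # Hypothetical default: bind num_token_window up front so merge()
    # cannot raise NameError when the adaptive branch is skipped.
    num_token_window = ws_h * ws_w
    def merge(x, mode='mean'):
        ...  # reads num_token_window safely from the enclosing scope
    return merge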

Comment on lines +22 to +27
import math
from typing import Callable, Tuple

import numpy as np
import torch.nn.functional as F
from einops import rearrange


medium

Imports should be at the top of the file. Move these imports to the module's top level.

feat: torch.Tensor,
threshold: float,
window_size: Tuple[int, int],
) -> Tuple[Callable, Callable]:


medium

conditional_pooling is type-hinted to return a Tuple[Callable, Callable], but returns only a single callable (merge). Correct the return type.

Suggested change
) -> Tuple[Callable, Callable]:
) -> Callable:


with torch.no_grad():

ws_h, ws_w = int(window_size[0]), int(window_size[1]) # 窗口尺寸,2*2


medium

Translate comments to English.

Suggested change
ws_h, ws_w = int(window_size[0]), int(window_size[1]) # 窗口尺寸,2*2
ws_h, ws_w = int(window_size[0]), int(window_size[1]) # window size, e.g., 2x2

window_size = pruning_paras['window_size']
hidden_states = args[0]
merge = conditional_pooling(hidden_states, spatial_threshold, window_size)
hidden_states, size = merge_wavg(merge, hidden_states, None)


medium

The size variable returned by merge_wavg is unused. Assign it to _ to signal that it's intentionally ignored.

Suggested change
hidden_states, size = merge_wavg(merge, hidden_states, None)
hidden_states, _ = merge_wavg(merge, hidden_states, None)
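
For context, the merging step pools similar tokens inside fixed spatial windows; a hedged sketch of the unconditional version of that idea (MustDrop's actual merge is gated by the similarity threshold, and a square token grid is assumed here):

import torch
from einops import rearrange

def window_mean_merge(tokens: torch.Tensor, grid: int, ws: int = 2) -> torch.Tensor:
    # tokens: (batch, grid*grid, dim), laid out as a square spatial grid.
    x = rearrange(tokens, 'b (h w) d -> b h w d', h=grid, w=grid)
    x = rearrange(x, 'b (h ws1) (w ws2) d -> b (h w) (ws1 ws2) d', ws1=ws, ws2=ws)
    return x.mean(dim=2)  # one averaged token per ws x ws window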


self.model.set_modality('vision')
self.model.find_blocks()
self.model.blocks[1].register_forward_pre_hook(


medium

The block index 1 is hardcoded. Make this a configurable parameter, e.g., by reading it from self.pruning_paras.
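
A minimal sketch of the configurable variant ('block_index' is a hypothetical config key; the default preserves the current behaviour):

def register_pruning_hook(model, pruning_paras, hook_fn):
    block_index = pruning_paras.get('block_index', 1)  # falls back to the current hardcoded value
    model.blocks[block_index].register_forward_pre_hook(hook_fn)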

@helloyongyang merged commit 05eedff into main on Jul 12, 2025 (2 checks passed).
@helloyongyang deleted the vlm branch on July 12, 2025 at 14:11.