
Conversation

@chengtao-lv
Collaborator

No description provided.


@gemini-code-assist bot left a comment

Summary of Changes

Hello @chengtao-lv, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request adds the FastVID token reduction method, a novel approach to optimize the processing of video data within Vision-Language Models. It leverages dynamic segmentation, spatio-temporal pruning, and density-based token merging to reduce computational overhead while preserving critical information, enhancing the efficiency of VLM inference on video inputs.

Highlights

  • New Token Reduction Method: I've introduced a new token reduction method called FastVID specifically designed for Vision-Language Models (VLMs) to efficiently process video inputs.
  • Dynamic Video Token Processing: The FastVID implementation incorporates Dynamic Segmentation (DySeg) for segmenting video frames, Spatio-Temporal Pruning (STPrune) for selecting salient tokens, and Density-based Token Merging (DTM) for aggregating contextual information, all aimed at optimizing token usage.
  • Llava OneVision Integration: The FastVID method is integrated with Llava OneVision models through forward hooks that modify hidden states, attention masks, and position embeddings during inference for improved efficiency (an illustrative sketch follows this list).
  • Configuration Support: A new YAML configuration file (fastvid.yml) has been added to define and control the parameters for the FastVID method, including retention ratio and specific parameters for DySeg, STPrune, and DTM.
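
To make the hook-based integration concrete, here is a minimal, hypothetical sketch of a token-pruning forward pre-hook attached to a decoder layer. The function name make_pruning_pre_hook, the retention_ratio argument, and the norm-based scoring are illustrative assumptions only, not the actual FastVID implementation, which also rewrites attention masks and position embeddings.

    import torch

    def make_pruning_pre_hook(retention_ratio: float):
        # Hypothetical sketch: keep the highest-norm tokens before the layer runs.
        def pre_hook(module, args, kwargs):
            hidden_states = kwargs.get('hidden_states', args[0] if args else None)
            if hidden_states is None or hidden_states.shape[1] <= 1:
                return None  # nothing to prune
            num_keep = max(1, int(hidden_states.shape[1] * retention_ratio))
            scores = hidden_states.norm(dim=-1)  # (batch, seq)
            keep = scores.topk(num_keep, dim=1).indices.sort(dim=1).values
            pruned = torch.gather(
                hidden_states, 1,
                keep.unsqueeze(-1).expand(-1, -1, hidden_states.shape[-1]),
            )
            if 'hidden_states' in kwargs:
                kwargs['hidden_states'] = pruned
                return args, kwargs
            return (pruned, *args[1:]), kwargs
        return pre_hook

    # Usage, assuming `layer` is a decoder block (an nn.Module):
    # handle = layer.register_forward_pre_hook(make_pruning_pre_hook(0.25), with_kwargs=True)
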
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist bot left a comment

Code Review

The code introduces a new token reduction method called FastVID, which incorporates DySeg and STPrune techniques for video processing. It includes the necessary configurations and hook functions to integrate with the Llava architecture. The review focuses on potential numerical instability, redundant operations, and possible duplicate indices.

Comment on lines +278 to +282
cut_indices = (
    torch.unique(torch.cat([cut_indices_topk, cut_indices_cos]))
    .sort()
    .values
)

medium

Consider adding a check to ensure that cut_indices_topk and cut_indices_cos do not contain duplicate indices before concatenating them. This could prevent unexpected behavior if the same index is selected by both methods.

        cut_indices_topk = torch.topk(
            similarity_matrix, pruning_paras['DySeg_c'] - 1, largest=False
        ).indices
        cut_indices_cos = torch.nonzero(
            similarity_matrix < pruning_paras['DySeg_tau'], as_tuple=False
        ).squeeze(1)
        # Ensure no duplicates before concatenation
        cut_indices_topk = torch.unique(cut_indices_topk)
        cut_indices_cos = torch.unique(cut_indices_cos)
        cut_indices = (
            torch.unique(torch.cat([cut_indices_topk, cut_indices_cos]))
            .sort()
            .values
        )
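
For reference, torch.unique sorts and deduplicates its input by default, so the concatenate-then-unique step already produces ordered, duplicate-free cut indices. A tiny standalone check with made-up index tensors:

    import torch

    # Toy stand-ins for cut_indices_topk / cut_indices_cos; index 7 overlaps on purpose.
    cut_indices_topk = torch.tensor([3, 7, 12])
    cut_indices_cos = torch.tensor([7, 1])

    # torch.unique returns sorted, deduplicated values by default (sorted=True),
    # so the trailing .sort() in the original snippet does not change the result.
    cut_indices = torch.unique(torch.cat([cut_indices_topk, cut_indices_cos]))
    print(cut_indices)  # tensor([ 1,  3,  7, 12])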

Comment on lines +375 to +380
density = (-(dist_nearest**2).mean(dim=-1)).exp()
density = (
    density
    + torch.rand(density.shape, device=device_type, dtype=density.dtype)
    * 1e-6
)

medium

Clamping the density can prevent numerical instability if dist_nearest is large enough for the exponential to underflow to zero. Consider using torch.clamp to ensure density stays within a reasonable range.

        density = (-(dist_nearest**2).mean(dim=-1)).exp()
        density = torch.clamp(density, min=1e-9) # avoid zero values
        density = (
            density
            + torch.rand(density.shape, device=device_type, dtype=density.dtype)
            * 1e-6
        )
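
As a small, self-contained illustration of the failure mode the clamp guards against (distance values are made up): when the mean squared nearest-neighbour distance is large, the exponential underflows to exactly zero, and the clamp restores a strictly positive density before the tie-breaking noise is added.

    import torch

    # Made-up nearest-neighbour distances for two tokens (two neighbours each);
    # the second token is far from everything, so exp(-mean(d^2)) underflows to 0.
    dist_nearest = torch.tensor([[1.0, 2.0],
                                 [10.0, 11.0]])

    density = (-(dist_nearest ** 2).mean(dim=-1)).exp()
    print(density)  # tensor([8.2085e-02, 0.0000e+00])

    # Suggested guard: clamp away the exact zero, then add tiny noise to break ties.
    density = torch.clamp(density, min=1e-9)
    density = density + torch.rand_like(density) * 1e-6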

Comment on lines +558 to +564
with torch.inference_mode():
    kwargs['position_ids'].resize_as_(keep_indexs.unsqueeze(0)).copy_(
        keep_indexs.unsqueeze(0).clone()
    )
    kwargs['cache_position'].resize_as_(keep_indexs).copy_(
        keep_indexs.clone()
    )

medium

The .clone() calls on the source tensors passed to .copy_() might be redundant. The .copy_() operation already copies the data from its source, so cloning that source first is unnecessary unless you intend to modify the original tensors later. If they are not meant to be modified, consider removing the .clone() calls for efficiency.

                with torch.inference_mode():
                    kwargs['position_ids'].resize_as_(keep_indexs.unsqueeze(0)).copy_(keep_indexs.unsqueeze(0))
                    kwargs['cache_position'].resize_as_(keep_indexs).copy_(keep_indexs)

                    new_pe0 = position_embeddings[0][:, keep_indexs, :].clone()
                    new_pe1 = position_embeddings[1][:, keep_indexs, :].clone()
                    position_embeddings[0].resize_as_(new_pe0).copy_(new_pe0)
                    position_embeddings[1].resize_as_(new_pe1).copy_(new_pe1)
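
As a self-contained illustration of the in-place resize-and-copy pattern (shapes and values are made up): .copy_() alone transfers the data into the resized buffer, so no extra .clone() of the source is required.

    import torch

    with torch.inference_mode():
        cache_position = torch.arange(10)          # stand-in for kwargs['cache_position']
        keep_indexs = torch.tensor([0, 2, 5, 9])   # stand-in for the retained token indices

        # Shrink the buffer in place and copy the kept indices into it;
        # .copy_() performs the data transfer, so the source needs no .clone().
        cache_position.resize_as_(keep_indexs).copy_(keep_indexs)
        print(cache_position)                      # tensor([0, 2, 5, 9])

        pe = torch.randn(1, 10, 8)                 # stand-in for one position_embeddings tensor
        new_pe = pe[:, keep_indexs, :]             # advanced indexing already materialises a copy
        pe.resize_as_(new_pe).copy_(new_pe)        # safe: new_pe does not share storage with pe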

@gushiqiao merged commit 68e00c5 into main on Jun 26, 2025
2 checks passed
@gushiqiao deleted the vlm branch on June 26, 2025 at 11:47