Conversation

@shihaobai (Collaborator)

No description provided.

@gemini-code-assist (Contributor, bot) left a comment

Summary of Changes

Hello @shihaobai, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily addresses an issue with how the get_vocab_size utility function extracts vocabulary size from model configuration files, specifically enhancing its ability to handle configurations where the vocab_size is nested under an llm_config key, which is common for multimodal models. Additionally, it includes a minor adjustment to an FP8 quantization kernel.

Highlights

  • Enhanced get_vocab_size compatibility: The get_vocab_size utility function in lightllm/utils/config_utils.py has been updated to correctly retrieve the vocabulary size from model configuration files that store it under a nested llm_config key. This ensures compatibility with multimodal models and other configurations where vocab_size is not directly at the top level (a minimal sketch follows these highlights).
  • FP8 Quantization Kernel Adjustment: In lightllm/common/quantization/triton_quant/fp8/fp8act_quant_kernel.py, an additional False boolean argument was passed to the sgl_ops.sgl_per_token_group_quant_fp8 function call. This is likely a minor adjustment to the FP8 activation quantization process.
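
To make the first highlight concrete, here is a minimal sketch of the updated lookup, assuming a get_config_json helper that loads the model's config.json (the helper name is taken from the snippet quoted in the review below; the rest is illustrative, not the exact repository code):

import json
import os


def get_config_json(model_path: str) -> dict:
    # Illustrative loader: reads config.json from the model directory.
    with open(os.path.join(model_path, "config.json")) as f:
        return json.load(f)


def get_vocab_size(model_path: str) -> int:
    config_json = get_config_json(model_path)
    # Multimodal checkpoints often wrap the language-model settings in an
    # "llm_config" sub-dict; otherwise vocab_size sits at the top level.
    if "llm_config" in config_json:
        config_json = config_json["llm_config"]
    return int(config_json["vocab_size"])
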
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@shihaobai shihaobai merged commit 44f3ff4 into main Jul 24, 2025
1 check passed
@gemini-code-assist (Contributor, bot) left a comment

Code Review

This pull request addresses an issue with retrieving vocab_size for multimodal models and includes an update to a quantization kernel function call. The review focuses on improving code maintainability by reducing duplication and enhancing clarity by addressing hardcoded values.

In lightllm/common/quantization/triton_quant/fp8/fp8act_quant_kernel.py, the call gains a trailing boolean argument:

  finfo = torch.finfo(dtype)
  fp8_max, fp8_min = finfo.max, finfo.min
- sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, 1e-10, fp8_min, fp8_max)
+ sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, 1e-10, fp8_min, fp8_max, False)
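
For readers unfamiliar with the kernel, here is a hedged pure-PyTorch sketch of what per-token-group FP8 quantization computes: one scale per contiguous group of group_size elements, derived from the group's maximum magnitude. It illustrates the idea only and makes no claim about the sgl_ops kernel's exact semantics (including the meaning of the new trailing boolean):

import torch


def per_token_group_quant_fp8_ref(x: torch.Tensor, group_size: int, eps: float = 1e-10):
    # Reference sketch, not the optimized kernel: split the last dimension
    # into groups, compute one scale per group, quantize to float8_e4m3fn.
    finfo = torch.finfo(torch.float8_e4m3fn)
    fp8_max = finfo.max
    orig_shape = x.shape
    x = x.reshape(-1, group_size)
    # Map each group's max |x| onto the FP8 representable range; eps avoids
    # a zero scale for all-zero groups.
    amax = x.abs().amax(dim=-1, keepdim=True).clamp(min=eps)
    x_s = amax / fp8_max
    x_q = (x / x_s).clamp(min=-fp8_max, max=fp8_max).to(torch.float8_e4m3fn)
    return x_q.reshape(orig_shape), x_s
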
@gemini-code-assist (Contributor) commented (severity: medium):

The eps value is hardcoded as 1e-10 in the function call, but the function already receives an eps parameter. It's better to use the provided parameter to make the function more flexible and to respect the function's contract. Also, the False argument is a "magic value" which makes the code harder to understand without context. Please consider adding a comment explaining its purpose, or use a named argument if the sgl_ops API supports it.

sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, eps, fp8_min, fp8_max, False) # eps from function param
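
If the sgl_ops binding accepts only positional arguments, a named constant is a lightweight way to remove the magic value; the constant name below is hypothetical, so verify the trailing boolean's actual meaning against the sgl_ops documentation:

# Hypothetical name for the trailing flag; check the sgl_ops docs for its real meaning.
SCALE_LAYOUT_FLAG = False
sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, eps, fp8_min, fp8_max, SCALE_LAYOUT_FLAG)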

Comment on lines +47 to +49
if "llm_config" in config_json:
vocab_size = int(config_json["llm_config"]["vocab_size"])
return vocab_size
@gemini-code-assist (Contributor) commented (severity: medium):

This new block of code duplicates the logic for extracting and casting vocab_size that exists in lines 50-52. This code duplication can make future maintenance more difficult, as any changes would need to be applied in two places. To improve this, you could refactor the function to first select the correct configuration dictionary and then apply the vocab_size extraction logic just once.

config_json = get_config_json(model_path)

# Select the right config dictionary
if "llm_config" in config_json:
    config_json = config_json["llm_config"]

# Extract vocab_size from the selected config
vocab_size = config_json["vocab_size"]
if not isinstance(vocab_size, int):
    vocab_size = int(vocab_size)
return vocab_size
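
To make the two shapes concrete, here are hypothetical config.json contents as the refactored function would see them (field values are illustrative, not taken from any real checkpoint):

# Hypothetical dicts as returned by get_config_json(model_path).
plain_config = {"vocab_size": 32000}  # standard LLM: vocab_size at the top level
multimodal_config = {
    "vision_config": {"hidden_size": 1024},  # illustrative non-LLM section
    "llm_config": {"vocab_size": "32000"},   # nested; may arrive as a string, hence the int() cast
}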

@shihaobai shihaobai deleted the vocab branch July 24, 2025 06:33