fix get_vocab_size for multimodal #984
Conversation
Summary of Changes
Hello @shihaobai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily addresses an issue with how the `get_vocab_size` utility function extracts the vocabulary size from model configuration files, specifically enhancing its ability to handle configurations where `vocab_size` is nested under an `llm_config` key, which is common for multimodal models. Additionally, it includes a minor adjustment to an FP8 quantization kernel.
Highlights
- Enhanced `get_vocab_size` compatibility: The `get_vocab_size` utility function in `lightllm/utils/config_utils.py` has been updated to correctly retrieve the vocabulary size from model configuration files that might store it under a nested `llm_config` key. This ensures compatibility with multimodal models and other configurations where `vocab_size` is not directly at the top level.
- FP8 quantization kernel adjustment: In `lightllm/common/quantization/triton_quant/fp8/fp8act_quant_kernel.py`, an additional `False` boolean argument was passed to the `sgl_ops.sgl_per_token_group_quant_fp8` function call. This is likely a minor adjustment to the FP8 activation quantization process.
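For reference, here is a minimal sketch of the resulting lookup logic. It assumes a `get_config_json` helper that loads the checkpoint's config.json as a dict (the real helper lives in `lightllm/utils/config_utils.py`; details may differ):

```python
import json
import os


def get_config_json(model_path: str) -> dict:
    # Assumed helper: loads the checkpoint's config.json as a dict
    # (the real one is defined in lightllm/utils/config_utils.py).
    with open(os.path.join(model_path, "config.json"), "r") as f:
        return json.load(f)


def get_vocab_size(model_path: str) -> int:
    config_json = get_config_json(model_path)
    # Multimodal checkpoints often nest the language-model settings under "llm_config".
    if "llm_config" in config_json:
        return int(config_json["llm_config"]["vocab_size"])
    # Text-only checkpoints keep vocab_size at the top level.
    return int(config_json["vocab_size"])
```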
Code Review
This pull request addresses an issue with retrieving `vocab_size` for multimodal models and includes an update to a quantization kernel function call. The review focuses on improving code maintainability by reducing duplication and enhancing clarity by addressing hardcoded values.
```diff
 finfo = torch.finfo(dtype)
 fp8_max, fp8_min = finfo.max, finfo.min
-sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, 1e-10, fp8_min, fp8_max)
+sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, 1e-10, fp8_min, fp8_max, False)
```
The `eps` value is hardcoded as `1e-10` in the function call, but the function already receives an `eps` parameter. It's better to use the provided parameter to make the function more flexible and to respect the function's contract. Also, the `False` argument is a "magic value" which makes the code harder to understand without context. Please consider adding a comment explaining its purpose, or use a named argument if the `sgl_ops` API supports it.
```python
sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, eps, fp8_min, fp8_max, False)  # eps from function param
```
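As a hedged illustration of the "named argument or explanatory comment" advice, the flag could be bound to a descriptive local before the call. The name `scale_ue8m0` is an assumption about what the trailing boolean controls, not something confirmed by this PR; verify it against the actual `sgl_ops.sgl_per_token_group_quant_fp8` signature:

```python
# Sketch only: "scale_ue8m0" is a hypothetical name for the trailing boolean;
# check the real sgl_per_token_group_quant_fp8 signature before relying on it.
scale_ue8m0 = False  # keep the standard FP8 scale layout
sgl_ops.sgl_per_token_group_quant_fp8(x, x_q, x_s, group_size, eps, fp8_min, fp8_max, scale_ue8m0)
```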
```diff
+if "llm_config" in config_json:
+    vocab_size = int(config_json["llm_config"]["vocab_size"])
+    return vocab_size
```
This new block of code duplicates the logic for extracting and casting `vocab_size` that exists in lines 50-52. This code duplication can make future maintenance more difficult, as any changes would need to be applied in two places. To improve this, you could refactor the function to first select the correct configuration dictionary and then apply the `vocab_size` extraction logic just once.
```python
config_json = get_config_json(model_path)
# Select the right config dictionary
if "llm_config" in config_json:
    config_json = config_json["llm_config"]
# Extract vocab_size from the selected config
vocab_size = config_json["vocab_size"]
if not isinstance(vocab_size, int):
    vocab_size = int(vocab_size)
return vocab_size
```
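To see why the nested lookup matters, here is an illustrative config.json shape for a multimodal checkpoint; every field value below is made up for the example, not taken from any specific model:

```python
# Illustrative only: a multimodal config.json often nests the language-model
# settings, so a top-level lookup of "vocab_size" would raise a KeyError.
multimodal_config = {
    "model_type": "some_vlm",               # hypothetical
    "vision_config": {"image_size": 448},   # hypothetical
    "llm_config": {
        "vocab_size": 151674,               # hypothetical
        "hidden_size": 4096,                # hypothetical
    },
}

# The refactor above selects the nested dict first, then reads vocab_size once:
config_json = multimodal_config
if "llm_config" in config_json:
    config_json = config_json["llm_config"]
assert int(config_json["vocab_size"]) == 151674
```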