
Fp8 lora dense kernel #35242

Open

yugong333 wants to merge 9 commits into vllm-project:main from yugong333:fp8_lora_dense_new

Conversation

@yugong333 (Contributor) commented Feb 24, 2026

Purpose

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Yu Gong <yu3.gong@gmail.com>
@yugong333 requested a review from jeejeelee as a code owner on February 24, 2026 at 22:55.
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces FP8 LoRA dense kernels, including Triton implementations for shrink and expand operations with FP8 quantization support. The changes are extensive and add new files for FP8 kernel utilities and operations. My review focuses on correctness and maintainability. I've identified a critical bug in lora_shrink_fp8_op.py related to incorrect handling of return values and redundant stride calculations, which could lead to runtime errors. I've also pointed out areas with code duplication in the Triton kernels and confusing function naming, which could impact future maintenance. Addressing these points will improve the robustness and clarity of the new FP8 LoRA implementation.
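For context, the "shrink" half of a LoRA matmul projects the activations down into the low-rank space before the "expand" half projects them back up. The following is a minimal PyTorch reference for what the FP8 shrink computes (a sketch only: the function name is illustrative, per-tensor scaling is assumed, and the PR's actual kernels are fused Triton implementations):

# Minimal PyTorch reference for the FP8 LoRA "shrink" step (hypothetical
# names; per-tensor scaling assumed). The PR's kernels fuse this in Triton;
# this sketch only illustrates the math.
import torch

def lora_shrink_fp8_ref(
    x: torch.Tensor,           # (num_tokens, hidden_size) fp16/bf16 activations
    lora_a_fp8: torch.Tensor,  # (rank, hidden_size) weight, torch.float8_e4m3fn
    a_scale: torch.Tensor,     # scalar per-tensor dequantization scale, fp32
) -> torch.Tensor:
    # Dequantize the FP8 LoRA-A weight, then project the input down to the
    # low-rank space: out = x @ (A * scale)^T, shape (num_tokens, rank).
    a = lora_a_fp8.to(x.dtype) * a_scale.to(x.dtype)
    return x @ a.t()

The expand op is the mirror image: the (num_tokens, rank) output is multiplied by the dequantized LoRA-B weight to return to hidden_size.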

# Excerpt from lora_shrink_fp8_op.py (function body elided in the review
# excerpt; torch is imported at the top of that module):
_LORA_SCALE_PTR_DICT: dict[tuple[int, ...], tuple] = {}


def _get_lora_scale_ptr(lora_scale_weights: list[torch.Tensor], device: torch.device):
    ...

Severity: high

The function _get_lora_scale_ptr is also defined in vllm/lora/ops/triton_ops/lora_expand_fp8_op.py with a different implementation. Having two different functions with the same name can be confusing and lead to maintenance issues.

Consider renaming these functions to be more specific about what they do (e.g., _get_lora_a_scale_ptr and _get_lora_b_scale_ptr) and potentially moving them to a shared utility file if appropriate.
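As a rough illustration, a shared helper in the spirit of this suggestion might look like the sketch below. The name _get_lora_a_scale_ptr follows the reviewer's proposal and the caching pattern mirrors vLLM's existing LoRA pointer utilities; none of this is code from the PR:

# Hypothetical sketch of the proposed shared helper; the name and the
# pointer-tuple cache key are the reviewer's suggestion, not existing code.
import torch

_LORA_A_SCALE_PTR_DICT: dict[tuple[int, ...], torch.Tensor] = {}

def _get_lora_a_scale_ptr(
    lora_a_scale_weights: list[torch.Tensor], device: torch.device
) -> torch.Tensor:
    # Key the cache on the raw data pointers so repeated kernel launches
    # with the same scale tensors reuse one device-side pointer tensor.
    key = tuple(t.data_ptr() for t in lora_a_scale_weights)
    ptr_tensor = _LORA_A_SCALE_PTR_DICT.get(key)
    if ptr_tensor is None:
        ptr_tensor = torch.tensor(key, dtype=torch.int64, device=device)
        _LORA_A_SCALE_PTR_DICT[key] = ptr_tensor
    return ptr_tensor

Note that a cache keyed on data_ptr() values never evicts; like the pattern it mirrors, it relies on the LoRA scale tensors being long-lived.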

Signed-off-by: Yu Gong <yu3.gong@gmail.com>