
[kernel] Recompilation optimization triggered by triton function parameter optimization #7645

Open

cvSoldier wants to merge 8 commits into vllm-project:releases/v0.18.0 from cvSoldier:main
Conversation

@cvSoldier

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

…meter optimization

Signed-off-by: cvSoldier <610496306@qq.com>
@mergify

mergify bot commented Mar 25, 2026

⚠️ The sha of the head commit of this PR conflicts with #7483. Mergify cannot evaluate rules on this PR. Once #7483 is merged or closed, Mergify will resume processing this PR. ⚠️

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request focuses on optimizing Triton kernel performance by strategically reducing recompilations. By adjusting the do_not_specialize lists and removing tl.constexpr annotations for specific parameters in several kernels, the system can avoid costly recompilation steps when these parameters change, leading to more efficient execution, particularly in scenarios with dynamic input characteristics.

Highlights

  • Triton Kernel Optimization: Modified several Triton kernels across various files to prevent unnecessary recompilations by explicitly listing certain parameters in the do_not_specialize decorator argument. This allows the kernels to handle dynamic input shapes or values more efficiently without triggering a new compilation for each variation.
  • Parameter Type Adjustment: Converted several kernel parameters from tl.constexpr (compile-time constants) to regular parameters. This change, in conjunction with do_not_specialize, ensures that the kernel's compiled code can be reused across different parameter values, reducing overhead and improving performance.
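The caching behavior these two bullets rely on can be illustrated with a small, self-contained sketch. This is plain Python simulating a JIT specialization cache, not real Triton: `ToyJit`, its parameters, and the `batch`/`seq_len` names are all hypothetical, but the keying logic mirrors the idea behind Triton's `do_not_specialize` argument.

```python
# Conceptual sketch (NOT real Triton): a toy JIT whose compilation cache is
# keyed on argument values. Arguments listed in do_not_specialize are left
# out of the cache key, so changing them does not trigger a "recompile".

class ToyJit:
    def __init__(self, param_names, do_not_specialize=()):
        self.param_names = param_names
        self.do_not_specialize = set(do_not_specialize)
        self.cache = {}          # specialization key -> "compiled" kernel
        self.compile_count = 0

    def __call__(self, **kwargs):
        # Only arguments NOT in do_not_specialize contribute to the key.
        key = tuple(
            (name, kwargs[name])
            for name in self.param_names
            if name not in self.do_not_specialize
        )
        if key not in self.cache:
            self.compile_count += 1   # a new specialization is "compiled"
            self.cache[key] = object()
        return self.cache[key]

# Specializing on seq_len: every new length costs a fresh compilation.
naive = ToyJit(["batch", "seq_len"])
for n in (128, 256, 512):
    naive(batch=4, seq_len=n)
assert naive.compile_count == 3

# Excluding seq_len from the key: one compilation serves all lengths.
tuned = ToyJit(["batch", "seq_len"], do_not_specialize=["seq_len"])
for n in (128, 256, 512):
    tuned(batch=4, seq_len=n)
assert tuned.compile_count == 1
```

The same trade-off applies to the `tl.constexpr` removals in this PR: a compile-time constant lets the compiler specialize harder, while a regular, non-specialized parameter lets one compiled kernel serve many input shapes.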



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request optimizes several Triton kernels by adjusting their do_not_specialize lists and removing tl.constexpr from various parameters such as B, H, scale, and several stride/length arguments. This change aims to reduce kernel recompilations and enhance flexibility across different kernel implementations, including chunk_scaled_dot_kkt_fwd_kernel, fused_recurrent_gated_delta_rule_fwd_kernel, solve_tril_16x16_kernel, merge_16x16_to_32x32_inverse_kernel, merge_16x16_to_64x64_inverse_kernel, and _causal_conv1d_update_kernel_npu_tiled. A key improvement opportunity exists in split_qkv_rmsnorm_mrope_kernel where mrope_section_t, mrope_section_h, and mrope_section_w were de-constexpr'd but not added to the do_not_specialize list, potentially leading to unnecessary recompilations.

Comment on lines +51 to +53
mrope_section_t,
mrope_section_h,
mrope_section_w,


Severity: high

To align with the goal of this PR to reduce kernel recompilations, mrope_section_t, mrope_section_h, and mrope_section_w should be added to the do_not_specialize list in the @triton.jit decorator.

You have correctly removed tl.constexpr from these parameters, but without adding them to do_not_specialize, Triton may still recompile the kernel when their values change.

Please update the decorator on line 28 to include them:

@triton.jit(
    do_not_specialize=[
        "num_tokens", "front_core_num", "num_tokens_each_front_core",
        "num_tokens_each_tail_core", "mrope_section_t", "mrope_section_h",
        "mrope_section_w"
    ]
)
