[kernel] Recompilation optimization triggered by triton function parameter optimization #7645
cvSoldier wants to merge 8 commits into vllm-project:releases/v0.18.0
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on optimizing Triton kernel performance by strategically reducing recompilations.
Code Review
This pull request optimizes several Triton kernels by adjusting their do_not_specialize lists and removing tl.constexpr from various parameters such as B, H, scale, and several stride/length arguments. This change aims to reduce kernel recompilations and enhance flexibility across different kernel implementations, including chunk_scaled_dot_kkt_fwd_kernel, fused_recurrent_gated_delta_rule_fwd_kernel, solve_tril_16x16_kernel, merge_16x16_to_32x32_inverse_kernel, merge_16x16_to_64x64_inverse_kernel, and _causal_conv1d_update_kernel_npu_tiled. A key improvement opportunity exists in split_qkv_rmsnorm_mrope_kernel where mrope_section_t, mrope_section_h, and mrope_section_w were de-constexpr'd but not added to the do_not_specialize list, potentially leading to unnecessary recompilations.
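For context, the mechanism at work here: a tl.constexpr parameter is baked into the compiled kernel, so every distinct value produces a new compilation, while a plain scalar listed in do_not_specialize is read at runtime and the compiled binary is reused across values. The following minimal sketch illustrates the pattern; the kernel and its parameter names are hypothetical, not taken from this PR, and it assumes a Triton version whose do_not_specialize accepts parameter names (older versions take argument indices):

import torch
import triton
import triton.language as tl

# Hypothetical kernel for illustration. Because "n" is listed in
# do_not_specialize, Triton does not specialize the compiled binary on its
# runtime value; only BLOCK, a tl.constexpr, forces a new compilation.
@triton.jit(do_not_specialize=["n"])
def scale_kernel(x_ptr, out_ptr, n, scale, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x * scale, mask=mask)

x = torch.randn(1000, device="cuda")
out = torch.empty_like(x)
# Varying n (and the non-constexpr float scale) reuses the same compiled
# kernel; changing BLOCK would trigger a fresh compilation.
scale_kernel[(triton.cdiv(1000, 256),)](x, out, 1000, 2.0, BLOCK=256)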
mrope_section_t,
mrope_section_h,
mrope_section_w,
To align with the goal of this PR to reduce kernel recompilations, mrope_section_t, mrope_section_h, and mrope_section_w should be added to the do_not_specialize list in the @triton.jit decorator.
You have correctly removed tl.constexpr from these parameters, but without adding them to do_not_specialize, Triton may still recompile the kernel when their values change.
Please update the decorator on line 28 to include them:
@triton.jit(
do_not_specialize=[
"num_tokens", "front_core_num", "num_tokens_each_front_core",
"num_tokens_each_tail_core", "mrope_section_t", "mrope_section_h",
"mrope_section_w"
]
)
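For background, Triton's JIT specializes plain integer arguments on value properties (for example, whether the value equals 1 or is divisible by 16) and folds those properties into the compilation cache key; do_not_specialize turns that off for the listed parameters. tl.constexpr parameters always participate in the cache key, which is why removing tl.constexpr alone is only half of the fix.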
What this PR does / why we need it?
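This PR reduces unnecessary Triton kernel recompilations by removing tl.constexpr from scalar parameters such as B, H, scale, and several stride/length arguments, and by adjusting the do_not_specialize lists of the affected kernels (chunk_scaled_dot_kkt_fwd_kernel, fused_recurrent_gated_delta_rule_fwd_kernel, solve_tril_16x16_kernel, merge_16x16_to_32x32_inverse_kernel, merge_16x16_to_64x64_inverse_kernel, and _causal_conv1d_update_kernel_npu_tiled), so that changes in these runtime values no longer trigger a new compilation.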
Does this PR introduce any user-facing change?
How was this patch tested?