Skip to content

[kernel] Recompilation optimization triggered by triton function para…#7481

Open
HarpsealCC wants to merge 1 commit intovllm-project:mainfrom
HarpsealCC:main
Open

[kernel] Recompilation optimization triggered by triton function para…#7481
HarpsealCC wants to merge 1 commit intovllm-project:mainfrom
HarpsealCC:main

Conversation

@HarpsealCC
Copy link

@HarpsealCC HarpsealCC commented Mar 19, 2026

…meter optimization,

What this PR does / why we need it?

Some parameters of Triton operators are unnecessarily modified with the "constexpr" modifier. When these parameters change, recompilation is triggered, which significantly affects the model performance. Therefore, these parameters need to be rectified.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Signed-off-by: HarpSealCC 844291270@qq.com

@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements an optimization strategy for Triton kernels by modifying how certain parameters are handled. By removing tl.constexpr from several kernel arguments and dynamically calculating others, the changes aim to significantly reduce the frequency of kernel recompilations, leading to improved performance and potentially faster startup times for operations involving these kernels.

Highlights

  • Triton Kernel Parameter Optimization: Removed tl.constexpr from several parameters in Triton kernels, specifically chunk_gated_delta_rule_fwd_kernel_h_blockdim64 and fused_gdn_gating_kernel. This change aims to reduce kernel recompilations by allowing these parameters to be dynamic rather than compile-time constants.
  • Dynamic COL_ITER Calculation: The COL_ITER parameter for fused_gdn_gating_kernel is no longer passed as a tl.constexpr and is now calculated dynamically inside the kernel using tl.cdiv(NUM_HEADS, BLK_HEADS).
  • Hardcoded BLK_BATCHES Value: The BLK_BATCHES value in the fused_gdn_gating_patch function has been simplified and hardcoded to 64, replacing a more complex calculation.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request optimizes Triton kernels to reduce recompilation by changing several compile-time constant parameters (tl.constexpr) to runtime arguments. This is a valid approach to improve performance by avoiding repeated kernel compilations.

While reviewing the changes, I found one area of concern in vllm_ascend/ops/triton/fused_gdn_gating.py where a dynamic calculation for BLK_BATCHES has been replaced with a hardcoded value. This could potentially lead to runtime errors or suboptimal performance under certain conditions. I have added a specific comment with a suggestion to revert this change.

Additionally, as per the repository's contribution guidelines, I am providing a suggested PR title and summary to better reflect the changes and adhere to the project's formatting standards.

Suggested PR Title:

[kernel][Misc] Optimize Triton kernels to reduce recompilation

Suggested PR Summary:

### What this PR does / why we need it?

This PR optimizes Triton kernels (`chunk_gated_delta_rule_fwd_kernel_h_blockdim64` and `fused_gdn_gating_kernel`) by converting several parameters from compile-time constants (`tl.constexpr`) to runtime arguments. This change avoids frequent kernel recompilations when these parameters vary, which can reduce model loading times and improve first-token latency.

### Does this PR introduce _any_ user-facing change?

No, this is a backend performance optimization and does not introduce any user-facing changes.

### How was this patch tested?

CI passed with existing tests.

@HarpsealCC HarpsealCC force-pushed the main branch 3 times, most recently from 9e1a610 to 5e27da4 Compare March 19, 2026 12:01
@github-actions
Copy link
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:‌‌

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests ‌to ensure it works and is not broken by other future PRs.
  • Write the commit message by fulfilling the PR description to help reviewer and future developers understand.

If CI fails, you can run linting and testing checks locally according Contributing and Testing.

…meter optimization

Signed-off-by: l30072083 <liuchengzhuo1@h-partners.com>
@HarpsealCC HarpsealCC reopened this Mar 25, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

module:ops ready read for review ready-for-test start test by label for PR

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants