
[fix] Fix TRTLLM MOE autotuner token bucket mismatch error #2817

Draft
jiahanc wants to merge 4 commits into flashinfer-ai:main from jiahanc:fixTrtllmMOEAutotuner

Conversation

@jiahanc
Collaborator

@jiahanc jiahanc commented Mar 19, 2026

📌 Description

This PR fixes a TRTLLM MoE autotuning bug caused by a mismatch between tuning-time token bucketing and runtime tile selection.
Previously, tuning bucketed by raw num_tokens, while runtime dispatch selected tiles based on per-expert load (num_tokens * top_k / num_experts). In some cases this mismatch caused a cached tactic that did not match the runtime tile regime to be reused, leading to runtime failures.

For example, with num_tokens=3500, top_k=22, num_experts=1024:
avg = (3500 * 22) / 1024 ≈ 75.2, so the runtime tile center is 128.

Under the new bucketing logic:

  • Representative buckets per tile are computed as bucket_t = floor((t * 1024) / 22) for t in {8, 16, 32, 64, 128, 256}, giving: 372, 744, 1489, 2978, 5957, 11915.
  • Plus the halving chain from 372: 186, 93, 46, 23, 11, 5, 2, 1.
  • So the full bucket set is: [1, 2, 5, 11, 23, 46, 93, 186, 372, 744, 1489, 2978, 5957, 11915].

Mapping each input to the largest bucket <= num_tokens, 3500 maps to bucket 2978 (instead of the old raw power-of-two bucket 2048), which aligns tuning buckets with runtime tile-selection semantics.
This change makes cached tactics consistent with runtime dispatch behavior and improves stability across workloads.
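As a sanity check on the arithmetic above, here is a minimal, self-contained Python sketch of the bucketing rule; the function names and tile-size set are illustrative assumptions, not the PR's actual implementation (which lives in flashinfer/fused_moe/utils.py):

```python
import bisect

# Minimal sketch of the bucketing rule described above; names are illustrative.
def moe_token_buckets(top_k: int, num_experts: int,
                      tile_sizes=(8, 16, 32, 64, 128, 256)) -> list:
    # One representative bucket per runtime tile size: the token count whose
    # per-expert load (num_tokens * top_k / num_experts) lands on that tile.
    buckets = {max((t * num_experts) // top_k, 1) for t in tile_sizes}
    # Halving chain below the smallest tile-aligned bucket, down to 1 token.
    b = min(buckets)
    while b > 1:
        b //= 2
        buckets.add(max(b, 1))
    return sorted(buckets)

def map_to_bucket(num_tokens: int, buckets: list) -> int:
    # Largest bucket <= num_tokens; fall back to the smallest bucket.
    i = bisect.bisect_right(buckets, num_tokens) - 1
    return buckets[max(i, 0)]

buckets = moe_token_buckets(top_k=22, num_experts=1024)
assert buckets == [1, 2, 5, 11, 23, 46, 93, 186, 372, 744, 1489, 2978, 5957, 11915]
assert map_to_bucket(3500, buckets) == 2978  # raw power-of-two bucketing gave 2048
```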

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features
    • Enhanced Mixture of Experts (MoE) auto-tuning for TRT-LLM backend with optimized token bucket allocation and expert mapping strategies.

Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2c38938d-c5cc-449f-b7d1-c4e82facc70e

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

This PR introduces TRT-LLM Mixture-of-Experts specific token bucket generation and mapping utilities. It adds new bucket computation functions to replace generic power-of-2 logic and updates the MoERunner tuning configuration to accept top-k and expert count parameters for proper bucket selection during autotuning.

Changes

Bucket Generation Utilities (flashinfer/fused_moe/utils.py):
Added constants TRTLLM_MOE_MIN_TILE_N and TRTLLM_MOE_MAX_TILE_N for MoE tile sizing bounds. Introduced get_trtllm_moe_num_tokens_buckets() to compute deduped, sorted token bucket sizes based on average tokens per expert and power-of-2 clamping, plus geometric ranges. Added make_trtllm_moe_bucket_mapper() to create a closure that maps input token counts to appropriate bucket values using binary search.

MoE Runner Configuration (flashinfer/fused_moe/core.py):
Updated MoERunner.refine_tuning_config() to accept top_k and num_local_experts parameters. Replaced the generic power-of-2 bucket logic with TRT-LLM-specific buckets and mapping derived from the new utility functions. Modified all autotuning call sites in the FP8, BF16, and FP4 op wrappers to pass these new parameters to the refinement method.
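Based only on the summary above, here is a hedged sketch of the shape make_trtllm_moe_bucket_mapper() might take (bucket list in, binary-search lookup out); the body is an assumption inferred from this walkthrough, not the actual diff:

```python
import bisect
from typing import Callable, List

def make_trtllm_moe_bucket_mapper(buckets: List[int]) -> Callable[[int], int]:
    # Sketch only: sort and dedupe once up front so each lookup is a single
    # binary search over the tuning buckets.
    sorted_buckets = sorted(set(buckets))

    def map_num_tokens(num_tokens: int) -> int:
        # Pick the largest bucket <= num_tokens; clamp to the smallest bucket.
        i = bisect.bisect_right(sorted_buckets, num_tokens) - 1
        return sorted_buckets[max(i, 0)]

    return map_num_tokens

# Usage with the bucket set from the PR description's example:
mapper = make_trtllm_moe_bucket_mapper(
    [1, 2, 5, 11, 23, 46, 93, 186, 372, 744, 1489, 2978, 5957, 11915]
)
assert mapper(3500) == 2978
```

Returning a closure lets the autotuner build the bucket list once per (top_k, num_local_experts) configuration and reuse the cheap lookup for every tuning query.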

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • bkryu
  • yzh119

Poem

🐰 The buckets now align so true,
With expert counts and top-k through,
No power-of-two shall stand alone,
TRT-LLM's wisdom now full-grown! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 15.38%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title clearly summarizes the main change: fixing a TRTLLM MoE autotuner token bucket mismatch error. It is concise, specific, and directly relates to the core issue addressed in the PR.
  • Description check: ✅ Passed. The PR description includes a detailed explanation of the bug, the fix, and examples demonstrating the change. All required template sections are present and completed: the description explains what and why, checklist items are marked complete, and pre-commit/test verifications are confirmed.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@jiahanc jiahanc marked this pull request as draft March 19, 2026 06:03
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the TRTLLM Mixture-of-Experts (MoE) autotuner. Previously, the autotuner's token bucketing mechanism was misaligned with the runtime tile selection process, causing cached tactics to be reused incorrectly and leading to potential runtime failures. The changes introduce a new, more accurate token bucketing logic that considers top_k and num_local_experts, ensuring that the autotuner's decisions are consistent with how tiles are selected during actual execution, thereby improving stability and reliability across different workloads.

Highlights

  • New TRTLLM MoE Autotuner Bucketing Logic: Implemented a new token bucketing strategy for the TRTLLM MoE autotuner that aligns tuning-time token bucketing with runtime tile selection, resolving a previous mismatch that could lead to runtime failures.
  • Updated MoERunner.refine_tuning_config: The refine_tuning_config method now accepts top_k and num_local_experts parameters, which are essential for the new, more accurate bucketing logic.
  • Integration Across MoE Operations: Various TRTLLM MoE operations (trtllm_bf16_moe_op, trtllm_fp8_per_tensor_scale_moe_op, etc.) have been updated to pass top_k and num_local_experts to the refine_tuning_config method, ensuring consistent autotuning behavior across different workloads.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review - Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary - Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist - Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help - Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes a bug in the TRTLLM MoE autotuner where there was a mismatch between token bucketing at tuning time and tile selection at runtime. The changes introduce a new bucketing logic in flashinfer/fused_moe/utils.py that aligns with the C++ implementation, and updates flashinfer/fused_moe/core.py to use this new logic. The changes look correct and effectively address the issue described. I have a suggestion to refactor the new make_trtllm_moe_bucket_mapper function to avoid redundant computation, which would improve efficiency.

Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
@jiahanc jiahanc force-pushed the fixTrtllmMOEAutotuner branch from 808886d to 66eb4d1 Compare March 19, 2026 06:16
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Made-with: Cursor
@jiahanc jiahanc force-pushed the fixTrtllmMOEAutotuner branch from 66eb4d1 to 6056ca6 Compare March 19, 2026 06:26
@jiahanc
Collaborator Author

jiahanc commented Mar 19, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !430 has been created, and the CI pipeline #46497474 is currently running. I'll report back once the pipeline job completes.

@flashinfer-bot
Collaborator

[SUCCESS] Pipeline #46497474: 14/20 passed

