[fix] Fix TRTLLM MOE autotuner token bucket mismatch error #2817
jiahanc wants to merge 4 commits into flashinfer-ai:main
Conversation
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
📝 Walkthrough
This PR introduces TRT-LLM Mixture-of-Experts-specific token bucket generation and mapping utilities. It adds new bucket computation functions to replace the generic power-of-2 logic, and updates the MoERunner tuning configuration to accept top-k and expert-count parameters for proper bucket selection during autotuning.
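For contrast, a minimal sketch of the generic power-of-2 bucketing being replaced (an assumption about its shape, not the exact flashinfer code):

```python
# Assumed shape of the generic power-of-2 bucketing that this PR replaces:
# every num_tokens is rounded down to a power of two, ignoring top_k and
# num_experts entirely. Illustrative only, not the exact flashinfer code.
def power_of_two_bucket(num_tokens: int) -> int:
    return 1 << (num_tokens.bit_length() - 1)

assert power_of_two_bucket(3500) == 2048  # the "old" bucket cited in the description below
```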
Summary of Changes (Gemini Code Assist)
This pull request resolves a critical bug in the TRTLLM Mixture-of-Experts (MoE) autotuner. Previously, the autotuner's token bucketing mechanism was misaligned with the runtime tile selection process, causing cached tactics to be reused incorrectly and leading to potential runtime failures. The changes introduce a new, more accurate token bucketing logic that considers top-k and the number of experts.
Code Review
This pull request fixes a bug in the TRTLLM MoE autotuner where there was a mismatch between token bucketing at tuning time and tile selection at runtime. The changes introduce a new bucketing logic in flashinfer/fused_moe/utils.py that aligns with the C++ implementation, and updates flashinfer/fused_moe/core.py to use this new logic. The changes look correct and effectively address the issue described. I have a suggestion to refactor the new make_trtllm_moe_bucket_mapper function to avoid redundant computation, which would improve efficiency.
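A minimal sketch of that refactor suggestion, assuming the bucket list depends only on (top_k, num_experts) so it can be computed once and reused; all helper names are illustrative, not the actual flashinfer implementation (see the worked example in the description below for the bucket math):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def _buckets_for(top_k: int, num_experts: int) -> tuple[int, ...]:
    # Stand-in for the real bucket generation: per-tile representative
    # buckets plus a halving chain below the smallest one.
    tiles = [8, 16, 32, 64, 128, 256]
    buckets = {(t * num_experts) // top_k for t in tiles}
    b = min(buckets) // 2
    while b >= 1:
        buckets.add(b)
        b //= 2
    return tuple(sorted(buckets))

def make_trtllm_moe_bucket_mapper(top_k: int, num_experts: int):
    buckets = _buckets_for(top_k, num_experts)  # computed once, then reused
    def mapper(num_tokens: int) -> int:
        # Largest bucket <= num_tokens; fall back to the smallest bucket.
        eligible = [b for b in buckets if b <= num_tokens]
        return max(eligible) if eligible else buckets[0]
    return mapper
```

Memoizing per (top_k, num_experts) keeps the hot autotuning path to a single scan over a short sorted list instead of rebuilding the buckets on every call.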
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
force-pushed 808886d to 66eb4d1
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com> Made-with: Cursor
force-pushed 66eb4d1 to 6056ca6
/bot run
[SUCCESS] Pipeline #46497474: 14/20 passed
📌 Description
This PR fixes a TRTLLM MoE autotuning bug caused by a mismatch between tuning-time token bucketing and runtime tile selection.
Previously, tuning bucketed by raw num_tokens, while runtime dispatch selected tiles from per-expert load (num_tokens * top_k / num_experts). In some cases, this mismatch could reuse a cached tactic that did not match the runtime tile regime and cause runtime failures.
For example, with num_tokens=3500, top_k=22, num_experts=1024:
avg = (3500 * 22) / 1024 ≈ 75.2, so the runtime tile center is 128.

Under the new bucketing logic:
Representative buckets per tile are computed as bucket_t = floor((t * 1024) / 22) for t in {8, 16, 32, 64, 128, 256}, giving: 372, 744, 1489, 2978, 5957, 11915.
Plus the halving chain from 372: 186, 93, 46, 23, 11, 5, 2, 1.
So the full bucket set is:
[1, 2, 5, 11, 23, 46, 93, 186, 372, 744, 1489, 2978, 5957, 11915]

With largest-<= num_tokens mapping, 3500 maps to bucket 2978 (instead of the old raw power-of-two bucket 2048), which aligns tuning buckets with runtime tile-selection semantics.
This change makes cached tactics consistent with runtime dispatch behavior and improves stability across workloads.
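To sanity-check the arithmetic above, here is a standalone Python sketch built only from the formulas quoted in this description (the real implementation lives in flashinfer/fused_moe/utils.py and may differ in structure):

```python
top_k, num_experts, num_tokens = 22, 1024, 3500

# Per-tile representative buckets: invert avg = tokens * top_k / num_experts.
tile_buckets = [(t * num_experts) // top_k for t in (8, 16, 32, 64, 128, 256)]
assert tile_buckets == [372, 744, 1489, 2978, 5957, 11915]

# Halving chain below the smallest tile bucket.
chain, b = [], tile_buckets[0] // 2
while b >= 1:
    chain.append(b)
    b //= 2
assert sorted(chain) == [1, 2, 5, 11, 23, 46, 93, 186]

# Full bucket set, then largest-<= mapping for the example workload.
buckets = sorted(set(tile_buckets) | set(chain))
assert buckets == [1, 2, 5, 11, 23, 46, 93, 186, 372, 744, 1489, 2978, 5957, 11915]
assert max(b for b in buckets if b <= num_tokens) == 2978
```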
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running pip install pre-commit (or used your preferred method).
- I have installed the hooks with pre-commit install.
- I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

🧪 Tests
- Tests have been added or updated as needed.
- All tests are passing (unittest, etc.).

Reviewer Notes