
[#10780][feat] AutoDeploy: Support per-expert scales in FP8 and NVFP4 MoE #11322

Merged
galagam merged 8 commits into NVIDIA:main from nv-auto-deploy:gagam/handle-non-identical-moe-scales-v2 on Feb 9, 2026

Conversation

@galagam
Collaborator

@galagam galagam commented Feb 5, 2026

Description

FP8 and NVFP4 MoE kernels require scalar input scales, but models may have
different input scales per expert. Previously, the autodeploy code used
the first expert's scale (input_scale[0]), which could cause accuracy
issues when scales differ significantly.

Changes:

  • FP8: Use max(input_scale) for FC1 and FC2 input quantization
  • NVFP4: Use min(input_scale) for FC1 input quantization and recompute alpha
    (min is used because NVFP4 scale format is inverted: scale = 2688/amax)
  • Precompute global input scales at transform time
  • Add config option allow_different_input_scales to FuseFP8MoeConfig and FuseNVFP4MoeConfig:
    • False (default): Assert all experts have identical scales, fail if not
    • True: Allow different scales with a warning
  • Update kernel signatures to take precomputed scalar scales instead of tensor scales
  • Add unit tests for the new config option

This PR replaces #10814, adding NVFP4 support and removing SuperV3-specific configurations.
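As a plain-Python sketch (the function names here are illustrative, not the actual transform code), the reduction from per-expert input scales to a single scalar looks like this:

```python
def fp8_global_input_scale(per_expert_scales):
    # FP8 scales are dequantization factors: a larger scale covers a larger
    # amax, so max() is safe for every expert.
    return max(per_expert_scales)

def nvfp4_global_input_scale(per_expert_scales):
    # NVFP4 stores inverted scales (scale = 2688 / amax), so the expert with
    # the largest amax has the *smallest* stored scale -> take min().
    return min(per_expert_scales)

# example: three experts with differing calibrated scales
print(fp8_global_input_scale([0.010, 0.025, 0.018]))       # -> 0.025
print(nvfp4_global_input_scale([2100.0, 1344.0, 2688.0]))  # -> 1344.0
```

In both cases the chosen scalar is the one whose quantization range covers the largest per-expert amax, so no expert's activations are clipped more than before.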

Test Coverage

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_moe_fusion.py::test_fp8_moe_different_input_scales
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_moe_fusion.py::test_nvfp4_moe_different_input_scales

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added allow_different_input_scales configuration option for FP8 and FP4 MoE fusion, enabling flexible per-expert input scale handling during quantization.
  • Bug Fixes

    • Corrected activation scale tensor shapes in MoE quantization paths for proper alignment and computation.
  • Tests

    • Expanded test coverage for MoE quantization with varying input scale configurations.

@galagam galagam requested a review from a team as a code owner February 5, 2026 18:49
@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Walkthrough

The changes refactor FP8 and NVFP4 MoE quantization to handle per-expert input scales flexibly through a new allow_different_input_scales configuration option. Input scales are now treated as precomputed max values instead of per-element tensors, with corresponding API updates across custom kernels and transform logic.

Changes

Cohort / File(s) Summary
Configuration
tensorrt_llm/_torch/auto_deploy/config/default.yaml
Added allow_different_input_scales: false option to fuse_fp8_moe and fuse_nvfp4_moe in post_load_fusion stage.
Custom Operations
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/triton_moe.py, tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/trtllm_moe.py
Updated scale handling to use precomputed max input scales (shape [1] tensors) instead of per-element scales; renamed triton_quant_fp8_moe to triton_quant_fp8_moe_fake with adjusted scale extraction logic; replaced per-element activation scales with scalar equivalents in FP8 quantization path.
Transform Library
tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py
Introduced FuseFP8MoeConfig and FuseNVFP4MoeConfig classes with allow_different_input_scales field; extended _stack_fp8_moe_weights and _stack_nvfp4_moe_weights to accept allow_different_input_scales parameter; added conditional validation and max/min scale logic for handling different per-expert input scales; extended argument extraction to include w3_input_scale for NVFP4 path.
Integration Tests
tests/integration/test_lists/test-db/l0_dgx_b200.yml, tests/integration/test_lists/test-db/l0_dgx_h100.yml
Removed test case accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_fp8[8] from AutoDeploy pre-merge blocks.
Unit Tests - Custom Ops
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py, tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/triton_kernels/test_triton_moe.py
Adjusted activation scale tensor shapes from unsqueeze(0) to direct reshape(1) calls; replaced per-expert input scale tensors with precomputed max scales; updated Triton FP8 MoE invocation to use [1]-shaped max scale tensors.
Unit Tests - Transformations
tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_moe_fusion.py
Added comprehensive input-scale testing infrastructure: new test modules FP8MoEModuleForInputScaleTest and NVFP4MoEModuleForInputScaleTest; new parameterized test functions test_fp8_moe_different_input_scales and test_nvfp4_moe_different_input_scales validating behavior with identical and different per-expert scales.
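Based on the configuration cohort above, the default.yaml change presumably adds a stanza like the following (a sketch; the exact surrounding keys and ordering in default.yaml are assumptions):

```yaml
# sketch of tensorrt_llm/_torch/auto_deploy/config/default.yaml (assumed layout)
post_load_fusion:
  fuse_fp8_moe:
    allow_different_input_scales: false  # default: fail if per-expert scales differ
  fuse_nvfp4_moe:
    allow_different_input_scales: false
```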

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested reviewers

  • kmk142789
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 33.33%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Title check — ✅ Passed: the title clearly and concisely summarizes the main feature: AutoDeploy now supports per-expert scales in FP8 and NVFP4 MoE kernels.
  • Description check — ✅ Passed: the PR description includes all required sections: a clear explanation of the problem and solution, specific test coverage examples, and a completed PR checklist.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tensorrt_llm/_torch/auto_deploy/custom_ops/fused_moe/triton_moe.py (1)

1-3: ⚠️ Potential issue | 🟠 Major

Add NVIDIA copyright header.

This .py source file is missing the required NVIDIA copyright header with the latest modification year.

📝 Suggested header
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 """
 Triton implementation of the Fused MOE ops. Inspired by vLLM's triton MOE implementation.
 """
As per coding guidelines: "All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification".
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/test_trtllm_moe.py (1)

1-4: ⚠️ Potential issue | 🟠 Major

Add NVIDIA copyright header.

This .py source file is missing the required NVIDIA copyright header with the latest modification year.

📝 Suggested header
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 """
 This file contains test functions copied from:
 https://github.com/flashinfer-ai/flashinfer/blob/main/tests/moe/test_trtllm_cutlass_fused_moe.py
 """
As per coding guidelines: "All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification".
tests/unittest/_torch/auto_deploy/unit/singlegpu/custom_ops/triton_kernels/test_triton_moe.py (1)

1-4: ⚠️ Potential issue | 🟠 Major

Add NVIDIA copyright header.

This .py source file is missing the required NVIDIA copyright header with the latest modification year.

📝 Suggested header
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import pytest
 import torch
 from utils.util import skip_pre_hopper
As per coding guidelines: "All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification".
tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py (1)

1-18: ⚠️ Potential issue | 🟠 Major

Add NVIDIA copyright header.

This .py source file is missing the required NVIDIA copyright header with the latest modification year.

📝 Suggested header
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+
 import math
 from collections import defaultdict
 from functools import partial
As per coding guidelines: "All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification".
🤖 Fix all issues with AI agents
In `@tensorrt_llm/_torch/auto_deploy/transform/library/fused_moe.py`:
- Around line 1742-1799: Replace the two "assert False, (message)" checks inside
the FC1 input-scale branches with explicit exceptions: where the code tests "if
not allow_different_input_scales:" (the branches that follow the w1/w3 and the
w1-only checks using w1_input_scale_stacked and w3_input_scale_stacked), raise a
descriptive exception (e.g., raise ValueError("FC1 input scales differ across
experts ... Set allow_different_input_scales=True to allow different scales
(uses min).")) instead of using assert so the validation cannot be bypassed
under -O; keep the original message text and leave the surrounding logic
(ad_logger.warning_once, fc1_act_scale, fc1_alpha_stacked computation)
unchanged.

In
`@tests/unittest/_torch/auto_deploy/unit/singlegpu/transformations/library/test_moe_fusion.py`:
- Line 11: The import of tensorrt_llm._torch.auto_deploy.custom_ops is intended
for side effects but the trailing comment marker `# noqa: F401` is unnecessary;
remove the `# noqa: F401` suffix from the import statement (the import line
referencing tensorrt_llm._torch.auto_deploy.custom_ops in test_moe_fusion.py) so
the import remains for side effects without the unused noqa marker.
- Around line 978-981: The NVFP4 alpha calculation is inverted: instead of
multiplying inp_scale by w1_scale_2/w2_scale_2, compute the reciprocal per the
comment; replace the assignments that append (inp_scale *
w1_scale_2).to(torch.float32) and (inp_scale * w2_scale_2).to(torch.float32)
with the reciprocal form 1.0 / (inp_scale * w1_scale_2) and 1.0 / (inp_scale *
w2_scale_2) respectively (cast to torch.float32) so w1_alpha and w2_alpha match
the documented 1/(input_scale*weight_scale) formula.
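The documented alpha formula from that comment can be sketched as a small helper (illustrative name, not the test's actual code):

```python
def nvfp4_alpha(input_scale: float, weight_scale_2: float) -> float:
    # alpha = 1 / (input_scale * weight_scale_2) per the documented formula;
    # multiplying without taking the reciprocal is the inversion bug above.
    return 1.0 / (input_scale * weight_scale_2)
```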

@galagam
Collaborator Author

galagam commented Feb 5, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35003 [ run ] triggered by Bot. Commit: 8ee2b6e

@galagam
Collaborator Author

galagam commented Feb 5, 2026

/bot kill

@tensorrt-cicd
Collaborator

PR_Github #35005 [ kill ] triggered by Bot. Commit: 239af72

@tensorrt-cicd
Collaborator

PR_Github #35005 [ kill ] completed with state SUCCESS. Commit: 239af72
Successfully killed previous jobs for commit 239af72

@galagam
Collaborator Author

galagam commented Feb 5, 2026

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #35011 [ run ] triggered by Bot. Commit: 239af72

@tensorrt-cicd
Collaborator

PR_Github #35011 [ run ] completed with state FAILURE. Commit: 239af72
/LLM/main/L0_MergeRequest_PR pipeline #27014 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@galagam
Collaborator Author

galagam commented Feb 6, 2026

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35059 [ run ] triggered by Bot. Commit: 239af72

@tensorrt-cicd
Collaborator

PR_Github #35059 [ run ] completed with state SUCCESS. Commit: 239af72
/LLM/main/L0_MergeRequest_PR pipeline #27058 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@nvchenghaoz
Collaborator

I am wondering if we should drop allow_different_input_scales by setting it to True in all cases, and just warn if the input scales differ.

Collaborator

@bmarimuthu-nv bmarimuthu-nv left a comment


LGTM, Thanks @galagam!

@galagam
Collaborator Author

galagam commented Feb 8, 2026

I am wondering if we should drop allow_different_input_scales by setting it to True in all cases, and just warn if the input scales differ.

It can degrade accuracy, and ModelOpt-generated models always use identical scales, which is why I went with the more conservative opt-in path.

…n FP8 MoE

FP8 MoE kernels require a scalar input scale, but models may have
different input scales per expert. Previously, the autodeploy code
used the first expert's scale (input_scale[0]), which could cause accuracy
issues when scales differ significantly.

Changes:
- Use max(input_scale) for FC1 and FC2 input quantization, matching TRT-LLM manual backend
- Precompute max input scales at transform time for both trtllm and triton backends
- Add config option to FuseFP8MoeConfig:
  - False (default): Assert all experts have identical scales, fail if not
  - True: Allow different scales with a warning, use max() for quantization
- Update kernel signatures to take precomputed scalar scales instead of tensor scales
- Add unit test for the new config option

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
@galagam galagam force-pushed the gagam/handle-non-identical-moe-scales-v2 branch from 239af72 to 4c74806 on February 8, 2026 07:50
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
@galagam
Collaborator Author

galagam commented Feb 8, 2026

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@galagam
Collaborator Author

galagam commented Feb 8, 2026

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@galagam
Collaborator Author

galagam commented Feb 8, 2026

/bot run

Signed-off-by: Gal Hubara-Agam <96368689+galagam@users.noreply.github.com>
@galagam
Collaborator Author

galagam commented Feb 8, 2026

/bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #35229 [ run ] triggered by Bot. Commit: 73562fc

@galagam galagam enabled auto-merge (squash) February 8, 2026 13:45
@tensorrt-cicd
Collaborator

PR_Github #35229 [ run ] completed with state SUCCESS. Commit: 73562fc
/LLM/main/L0_MergeRequest_PR pipeline #27212 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@galagam
Collaborator Author

galagam commented Feb 8, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35245 [ run ] triggered by Bot. Commit: 73562fc

@tensorrt-cicd
Collaborator

PR_Github #35245 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 6 PM PST on 2/8.

@galagam
Collaborator Author

galagam commented Feb 9, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35281 [ run ] triggered by Bot. Commit: 73562fc

@tensorrt-cicd
Collaborator

PR_Github #35281 [ run ] completed with state SUCCESS. Commit: 73562fc
/LLM/main/L0_MergeRequest_PR pipeline #27244 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@galagam
Collaborator Author

galagam commented Feb 9, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #35345 [ run ] triggered by Bot. Commit: 73562fc

@tensorrt-cicd
Collaborator

PR_Github #35345 [ run ] completed with state SUCCESS. Commit: 73562fc
/LLM/main/L0_MergeRequest_PR pipeline #27297 completed with status: 'SUCCESS'

@galagam galagam merged commit 2b60cc1 into NVIDIA:main Feb 9, 2026
5 checks passed
inciaf pushed a commit to inciaf/trtllm-energy-monitoring that referenced this pull request Feb 18, 2026
… NVFP4 MoE (NVIDIA#11322)

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara-Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Ahmet Inci <ainci@nvidia.com>