Conversation

@sychen52 (Collaborator) commented Oct 7, 2025

ModelOpt now exports its scale factor in the 448/6 range.
This PR goes together with NVIDIA/TensorRT-Model-Optimizer#406.

Summary by CodeRabbit

  • Refactor

    • Simplified FP4/FP8 quantization handling by removing legacy scaling adjustments and consolidating weight scale/alpha loading logic, reducing complexity and potential overhead. This may slightly change numerical results in some configurations.
  • Tests

    • Re-enabled an MoE FP4 autotune test, improving coverage and validating the updated quantization behavior in CI.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
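
For example, a hypothetical invocation combining the flags above, rerunning only the PyTorch test stages on H100 PCIe with fail-fast disabled:

/bot run --disable-fail-fast --gpu-type "H100_PCIe" --test-backend "pytorch"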

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

ModelOpt now exports its scale factor in the 448/6 range.

Signed-off-by: Shiyang Chen <[email protected]>
@sychen52 sychen52 marked this pull request as ready for review October 13, 2025 16:55
@sychen52 sychen52 requested a review from a team as a code owner October 13, 2025 16:55
@sychen52 sychen52 requested a review from HuiGao-NV October 13, 2025 16:55
@sychen52 (Collaborator, Author) commented:

/bot run

@sychen52 sychen52 requested a review from EmmaQiaoCh October 13, 2025 16:56
@coderabbitai (Contributor) commented Oct 13, 2025

📝 Walkthrough

Removed the aggregated NVFP4 weights-and-alphas loader in MoE quantization, adjusted the linear FP8/NVFP4 scaling logic to drop the 6x normalization and its assertion, and enabled a previously skipped FP4 MoE test.

Changes

  • NVFP4 MoE quantization refactor — tensorrt_llm/_torch/modules/fused_moe/quantization.py: Deleted NVFP4FusedMoEMethod.load_all_fp4_weight_scales_and_alphas and its invocation from load_quant_scales; removed the associated post-processing for dst_fc31_alpha and dst_fc2_alpha. Per-expert loading paths remain.
  • Linear FP8/NVFP4 scaling update — tensorrt_llm/_torch/modules/linear.py: Removed the weight-scale normalization by 6.0 for the e4m3 conversion; updated the assertion to compare scales directly. Weights are now loaded without cross-weight 6x adjustments.
  • Test enablement — tests/unittest/_torch/thop/parallel/test_moe.py: Unskipped TestMoeFP4, allowing the FP4 MoE test to run.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller
  participant NVFP4FusedMoEMethod
  participant ExpertLoaders as Per-Expert Loaders

  Note over NVFP4FusedMoEMethod: New flow (aggregated loader removed)
  Caller->>NVFP4FusedMoEMethod: load_quant_scales(module, weights, expert_ids, ...)
  alt Per-expert scale loading
    NVFP4FusedMoEMethod->>ExpertLoaders: load w3_w1 scales per expert
    NVFP4FusedMoEMethod->>ExpertLoaders: load w2 scales per expert
    Note right of ExpertLoaders: No consolidated alphas/scales post-processing
  end
  NVFP4FusedMoEMethod-->>Caller: scales/aligned tensors
sequenceDiagram
  autonumber
  participant PyTest
  participant TestSuite as tests/unittest/_torch/thop/parallel/test_moe.py
  Note over TestSuite: TestMoeFP4 previously skipped
  PyTest->>TestSuite: collect tests
  Note over PyTest,TestSuite: Skip marker removed
  PyTest->>TestSuite: execute TestMoeFP4
  TestSuite-->>PyTest: report result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check ⚠️ Warning — The PR description does not follow the repository template: the PR title placeholder is not replaced, the "## Description" and "## Test Coverage" sections are empty, and the summary text sits outside the designated sections. Move the high-level change summary into "## Description" with details on the issue and solution, and list the relevant tests under "## Test Coverage".
✅ Passed checks (1 passed)
  • Title Check — The title includes the valid JIRA ticket [OMNIML-2336] and the type tag [feat], and succinctly highlights the primary change of exporting the correct scale factor for w4a8 nvfp4 fp8, directly reflecting the PR's main update.


@coderabbitai (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 4882815 and f5f0cb6.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py (0 hunks)
  • tensorrt_llm/_torch/modules/linear.py (1 hunks)
  • tests/unittest/_torch/thop/parallel/test_moe.py (0 hunks)
💤 Files with no reviewable changes (2)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
  • tests/unittest/_torch/thop/parallel/test_moe.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/linear.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/linear.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/linear.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@coderabbitai (Contributor) commented Oct 13, 2025

📝 Walkthrough

Removed the NVFP4 MoE helper for loading FP4 scales/alphas, eliminated the 6.0 scaling adjustments in NVFP4FP8 weight-scale handling, and re-enabled a previously skipped FP4 MoE autotune test.

Changes

  • NVFP4 MoE quantization cleanup — tensorrt_llm/_torch/modules/fused_moe/quantization.py: Deleted NVFP4FusedMoEMethod.load_all_fp4_weight_scales_and_alphas, removing the per-expert FP4 scale/alpha aggregation and FP8 conversion logic.
  • Linear NVFP4FP8 weight-scale adjustment removal — tensorrt_llm/_torch/modules/linear.py: In load_weight_scales (NVFP4FP8 path), removed the divisions/multiplications by 6.0 and the related assertion scaling; the tuple is now returned without 6.0 normalization.
  • Tests re-enabled — tests/unittest/_torch/thop/parallel/test_moe.py: Unskipped TestMoeFP4.test_autotune by removing the @pytest.mark.skip decorator.
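
The test change itself is just the removal of the skip marker; a minimal, hypothetical illustration with a placeholder body (the real test exercises the FP4 MoE autotune path):

import pytest  # shown because the removed marker came from pytest

# Before this PR the whole class carried a skip marker and was never executed:
# @pytest.mark.skip(reason="...")
class TestMoeFP4:
    def test_autotune(self):
        # Placeholder body; the actual test validates FP4 MoE autotuning.
        ...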

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check ⚠️ Warning — The PR description contains only a brief statement outside the designated Description section and leaves the Description and Test Coverage sections empty, failing to follow the required template structure. Populate the Description section with a concise explanation of the issue and the implemented solution, add relevant test cases under Test Coverage, and remove any unused template placeholders.
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title Check — The title clearly references the feature change of exporting scale factors correctly for w4a8 nvfp4 fp8, matches the JIRA ticket format, and succinctly captures the main intent of the PR.


@coderabbitai (Contributor) left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 4882815 and f5f0cb6.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py (0 hunks)
  • tensorrt_llm/_torch/modules/linear.py (1 hunks)
  • tests/unittest/_torch/thop/parallel/test_moe.py (0 hunks)
💤 Files with no reviewable changes (2)
  • tests/unittest/_torch/thop/parallel/test_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py

remove the pytest skip.

Signed-off-by: Shiyang Chen <[email protected]>
@sychen52 (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator)
PR_Github #21245 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21243 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21245 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator)
PR_Github #21243 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16037 completed with status: 'FAILURE'

@sychen52 (Collaborator, Author)
/bot run --reuse-test

@tensorrt-cicd (Collaborator)
PR_Github #21250 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21250 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16041 completed with status: 'FAILURE'

@sychen52 (Collaborator, Author)
/bot run

@tensorrt-cicd (Collaborator)
PR_Github #21253 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21253 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16044 completed with status: 'FAILURE'

@sychen52 (Collaborator, Author)
/bot run --reuse-test

@tensorrt-cicd (Collaborator)
PR_Github #21263 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21263 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16053 completed with status: 'FAILURE'

@sychen52 (Collaborator, Author)
/bot run --reuse-test

@tensorrt-cicd (Collaborator)
PR_Github #21297 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21297 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16078 completed with status: 'FAILURE'

@sychen52 (Collaborator, Author)
/bot run --reuse-test

@tensorrt-cicd (Collaborator)
PR_Github #21368 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)
PR_Github #21368 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16135 completed with status: 'SUCCESS'

@QiJune (Collaborator) left a comment

LGTM

@QiJune QiJune merged commit 6a6124d into NVIDIA:main Oct 15, 2025
7 checks passed
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
…DIA#8180)

Signed-off-by: Shiyang Chen <[email protected]>
Co-authored-by: Shiyang Chen <[email protected]>
Signed-off-by: yufeiwu-nv <[email protected]>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025