[#10056][chore] AutoDeploy: Enable Nemo SuperV3 accuracy test #10308
base: main
Conversation
📝 Walkthrough
The changes introduce latent-space Mixture of Experts (MoE) support for Nemotron models with optional latent projections, adjust MLP input/output dimensions when using latent space, update model loading for embedding-key compatibility, and modify test configurations to use the new SuperV3 model variant with dynamic paths.
Sequence Diagram
```mermaid
sequenceDiagram
    participant Input as Input Token
    participant fc1_proj as fc1_latent_proj
    participant Router as Expert Router
    participant Experts as MoE Experts
    participant fc2_proj as fc2_latent_proj
    participant Shared as Shared Experts
    participant Output as Output
    Input->>fc1_proj: Project to latent space
    fc1_proj->>Router: Latent representation
    Router->>Experts: Route to experts<br/>(latent space)
    Experts->>fc2_proj: Expert outputs
    fc2_proj->>Output: Project back<br/>to hidden space
    Input->>Shared: Process in parallel
    Shared->>Output: Shared expert outputs
    Output->>Output: Combine expert<br/>+ shared outputs
```
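The flow in the diagram can be read as the following toy PyTorch module (an illustrative sketch only: the projection names `fc1_latent_proj`/`fc2_latent_proj` follow the review below, while the router, experts, and top-1 routing are simplified stand-ins for the fused `torch.ops.auto_deploy.torch_moe` path used by the real model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentMoESketch(nn.Module):
    """Toy latent-space MoE following the diagram above (not the real model)."""

    def __init__(self, hidden_size, latent_size, intermediate_size, num_experts):
        super().__init__()
        # Projections into and out of the smaller latent space.
        self.fc1_latent_proj = nn.Linear(hidden_size, latent_size, bias=False)
        self.fc2_latent_proj = nn.Linear(latent_size, hidden_size, bias=False)
        # Router and experts operate entirely in latent space.
        self.router = nn.Linear(latent_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_size, intermediate_size, bias=False),
                nn.ReLU(),
                nn.Linear(intermediate_size, latent_size, bias=False),
            )
            for _ in range(num_experts)
        )
        # Shared expert runs in parallel, in hidden space.
        self.shared_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size, bias=False),
            nn.ReLU(),
            nn.Linear(intermediate_size, hidden_size, bias=False),
        )

    def forward(self, x):
        x_flat = x.reshape(-1, x.shape[-1])
        # 1) Project to latent space before routing.
        latent = self.fc1_latent_proj(x_flat)
        # 2) Toy top-1 routing (the real model uses top-k and a fused MoE op).
        weights, idx = F.softmax(self.router(latent), dim=-1).max(dim=-1)
        out_latent = torch.zeros_like(latent)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out_latent[mask] = weights[mask, None] * expert(latent[mask])
        # 3) Project back to hidden space and combine with the shared expert.
        out = self.fc2_latent_proj(out_latent) + self.shared_expert(x_flat)
        return out.reshape_as(x)


# Smoke test of the toy module.
m = LatentMoESketch(hidden_size=64, latent_size=16, intermediate_size=32, num_experts=4)
assert m(torch.randn(2, 3, 64)).shape == (2, 3, 64)
```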
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (2)
tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py (2)
310-346: LGTM! Latent-space routing logic is correct.
The dimension flow through latent-space MoE is accurate:
- Project input to latent space (if configured)
- Route through experts operating in latent space
- Project back to hidden space
- Combine with shared experts (operating in hidden space)
The `hasattr` check on line 322 is always True (the attributes are set in `__init__` to either Linear or Identity), but this doesn't affect correctness since `nn.Identity()` is a no-op. The conditional branches could be removed to simplify the code, but the current implementation is acceptable.
💡 Optional: Simplify by removing the unnecessary hasattr check
Since the projection attributes always exist (set to Identity when not needed), you could simplify:
```diff
-    # Check if this is a latent MOE (has fc1_latent_proj and fc2_latent_proj)
-    has_latent_proj = hasattr(self, "fc1_latent_proj") and hasattr(self, "fc2_latent_proj")
-
-    if has_latent_proj:
-        # Latent MOE: project to latent space before routing
-        x_flat = self.fc1_latent_proj(x_flat)
+    # Project to latent space (no-op via Identity if not configured)
+    x_flat = self.fc1_latent_proj(x_flat)

     # Route through experts (operates in latent space if latent MOE, full space otherwise)
     out_flat = torch.ops.auto_deploy.torch_moe(...)

-    if has_latent_proj:
-        # Latent MOE: project back from latent space
-        out_flat = self.fc2_latent_proj(out_flat)
+    # Project back from latent space (no-op via Identity if not configured)
+    out_flat = self.fc2_latent_proj(out_flat)
```
Alternatively, store a boolean flag `self.use_latent_proj` in `__init__` for clearer intent.
575-580: Consider removing the `break` statement for robustness, though it's not a practical issue for this specific model.
The `nn.Embedding` layer at line 565 has only a `weight` parameter (no bias), so the state_dict will contain exactly one key matching `"embedding."` (e.g., `embedding.weight`). The `break` on line 580 exits correctly after renaming this single key.
However, removing the `break` improves code robustness and clarity without any downside:
- Makes the intent explicit: rename all matching keys (consistent with the loop structure)
- Removes a potential hidden bug if the model is modified to include additional embedding-related parameters
- Clarifies that the limitation is inherent to the model, not to the hook logic
```diff
 def load_hook(self, state_dict, prefix, *args):
     # rename embedding if needed (required for SuperV3)
     for k in list(state_dict.keys()):
         if "embedding." in k:
             state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)
-            break
```
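To make the rename concrete, here is a tiny standalone illustration of what the hook does to checkpoint keys (the key names below are invented for the example; only the `"embedding."` to `"embeddings."` substitution mirrors the hook):

```python
# Standalone illustration of the rename performed by load_hook above.
# Key names are made up for this example.
state_dict = {
    "backbone.embedding.weight": "embedding-tensor",
    "backbone.layers.0.mixer.in_proj.weight": "other-tensor",
}
for k in list(state_dict.keys()):
    if "embedding." in k:
        state_dict[k.replace("embedding.", "embeddings.")] = state_dict.pop(k)

assert "backbone.embeddings.weight" in state_dict
assert "backbone.embedding.weight" not in state_dict
assert "backbone.layers.0.mixer.in_proj.weight" in state_dict
```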
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py
- tests/integration/test_lists/test-db/l0_dgx_h200.yml
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: `some_file.py`
Python classes should use PascalCase naming: `class SomeClass`
Python functions and methods should use snake_case naming: `def my_awesome_function():`
Python local variables should use snake_case naming: `my_variable = ...`
Python variable names that start with a number should be prefixed with 'k': `k_99th_percentile = ...`
Python global variables should use upper snake_case with prefix 'G': `G_MY_GLOBAL = ...`
Python constants should use upper snake_case naming: `MY_CONSTANT = ...`
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic
Files:
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py
- tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py
**/*.{cpp,h,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification
Files:
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py
- tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py
🧠 Learnings (7)
📓 Common learnings
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py:98-116
Timestamp: 2025-10-20T17:07:18.745Z
Learning: In NemotronH models (tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py), the gate (self.gate) returns topk_indices and topk_weights that are already in the correct shape to be passed directly to torch_ops.auto_deploy.torch_moe without needing to reshape them when hidden_states is flattened.
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/integration/test_lists/test-db/l0_dgx_h200.yml
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/integration/test_lists/test-db/l0_dgx_h200.yml
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tests/integration/defs/accuracy/test_llm_api_autodeploy.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/integration/defs/accuracy/test_llm_api_autodeploy.py
📚 Learning: 2025-10-20T17:07:18.745Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py:98-116
Timestamp: 2025-10-20T17:07:18.745Z
Learning: In NemotronH models (tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py), the gate (self.gate) returns topk_indices and topk_weights that are already in the correct shape to be passed directly to torch_ops.auto_deploy.torch_moe without needing to reshape them when hidden_states is flattened.
Applied to files:
tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py
🧬 Code graph analysis (1)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (2)
tests/unittest/_torch/modeling/test_modeling_out_of_tree.py (1)
- sampling_params (58-59)

tensorrt_llm/_torch/auto_deploy/models/factory.py (2)
- model (125-127)
- tokenizer (130-132)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (6)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (2)
268-276: LGTM! Test configuration is consistent.
The test configuration properly aligns the memory requirements (180GB), device requirements (4 devices minimum), and world_size parameter (4). The substantial increase in memory requirements from the MOE test (32GB → 180GB) is appropriate given the larger model size (120B parameters vs 30B).
The comment on line 268 indicates potential for memory optimization, which could be explored in future iterations if needed.
239-239: This review comment is incorrect. The code at line 239 correctly uses the `llm_models_root()` function to dynamically resolve the model path, following the established pattern used for other Nemotron models in the same file (lines 100, 157-158). The `llm_models_root()` function already handles path resolution with environment variable support and includes built-in assertions to validate that the model root directory exists at runtime. Model availability is an environment setup and deployment concern, not a code issue.
Likely an incorrect or invalid review comment.
tests/integration/test_lists/test-db/l0_dgx_h200.yml (2)
136-137: Inconsistency: AI summary states "replaced" but both tests are present.
The AI summary indicates that the Nemotron MOE test was replaced with SuperV3, but the code shows both `TestNemotronMOE::test_bf16` (line 136) and `TestNemotronSuperV3::test_bf16` (line 137) are present. This is an addition, not a replacement.
137-137: The test class, method, and SuperV3 model are properly in place.
The `TestNemotronSuperV3` class and `test_bf16` method exist in tests/integration/defs/accuracy/test_llm_api_autodeploy.py, and the SuperV3 model is configured with `MODEL_NAME = "nvidia/Nemotron-Super-V3"` and `MODEL_PATH_BF16` referencing the model path. The PR addition is correct.
tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h.py (2)
253-270: LGTM! Latent-space MLP dimension handling is correct.
The conditional logic properly separates expert MLPs (operating in latent space when `moe_latent_size` is configured) from shared/regular MLPs (operating in hidden space). The dimension flow ensures:
- Expert MLPs with latent support: `moe_latent_size → intermediate_size → moe_latent_size`
- Shared/regular MLPs: `hidden_size → intermediate_size → hidden_size`
This design maintains backward compatibility for models without `moe_latent_size`.
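As an illustration of that dimension flow (the helper name `mlp_in_out_dims` and the numeric sizes are invented for the example, not taken from the model code):

```python
from typing import Optional, Tuple


def mlp_in_out_dims(
    hidden_size: int,
    intermediate_size: int,
    moe_latent_size: Optional[int],
    is_expert: bool,
) -> Tuple[int, int, int]:
    """Return (in_features, intermediate, out_features) for an MLP block."""
    if is_expert and moe_latent_size is not None:
        # Expert MLPs operate in latent space: latent -> intermediate -> latent.
        return moe_latent_size, intermediate_size, moe_latent_size
    # Shared/regular MLPs stay in hidden space: hidden -> intermediate -> hidden.
    return hidden_size, intermediate_size, hidden_size


# Example with made-up sizes: hidden_size=8192, intermediate_size=32768, moe_latent_size=2048.
assert mlp_in_out_dims(8192, 32768, 2048, is_expert=True) == (2048, 32768, 2048)
assert mlp_in_out_dims(8192, 32768, 2048, is_expert=False) == (8192, 32768, 8192)
assert mlp_in_out_dims(8192, 32768, None, is_expert=True) == (8192, 32768, 8192)
```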
276-308: LGTM! Latent projection initialization is well-structured.
The conditional initialization of latent projections is correct:
- When `moe_latent_size` is configured: Linear layers project between `hidden_size` and `moe_latent_size`
- Otherwise: `nn.Identity()` ensures a no-op, keeping the code path unified
The expert/shared distinction via the `is_expert` parameter correctly ensures experts operate in latent space while shared experts remain in hidden space.
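A minimal sketch of the Linear-vs-Identity pattern described above (illustrative only; the class name and sizes are invented, and the real module also wires the router and experts):

```python
from typing import Optional

import torch
import torch.nn as nn


class LatentProjectionsSketch(nn.Module):
    """Conditionally create latent projections; fall back to Identity no-ops."""

    def __init__(self, hidden_size: int, moe_latent_size: Optional[int]) -> None:
        super().__init__()
        if moe_latent_size is not None:
            # Project hidden -> latent before routing and latent -> hidden after.
            self.fc1_latent_proj = nn.Linear(hidden_size, moe_latent_size, bias=False)
            self.fc2_latent_proj = nn.Linear(moe_latent_size, hidden_size, bias=False)
        else:
            # No-op projections keep the forward code path unified.
            self.fc1_latent_proj = nn.Identity()
            self.fc2_latent_proj = nn.Identity()


# Both variants expose the same call sites, so forward() needs no branching.
x = torch.randn(4, 64)
assert LatentProjectionsSketch(64, 16).fc1_latent_proj(x).shape == (4, 16)
assert LatentProjectionsSketch(64, None).fc1_latent_proj(x).shape == (4, 64)
```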
Fix MLP dims to support latent dimension
Signed-off-by: Gal Hubara Agam <[email protected]>
Signed-off-by: Gal Hubara Agam <[email protected]>
e0b486b to 645b459
Signed-off-by: Gal Hubara Agam <[email protected]>
/bot run
PR_Github #30016 [ run ] triggered by Bot. Commit:
PR_Github #30016 [ run ] completed with state
/bot run
PR_Github #30051 [ run ] triggered by Bot. Commit:
PR_Github #30051 [ run ] completed with state
Description
Test Coverage
accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_bf16
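For reference, this test can typically be invoked directly via its pytest node id from the repository root (an illustrative command; the integration harness may additionally require environment setup such as the models root directory):

```
pytest "tests/integration/defs/accuracy/test_llm_api_autodeploy.py::TestNemotronSuperV3::test_bf16"
```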
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`
Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
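For example, a trigger comment combining a few of the documented flags might look like this (illustrative only):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
```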
kill
`kill` : Kill all running builds associated with pull request.
skip
`skip --comment COMMENT` : Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
`reuse-pipeline` : Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.