Conversation

@JennyLiu-nv
Collaborator

@JennyLiu-nv JennyLiu-nv commented Jan 9, 2026

Summary by CodeRabbit

  • Tests
    • Extended test coverage for multiple new language models including Llama-3.3-Nemotron, DeepSeek-R1, Gemma-3, Qwen3, and Qwen2.5-VL with various quantization variants
    • Added performance and accuracy test configurations for the newly supported models


Add QA perf and func cases for DGX-Spark

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
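
For example, a typical sequence of invocations built only from the options documented above (the stage and GPU names are the same placeholders used in the option descriptions) might look like:

    /bot run
    /bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
    /bot run --stage-list "A10-PyTorch-1" --detailed-log
    /bot reuse-pipeline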

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without careful validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without careful validation can break the top of tree.

@JennyLiu-nv
Collaborator Author

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Jan 9, 2026

📝 Walkthrough

This PR extends the integration test suite with support for new LLM models (Llama 3.3 Nemotron, DeepSeek R1, Gemma 3, Qwen3, Qwen2.5-VL) by updating model path mappings, adding parametrized test cases with dynamic memory expectations, and reorganizing performance test configurations into a YAML-based specification format.

Changes

  • Model Path Mappings (tests/integration/defs/perf/test_perf.py, tests/integration/defs/test_e2e.py): Added 16 new model entries to MODEL_PATH_DICT (Nemotron, DeepSeek R1, Gemma 3, Qwen3, Qwen2.5-VL variants); added Gemma 3 entries to LORA_MODEL_PATH; updated test conditionals to branch on Nemotron-Nano-v2-nvfp4 for quickstart invocation; introduced dynamic memory expectation handling in test_ptp_quickstart_advanced_eagle3 based on model size.
  • Test Suite Lists (tests/integration/test_lists/qa/llm_digits_*.txt): Added 40 new test cases to llm_digits_core.txt; rebalanced llm_digits_func.txt (+39/-20) to add newer models and remove legacy FP8/NVFP4 accuracy tests; pruned 28 performance test entries from llm_digits_perf.txt.
  • Test Configuration (tests/integration/test_lists/qa/llm_digits_perf.yml): New YAML file defining performance test suite constraints (single GPU, Ubuntu, ARM CPU) with extensive test case enumeration for model benchmarking.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Description check ❓ Inconclusive: The PR description is minimal; it lacks a detailed explanation of what changes were made and why, beyond referencing a JIRA ticket and a Google Sheet. Resolution: expand the description to explain the specific purpose of adding these test cases, which models are covered, and why these particular test configurations are needed for DGX-Spark validation.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title clearly describes the main change, adding Spark QA functional and performance test cases, with a proper JIRA ticket reference and test type indicator.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/defs/perf/test_perf.py (1)

95-96: Consider removing trailing slash for consistency.

Line 96 has a trailing slash ("DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B/"), while most other MODEL_PATH_DICT entries (e.g., line 94: "DeepSeek-R1/DeepSeek-R1-Distill-Qwen-32B") do not. While os.path.join() typically handles this, maintaining consistency reduces potential path-handling edge cases.

🔧 Suggested fix
-    "deepseek_r1_distill_llama_70b":
-    "DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B/",
+    "deepseek_r1_distill_llama_70b":
+    "DeepSeek-R1/DeepSeek-R1-Distill-Llama-70B",
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5df03b2 and fdeaac3.

📒 Files selected for processing (6)
  • tests/integration/defs/perf/test_perf.py
  • tests/integration/defs/test_e2e.py
  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/test_lists/qa/llm_digits_perf.txt
  • tests/integration/test_lists/qa/llm_digits_perf.yml
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/qa/llm_digits_perf.txt
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format """<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic

Files:

  • tests/integration/defs/test_e2e.py
  • tests/integration/defs/perf/test_perf.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification

Files:

  • tests/integration/defs/test_e2e.py
  • tests/integration/defs/perf/test_perf.py
🧠 Learnings (10)
📓 Common learnings
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_perf.yml
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_perf.yml
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_perf.yml
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_core.txt
  • tests/integration/test_lists/qa/llm_digits_perf.yml
  • tests/integration/test_lists/qa/llm_digits_func.txt
  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-13T11:07:11.772Z
Learnt from: Funatiq
Repo: NVIDIA/TensorRT-LLM PR: 6754
File: tests/integration/test_lists/test-db/l0_a30.yml:41-47
Timestamp: 2025-08-13T11:07:11.772Z
Learning: In TensorRT-LLM test configuration files like tests/integration/test_lists/test-db/l0_a30.yml, TIMEOUT values are specified in minutes, not seconds.

Applied to files:

  • tests/integration/test_lists/qa/llm_digits_perf.yml
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/integration/defs/test_e2e.py
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid model name from Mistral AI, distinct from the regular Mistral models. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".

Applied to files:

  • tests/integration/defs/perf/test_perf.py
📚 Learning: 2025-08-06T03:47:16.802Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6650
File: tests/integration/test_lists/qa/llm_perf_cluster.yml:33-37
Timestamp: 2025-08-06T03:47:16.802Z
Learning: Ministral is a valid and distinct model family from Mistral AI, separate from their regular Mistral models. Ministral 8B is specifically designed for edge computing and on-device applications, released in October 2024. In TensorRT-LLM test configurations, "ministral_8b" and "ministral_8b_fp8" are correct model identifiers and should not be changed to "mistral_8b".

Applied to files:

  • tests/integration/defs/perf/test_perf.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (15)
tests/integration/test_lists/qa/llm_digits_perf.yml (2)

1-13: Verify hardware specification: aarch64 CPU and GB10 GPU wildcards.

The condition specifies cpu: aarch64 (ARM-based architecture) and wildcard *gb10*, which appear to be DGX-Spark-specific. Confirm this is the intended target hardware and not a typo or copy-paste error.

Additionally, system_gpu_count range of 1 to 1 (exactly 1 GPU) is very restrictive. Verify this constraint is intentional, as it excludes multi-GPU testing on DGX-Spark systems.
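
For reference, a rough sketch of how such a condition block is usually laid out in these QA test-list YAML files (the field names follow the common test-db/qa layout and should be checked against the actual file; the test entry is an ellipsis placeholder):

    llm_digits_perf:
    - condition:
        ranges:
          system_gpu_count:
            gte: 1
            lte: 1
        wildcards:
          gpu:
          - '*gb10*'
          cpu:
          - 'aarch64'
          linux_distribution_name:
          - 'ubuntu*'
      tests:
      - perf/test_perf.py::test_perf[...]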


15-47: Verify test identifiers match perf/test_perf.py parametrization and model definitions.

The 33 test cases reference models, variants (FP4/FP8/NVFP4/BF16), and specific benchmarking parameters. Confirm that:

  1. All model names (e.g., gpt_oss_20b_fp4, qwen3_8b_fp8, nemotron_nano_v2_nvfp4, etc.) are defined in test_perf.py with matching path entries in MODEL_PATH_DICT and HF_MODEL_PATH.
  2. The test parametrization string format matches the actual pytest parametrization in perf/test_perf.py::test_perf.
  3. All new models (Llama 3.3 Nemotron, Qwen3, Phi4, DeepSeek R1, Gemma 3, Qwen 2.5-VL) are properly wired into the test framework.
tests/integration/test_lists/qa/llm_digits_func.txt (1)

1-44: Verify test identifiers and model path mappings exist in test_e2e.py and accuracy test files.

The functional test list references:

  • 35 test_e2e.py parametrized tests with specific model names and HF/project paths
  • 8 accuracy tests from test_llm_api_pytorch.py and test_llm_api_pytorch_multimodal.py

Confirm that:

  1. All model/path pairs (e.g., GPT-OSS-20B-gpt_oss/gpt-oss-20b, Qwen3-30B-A3B_nvfp4_hf-Qwen3/saved_models_Qwen3-30B-A3B_nvfp4_hf) are defined in test_e2e.py model path mappings.
  2. Test methods test_ptp_quickstart_advanced, test_ptp_quickstart_multimodal_phi4mm exist with correct parametrization.
  3. Accuracy test classes (TestLlama3_1_8B, TestQwen2_5_VL_7B, TestQwen3_30B_A3B, TestPhi4MM) exist in the respective test files with corresponding test methods.
  4. Model naming conventions are consistent across files (underscores vs camelCase in test names vs path parameters).
tests/integration/test_lists/qa/llm_digits_core.txt (2)

1-40: Verify test identifiers and model mappings exist in test_e2e.py and accuracy files.

Similar to llm_digits_func.txt, this core test list references parametrized tests. Confirm:

  1. All model/path pairs are defined in test_e2e.py (e.g., Llama3.1-8B-FP8-llama-3.1-model/Llama-3.1-8B-Instruct-FP8).
  2. Test method test_ptp_quickstart_advanced_eagle3 exists and is properly parametrized for the GPT-OSS-120B Eagle3 variant (line 35).
  3. Multimodal test method test_ptp_quickstart_multimodal_phi4mm exists with correct parametrization for Phi4MM variants (lines 12-20).
  4. Accuracy test classes and methods exist in test_llm_api_pytorch.py and test_llm_api_pytorch_multimodal.py.

35-35: New test method: Verify test_ptp_quickstart_advanced_eagle3 implementation.

This core list includes a test for Eagle3 optimization (test_ptp_quickstart_advanced_eagle3), which appears to be a new or specialized test method. Ensure that:

  1. The method is properly implemented in test_e2e.py.
  2. Memory assertions or other Eagle3-specific validations are correctly configured.
  3. The test is integrated with the broader quickstart advanced test framework.
tests/integration/defs/test_e2e.py (4)

1905-1942: LGTM: Comprehensive test coverage expansion.

The new test parameters appropriately extend coverage across multiple model families (Llama, Qwen, Phi, Nemotron) with various quantization levels (BF16, FP4, FP8, NVFP4). The pytest marks correctly gate tests based on GPU architecture requirements.


1946-1946: LGTM: Consistent with new Nemotron-Nano-v2 test parameter.

Correctly extends the conditional branch to handle the newly added Nemotron-Nano-v2-nvfp4 model variant.


1974-1974: LGTM: Appropriate extension for Llama 3.3 70B variant.

Correctly applies the same max_num_tokens constraint to Llama3.3-70B as Llama3.1-70B, which is appropriate given their similar size and memory footprint.


2093-2128: LGTM: Well-designed dynamic memory expectation pattern.

The addition of dynamic expected_mem computation (lines 2103-2107) is a good improvement that makes the test more maintainable and extensible. The memory values (106.71 GiB for GPT-OSS-120B vs 25.2 GiB for Llama-3.1-8B) are reasonable given the respective model sizes, and the comments clearly document the expectations.
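
A minimal sketch of that pattern, assuming the test compares observed peak memory against a per-model budget (function and variable names here are illustrative, not the actual test code):

    def expected_mem_gib(model_name: str) -> float:
        """Pick the peak-memory budget (GiB) based on the model under test."""
        if "gpt_oss_120b" in model_name:  # larger model, larger budget
            return 106.71
        return 25.2  # default budget, e.g. Llama-3.1-8B Eagle3 runs

    # Hypothetical usage: observed_gib would come from parsing the benchmark log.
    observed_gib = 24.9
    assert observed_gib <= expected_mem_gib("llama_3.1_8b"), "peak memory over budget"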

tests/integration/defs/perf/test_perf.py (6)

72-73: LGTM: Consistent Nemotron Super v1.5 FP8 model addition.

The new entry follows the established pattern for Nemotron model variants.


102-104: LGTM: Gemma 3 27B model variants added.

The new entries follow the established pattern for Gemma models. The mixed case in quantization suffixes (fp8 vs FP4) likely reflects the actual directory structure.


116-125: LGTM: Comprehensive Qwen3 model family additions.

The additions provide good coverage of Qwen3 model sizes (8B, 14B, 30B, 32B) with appropriate quantization variants (BF16, FP8, FP4/NVFP4). The path naming conventions are consistent with existing Qwen3 entries.


128-130: LGTM: Vision-language model variants appropriately categorized.

The Qwen2.5-VL entries are correctly placed under the multimodals/ directory, consistent with the treatment of other vision-language models in this configuration.


149-166: LGTM: Phi-4 reasoning and multimodal variants well-structured.

The additions appropriately distinguish between reasoning-focused models (lines 149-151) and multimodal models with separate image/audio configurations (lines 155-166). The quantization variants (BF16, FP8, FP4) provide comprehensive coverage for performance testing.


173-173: LGTM: Nemotron Nano v2 NVFP4 quantization variant added.

The entry appropriately complements the BF16 variant and follows the established naming convention for NVFP4 quantized models.

@tensorrt-cicd
Collaborator

PR_Github #31188 [ run ] triggered by Bot. Commit: fdeaac3

…has the issue torch.AcceleratorError: CUDA error: an illegal instruction was encountered based on 1.2.0rc7 image

Signed-off-by: Jenny Liu <[email protected]>
@JennyLiu-nv
Collaborator Author

JennyLiu-nv commented Jan 9, 2026

I updated the Qwen3-30B-A3B-NVFP4 path from Qwen3/saved_models_Qwen3-30B-A3B_nvfp4_hf to the public Hugging Face model Qwen3/nvidia-Qwen3-30B-A3B-NVFP4, because the old path hit the issue below; the test passes after switching to the Hugging Face model.
I also updated the RTX-pro-6000 case, please confirm. @farazkh80 @pamelap-nvidia Thanks.

...
  File "/usr/local/lib/python3.12/dist-packages/tensorrt_llm/_torch/pyexecutor/model_engine.py", line 3324, in forward
    self.cuda_graph_runner.capture(
  File "/usr/local/lib/python3.12/dist-packages/tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py", line 356, in capture
    with torch.cuda.graph(graph, pool=self.memory_pool):
  File "/usr/local/lib/python3.12/dist-packages/torch/cuda/graphs.py", line 242, in __enter__
    torch.cuda.synchronize()
  File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 1083, in synchronize
    return torch._C._cuda_synchronize()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.AcceleratorError: CUDA error: an illegal instruction was encountered

@tensorrt-cicd
Collaborator

PR_Github #31188 [ run ] completed with state SUCCESS. Commit: fdeaac3
/LLM/main/L0_MergeRequest_PR pipeline #24100 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JennyLiu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31247 [ run ] triggered by Bot. Commit: 2a581f0

@tensorrt-cicd
Collaborator

PR_Github #31247 [ run ] completed with state SUCCESS. Commit: 2a581f0
/LLM/main/L0_MergeRequest_PR pipeline #24148 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@farazkh80
Collaborator

LGTM, a couple tiny comments. Thanks @JennyLiu-nv

@JennyLiu-nv
Collaborator Author

JennyLiu-nv commented Jan 12, 2026

LGTM, a couple tiny comments. Thanks @JennyLiu-nv

Thanks Faraz for all the comments. Except for the following one, all comments are resolved.
That one will be addressed in a new PR, since it may take some time to implement.

@JennyLiu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31442 [ run ] triggered by Bot. Commit: 3a1c589

@tensorrt-cicd
Collaborator

PR_Github #31442 [ run ] completed with state SUCCESS. Commit: 3a1c589
/LLM/main/L0_MergeRequest_PR pipeline #24303 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JennyLiu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31479 [ run ] triggered by Bot. Commit: 3a1c589

@tensorrt-cicd
Collaborator

PR_Github #31479 [ run ] completed with state SUCCESS. Commit: 3a1c589
/LLM/main/L0_MergeRequest_PR pipeline #24335 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JennyLiu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31497 [ run ] triggered by Bot. Commit: 747ae7d

@tensorrt-cicd
Collaborator

PR_Github #31497 [ run ] completed with state SUCCESS. Commit: 747ae7d
/LLM/main/L0_MergeRequest_PR pipeline #24349 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JennyLiu-nv
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #31522 [ run ] triggered by Bot. Commit: 747ae7d

@tensorrt-cicd
Collaborator

PR_Github #31522 [ run ] completed with state SUCCESS. Commit: 747ae7d
/LLM/main/L0_MergeRequest_PR pipeline #24370 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@farazkh80
Collaborator

/bot run

Collaborator

@farazkh80 farazkh80 left a comment


Thanks for the changes, LGTM.

@tensorrt-cicd
Collaborator

PR_Github #31592 [ run ] triggered by Bot. Commit: 747ae7d

@tensorrt-cicd
Collaborator

PR_Github #31592 [ run ] completed with state SUCCESS. Commit: 747ae7d
/LLM/main/L0_MergeRequest_PR pipeline #24431 completed with status: 'SUCCESS'

@JennyLiu-nv
Collaborator Author

/bot run

@JennyLiu-nv JennyLiu-nv requested a review from ruodil January 13, 2026 00:29
@tensorrt-cicd
Collaborator

PR_Github #31637 [ run ] triggered by Bot. Commit: 9c75c05

@tensorrt-cicd
Collaborator

PR_Github #31637 [ run ] completed with state SUCCESS. Commit: 9c75c05
/LLM/main/L0_MergeRequest_PR pipeline #24469 completed with status: 'SUCCESS'

@JennyLiu-nv
Collaborator Author

JennyLiu-nv commented Jan 13, 2026

@ruodil please help merge: all the tests have been reviewed by @farazkh80 and @pamelap-nvidia, and CI has passed. Thanks a lot.

@ruodil ruodil merged commit 2967d29 into NVIDIA:main Jan 13, 2026
5 checks passed
videodanchik pushed a commit to videodanchik/TensorRT-LLM that referenced this pull request Jan 14, 2026