Conversation

@chzblych (Collaborator) commented Dec 31, 2025

Summary by CodeRabbit

  • Improvements

    • Enhanced test execution reliability through improved timeout handling and retry cleanup mechanisms.
    • Strengthened job synchronization and locking for parallel test execution environments.
    • Increased wait intervals for configuration file operations to improve stability.
  • Diagnostics

    • Added enhanced debugging output when pytest encounters usage errors.
    • Improved logging visibility across test execution paths for better troubleshooting.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that do not match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
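
For example, a typical invocation that runs only selected test stages with fail-fast disabled might look like this (the stage name is illustrative):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast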

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@chzblych requested review from a team as code owners on December 31, 2025 09:53
@chzblych (Collaborator, Author) commented:

/bot run --disable-fail-fast

coderabbitai bot (Contributor) commented Dec 31, 2025

📝 Walkthrough

Changes span Jenkins Groovy test scripts and Slurm shell scripts, introducing centralized timeout handling, improved retry cleanup via keep-list approach, job-specific locking for parallel Slurm installs, extended wait intervals, debug diagnostics for pytest failures, and minor logging adjustments.

Changes

Jenkins Test Configuration: jenkins/L0_Test.groovy
Centralized partition timeout computation in runLLMTestlistWithAgent; added cluster selection logging in runLLMTestlistWithSbatch and the Slurm path; implemented selective file cleanup during retry using a keep-list approach (filesToKeepWhenRetry) instead of individual removals; extended the Slurm retry wait from 60s to 120s; replaced exact GPU type string checks (gb10x) with substring containment checks.

Slurm Install Mechanism: jenkins/scripts/slurm_install.sh
Introduced a job-specific lock file (based on SLURM_JOB_ID and SLURM_NODEID) to coordinate parallel task execution; removed the static install_lock.lock usage; added primary task cleanup (SLURM_LOCALID == 0); replaced fixed-interval polling (5s sleep) with dynamic lock checks at longer wait intervals (10s sleep).

Slurm Runtime: jenkins/scripts/slurm_run.sh
Increased the coverage config file save wait timeout from 10s to 30s in the non-root branch; added a conditional DEBUG block (triggered on pytest exit code 4) to log directory state, conftest.py/pytest.ini checksums, and conftest.py importability.

Test Configuration: tests/integration/defs/conftest.py
Minor log message formatting: removed the "Warning: " prefix and newline characters in the get_gpu_memory_wo_pynvml() warning output.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description is essentially the template with placeholder comments and no actual content filled in; the Description and Test Coverage sections are empty, and the checklist is not meaningfully completed. Resolution: provide a clear description explaining what issues these CI tweaks address, why the changes are needed, and which tests or validation methods safeguard the changes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title check: The title '[None][ci] Some tweaks for the CI pipeline' clearly summarizes the main change (general CI pipeline improvements) and is directly related to the changeset's focus on Jenkins scripts and CI-related modifications.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d944430 and 38d3976.

📒 Files selected for processing (4)
  • jenkins/L0_Test.groovy
  • jenkins/scripts/slurm_install.sh
  • jenkins/scripts/slurm_run.sh
  • tests/integration/defs/conftest.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: some_file.py
Python classes should use PascalCase naming: class SomeClass
Python functions and methods should use snake_case naming: def my_awesome_function():
Python local variables should use snake_case naming: my_variable = ...
Python variable names that start with a number should be prefixed with 'k': k_99th_percentile = ...
Python global variables should use upper snake_case with prefix 'G': G_MY_GLOBAL = ...
Python constants should use upper snake_case naming: MY_CONSTANT = ...
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic

Files:

  • tests/integration/defs/conftest.py
**/*.{cpp,h,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification

Files:

  • tests/integration/defs/conftest.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: yuanjingx87
Repo: NVIDIA/TensorRT-LLM PR: 7176
File: jenkins/L0_Test.groovy:361-389
Timestamp: 2025-08-22T19:08:10.822Z
Learning: In Slurm job monitoring scripts, when jobs have built-in timeouts configured (via --time parameter or partition/system timeouts), an additional timeout mechanism in the monitoring loop is typically unnecessary. When a Slurm job times out, it gets terminated and removed from the active queue, causing `squeue -j $jobId` to return non-zero and break monitoring loops naturally. The job's final status can then be checked via `sacct` to determine if it failed due to timeout.
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7785
File: tests/integration/defs/perf/utils.py:321-333
Timestamp: 2025-09-17T06:01:01.836Z
Learning: In test infrastructure code for disaggregated serving tests, prefer logging errors and continuing execution rather than raising exceptions on timeout, to avoid disrupting test cleanup and causing cascading failures.
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • jenkins/scripts/slurm_run.sh
📚 Learning: 2025-08-22T19:08:10.822Z
Learnt from: yuanjingx87
Repo: NVIDIA/TensorRT-LLM PR: 7176
File: jenkins/L0_Test.groovy:361-389
Timestamp: 2025-08-22T19:08:10.822Z
Learning: In Slurm job monitoring scripts, when jobs have built-in timeouts configured (via --time parameter or partition/system timeouts), an additional timeout mechanism in the monitoring loop is typically unnecessary. When a Slurm job times out, it gets terminated and removed from the active queue, causing `squeue -j $jobId` to return non-zero and break monitoring loops naturally. The job's final status can then be checked via `sacct` to determine if it failed due to timeout.

Applied to files:

  • jenkins/L0_Test.groovy
🪛 Ruff (0.14.10)
tests/integration/defs/conftest.py

2692-2692: f-string without any placeholders

Remove extraneous f prefix

(F541)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
jenkins/scripts/slurm_run.sh (2)

64-69: Non-root coverage-config wait extension is safe

Increasing the non-root sleep to 30 seconds after rank 0 writes the coverage config just adds margin to avoid races; behavior is otherwise unchanged and looks fine.
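
As a rough shell sketch of the pattern being described, with hypothetical names (the real script's config file name, rank variable, and whether it uses a fixed sleep or a bounded wait may differ):

if [ "$SLURM_PROCID" = "0" ]; then
    # Rank 0 writes the coverage config; write_coverage_config is a placeholder.
    write_coverage_config > .coveragerc
else
    # Other ranks wait up to 30 seconds for the file to appear.
    waited=0
    while [ ! -f .coveragerc ] && [ "$waited" -lt 30 ]; do
        sleep 1
        waited=$((waited + 1))
    done
fi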


111-128: Pytest exit-code‑4 debug block is well-scoped

The new diagnostics for pytest_exit_code == 4 (listing directories, hashing conftest.py/pytest.ini, and checking importability) are narrowly gated and non-invasive for other exit codes. This should help triage the intermittent “unrecognized arguments” issue without affecting normal runs.
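
A shell sketch of what such a gated block could look like, assuming the exit code is captured in pytest_exit_code (the exact commands in the script may differ):

if [ "$pytest_exit_code" -eq 4 ]; then
    echo "DEBUG: pytest usage error (exit code 4), dumping workspace state"
    ls -la .                                   # directory state
    md5sum conftest.py pytest.ini 2>/dev/null  # checksums of key config files
    # Verify that conftest.py is importable at all.
    python3 -c "import conftest" \
        && echo "DEBUG: conftest.py imports cleanly" \
        || echo "DEBUG: conftest.py failed to import"
fi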

jenkins/scripts/slurm_install.sh (1)

15-22: Job/node‑scoped install lock improves Slurm concurrency handling

Using install_lock_job_${SLURM_JOB_ID}_node_${SLURM_NODEID}.lock and clearing any stale instance before the primary’s install sequence makes the lock both job‑ and node‑specific, avoiding cross‑job contention while preserving the simple “primary installs, others wait” contract. The 10s polling interval for followers is reasonable here.

Also applies to: 37-42
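
As a rough illustration of the "primary installs, others wait" contract (the completion marker and install function here are assumptions of this sketch, not the script's exact code):

LOCK_FILE="install_lock_job_${SLURM_JOB_ID}_node_${SLURM_NODEID}.lock"
DONE_FILE="${LOCK_FILE}.done"        # completion marker, assumed for this sketch

if [ "$SLURM_LOCALID" = "0" ]; then
    rm -f "$LOCK_FILE" "$DONE_FILE"  # primary clears stale state from retried jobs
    touch "$LOCK_FILE"
    run_install_steps                # placeholder for the actual install sequence
    rm -f "$LOCK_FILE"
    touch "$DONE_FILE"               # signal followers that the install finished
else
    until [ -f "$DONE_FILE" ]; do
        sleep 10                     # followers poll at the longer 10s interval
    done
fi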

jenkins/L0_Test.groovy (5)

698-701: Centralized Slurm partition timeout for agent path looks correct

Logging "${stageName} Slurm partition timeout: ${partition.time}" and computing partitionTimeout = partition?.time ?: SlurmConfig.DEFAULT_TIMEOUT_SHORT once, then passing it into runInDockerOnNodeMultiStage / runInEnrootOnNode, keeps Jenkins timeouts aligned with the partition’s configured limit while retaining the existing 10‑minute safety margin in those helpers.


940-944: Added “Selected Cluster” log in sbatch path improves observability

Printing Selected Cluster: ${cluster.name} during the “[stageName] Initializing Test” stage mirrors the agent path logging and makes it easier to correlate sbatch jobs with their backing Slurm cluster in logs.


1515-1526: Keep‑list based workspace cleanup on Slurm retries is a solid robustness improvement

Defining filesToKeepWhenRetry and generating findKeepWhenRetryArgs to protect the run/install/bash-utils scripts, test/waives lists, and coverage config, then using:

find "${jobWorkspace}" -maxdepth 1 -mindepth 1 ${findKeepWhenRetryArgs} -exec rm -rf {} +

after cancelling any previous job (with a 120s grace window) gives you a clean job workspace between retries without risking deletion of core control files. The basename‑based matching and -maxdepth 1 guard make this safe even if additional subdirs exist under jobWorkspace.

Also applies to: 1231-1242
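
For readers unfamiliar with the pattern, a keep-list can be expanded into find exclusion arguments roughly as follows (a shell sketch with assumed file names; the Groovy code builds the equivalent argument string):

# Assumed keep-list; the real filesToKeepWhenRetry entries may differ.
files_to_keep="slurm_run.sh slurm_install.sh bash_utils.sh test_list.txt waives.txt .coveragerc"

keep_args=""
for f in $files_to_keep; do
    keep_args="$keep_args ! -name $f"   # exclude each kept file by basename
done

# Delete everything else at the top level of the job workspace
# (word splitting of $keep_args is intentional; the names contain no spaces).
find "$jobWorkspace" -maxdepth 1 -mindepth 1 $keep_args -exec rm -rf {} +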


1678-1697: gb10x / DGX Spark handling and no‑driver GPU detection logic look consistent

Special‑casing type.contains("gb10x") to:

  • switch targetCloud to "nvks-sparks-cloud",
  • cap memory to 64Gi with appropriate tolerations, and
  • set NVIDIA_VISIBLE_DEVICES / NVIDIA_DRIVER_CAPABILITIES

matches the DGX Spark requirements, while the updated REQUIRED_NO_DRIVER_TYPES.any { type.contains(it) } plus the gb10x branch under that block correctly cover variants like dgx-h200-x4 and gb10x types without relying on exact string equality.


3251-3287: SBSA (GH200/GB10/GB200) test configs and Slurm wiring are coherent for AArch64

The new SBSA maps:

  • SBSATestConfigs (e.g., "GH200-TensorRT-Post-Merge-1", "GB10-PyTorch-1"),
  • SBSASlurmTestConfigs (single‑node SBSA Slurm tests), and
  • multiNodesSBSAConfigs (GB200 multi‑node variants, including perf‑sanity stages),

plus the AARCH64_TRIPLE‑guarded wiring that:

  • rebuilds parallelJobs from SBSATestConfigs using createKubernetesPodConfig(…, arch="arm64"),
  • adds SBSA Slurm jobs via runLLMTestlistOnSlurm with appropriate gpuCount / nodeCount, and
  • keeps fullSet in sync for stage-name validation,

all fit the existing pattern used for x86 stages and cleanly separate SBSA behavior behind env.targetArch == AARCH64_TRIPLE. I don’t see gaps in how these new stages are surfaced or filtered.

Also applies to: 3290-3319



coderabbitai bot (Contributor) left a comment
Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/conftest.py (1)

2688-2693: Remove unnecessary f-string prefix to satisfy Ruff (F541)

Line [2692] uses an f-string without any placeholders, which Ruff flags as F541. Drop the f prefix to avoid the lint error.

Proposed fix
-    logger.warning(
-        f"pynvml not available, using fallback commands for memory monitoring")
+    logger.warning(
+        "pynvml not available, using fallback commands for memory monitoring")

@tensorrt-cicd (Collaborator) commented:

PR_Github #30272 [ run ] triggered by Bot. Commit: 38d3976

@tensorrt-cicd (Collaborator) commented:

PR_Github #30272 [ run ] completed with state SUCCESS. Commit: 38d3976
/LLM/main/L0_MergeRequest_PR pipeline #23308 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again
