[https://nvbugs/5655885][fix] fix invalid instruction error in 2shot ar kernel on Ampere #9394

Merged
yilin-void merged 2 commits into NVIDIA:main from yilin-void:fix/2shot_ar
Dec 15, 2025

Conversation

@yilin-void
Collaborator

@yilin-void yilin-void commented Nov 24, 2025

#9086
The .acquire and .release qualifiers for the PTX fence instruction require sm_90 or higher, so they must not be emitted on Ampere (sm_80/sm_86).
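For context, a minimal sketch of the kind of architecture gating this fix relies on (the helper name release_fence_sys is illustrative, not the actual kernel code):

```cuda
// Minimal sketch, not the actual kernel code: emit the sm_90-only fence
// qualifier under an architecture guard and fall back to an instruction
// that is valid on Ampere (sm_80/sm_86).
__device__ __forceinline__ void release_fence_sys()
{
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 900)
    // fence.release.sys requires sm_90 or higher; emitting it on Ampere
    // is what triggers the invalid-instruction error this PR fixes.
    asm volatile("fence.release.sys;" ::: "memory");
#else
    // Pre-sm_90 fallback: a full system-scope fence, available on all
    // supported architectures.
    __threadfence_system();
#endif
}
```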

Summary by CodeRabbit

  • Performance
    • Enhanced barrier synchronization efficiency for newer GPU architectures
    • Maintained backward compatibility with older GPU models


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@yilin-void yilin-void requested a review from hyukn November 24, 2025 07:23
@yilin-void
Collaborator Author

/bot run

@yilin-void yilin-void self-assigned this Nov 24, 2025
@coderabbitai
Contributor

coderabbitai bot commented Nov 24, 2025

📝 Walkthrough

Walkthrough

Enhances barrier synchronization in Barrier::sync by introducing arch-specific memory operations for CUDA compute capability 9.0 and newer, replacing previous store operations with inline-assembly st.global.relaxed.sys and adding fence.release.sys while preserving legacy code path for older architectures.

Changes

Cohort / File(s) | Change Summary
Barrier synchronization optimization: cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu | Added architecture-specific conditional compilation for compute capability >= 9.0 (sm_90) to use an inline-assembly store (st.global.relaxed.sys) and a release fence instead of the previous st_flag path. The legacy code path is retained for older architectures, and the ABA-avoidance loop is now arch-gated.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • Inline assembly correctness: Verify st.global.relaxed.sys syntax and memory ordering semantics match CUDA PTX documentation
  • Architecture checks: Confirm __CUDA_ARCH__ conditional logic correctly routes older vs. newer paths
  • Memory fence semantics: Validate that fence.release.sys provides required synchronization for newer architectures
  • Backward compatibility: Ensure legacy st_flag path remains functional for compute capabilities < 9.0

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name | Status | Explanation | Resolution
Description check | ⚠️ Warning | The PR description contains only the template structure with no actual content provided. Required sections like Description and Test Coverage are empty, and the PR checklist is unchecked except for one box. | Fill in the Description section explaining the issue and solution, add Test Coverage details, and ensure the PR checklist items are properly reviewed and checked.
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title follows the required format with ticket ID and type, and clearly identifies the fix for an invalid instruction error in the 2shot AR kernel on Ampere architecture.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu (1)

137-147: Confirm memory‑ordering semantics of relaxed store + post‑loop fence.release.sys

On SM_90+ the barrier flag update changed from a per‑flag release store:

  • old behavior (via st_flag): st.global.release.sys.b32 for each m_target_flag + flag_idx * NRanks

to:

  • new behavior: st.global.relaxed.sys.b32 in the loop, followed by a single fence.release.sys after the loop, while readers still use ld.global.acquire.sys.b32 via ld_flag.

This is a non‑trivial change in synchronization semantics:

  • Previously, each individual flag write was a release operation paired with the acquire load in ld_flag, which is the standard pattern for a release/acquire barrier.
  • Now, the flag writes are relaxed, and the only release operation is a fence that executes after all the relaxed flag stores.

Given how subtle the PTX memory model is, can you please:

  1. Double‑check that “relaxed flag store(s) followed by fence.release.sys” is formally equivalent (for the purposes of this barrier) to the prior per‑store st.global.release.sys.b32, i.e., that a thread performing ld.global.acquire.sys.b32 on m_current_flag cannot observe the updated flag without also having visibility of all writes the barrier is meant to order?

  2. Consider whether the fence should instead precede the relaxed stores (or the stores should remain release) to more obviously preserve the original release/acquire pattern, unless there is a documented PTX guarantee that the current ordering is safe.

If you’ve validated this against the PTX ISA docs or internal memory‑model guidance, a short code comment here explaining the rationale (and why fence‑after is sufficient) would make this much easier to maintain.
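For readers less familiar with the PTX memory model, a simplified sketch of the two publication patterns being contrasted here (pointer and function names are illustrative, not the actual kernel code):

```cuda
#include <cstdint>

// (a) Prior pattern: each flag write is itself a release store, pairing
//     directly with the acquire load on the reader side.
__device__ __forceinline__ void store_flag_release(uint32_t* flag, uint32_t val)
{
    asm volatile("st.global.release.sys.b32 [%0], %1;" ::"l"(flag), "r"(val) : "memory");
}

// (b) New sm_90+ pattern: the flag writes are relaxed, and a single
//     fence.release.sys is issued once after the loop of stores.
__device__ __forceinline__ void store_flag_relaxed(uint32_t* flag, uint32_t val)
{
    asm volatile("st.global.relaxed.sys.b32 [%0], %1;" ::"l"(flag), "r"(val) : "memory");
}

__device__ __forceinline__ void release_fence_after_stores()
{
    // sm_90+ only; the question above is whether placing this after the
    // relaxed stores preserves the original release/acquire pairing.
    asm volatile("fence.release.sys;" ::: "memory");
}

// Reader side in both cases: an acquire load on the flag.
__device__ __forceinline__ uint32_t load_flag_acquire(uint32_t* flag)
{
    uint32_t val;
    asm volatile("ld.global.acquire.sys.b32 %0, [%1];" : "=r"(val) : "l"(flag) : "memory");
    return val;
}
```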

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6e5384d and 25c6bb8.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu (1 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-08-14T15:36:37.610Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/kernels/mlaKernels.cu:436-439
Timestamp: 2025-08-14T15:36:37.610Z
Learning: CUDA kernels prioritize performance and should avoid runtime bounds checking or conditional operations that cause branching/warp divergence. Input validation should be done at the host level before kernel launch, not per-thread in the kernel.

Applied to files:

  • cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #25514 [ run ] triggered by Bot. Commit: 25c6bb8

@tensorrt-cicd
Collaborator

PR_Github #25514 [ run ] completed with state SUCCESS. Commit: 25c6bb8
/LLM/main/L0_MergeRequest_PR pipeline #19320 completed with status: 'FAILURE'

@Funatiq
Collaborator

Funatiq commented Nov 24, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #25553 [ run ] triggered by Bot. Commit: 25c6bb8

@tensorrt-cicd
Collaborator

PR_Github #25553 [ run ] completed with state SUCCESS. Commit: 25c6bb8
/LLM/main/L0_MergeRequest_PR pipeline #19352 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25630 [ run ] triggered by Bot. Commit: fb356c1

@tensorrt-cicd
Collaborator

PR_Github #25630 [ run ] completed with state FAILURE. Commit: fb356c1
LLM/main/L0_MergeRequest_PR #19418 (Blue Ocean) completed with status: ABORTED

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25808 [ run ] triggered by Bot. Commit: e008fc7

@tensorrt-cicd
Collaborator

PR_Github #25808 [ run ] completed with state SUCCESS. Commit: e008fc7
/LLM/main/L0_MergeRequest_PR pipeline #19575 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25976 [ run ] triggered by Bot. Commit: 0d095c6

@tensorrt-cicd
Collaborator

PR_Github #25976 [ run ] completed with state SUCCESS. Commit: 0d095c6
/LLM/main/L0_MergeRequest_PR pipeline #19700 completed with status: 'FAILURE'

@Funatiq
Collaborator

Funatiq commented Nov 27, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #26040 [ run ] triggered by Bot. Commit: 0d095c6

@tensorrt-cicd
Collaborator

PR_Github #26040 [ run ] completed with state SUCCESS. Commit: 0d095c6
/LLM/main/L0_MergeRequest_PR pipeline #19764 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #26541 [ run ] completed with state SUCCESS. Commit: 3d01255
/LLM/main/L0_MergeRequest_PR pipeline #20183 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #26715 [ run ] triggered by Bot. Commit: 3d01255

@tensorrt-cicd
Collaborator

PR_Github #26715 [ run ] completed with state SUCCESS. Commit: 3d01255
/LLM/main/L0_MergeRequest_PR pipeline #20332 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #26897 [ run ] triggered by Bot. Commit: 3d01255

@tensorrt-cicd
Collaborator

PR_Github #26897 [ run ] completed with state SUCCESS. Commit: 3d01255
/LLM/main/L0_MergeRequest_PR pipeline #20492 completed with status: 'FAILURE'

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27085 [ run ] triggered by Bot. Commit: 10e8f04

@tensorrt-cicd
Collaborator

PR_Github #27085 [ run ] completed with state SUCCESS. Commit: 10e8f04
/LLM/main/L0_MergeRequest_PR pipeline #20662 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27279 [ run ] triggered by Bot. Commit: 10e8f04

@tensorrt-cicd
Collaborator

PR_Github #27279 [ run ] completed with state SUCCESS. Commit: 10e8f04
/LLM/main/L0_MergeRequest_PR pipeline #20831 completed with status: 'ABORTED'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27443 [ run ] triggered by Bot. Commit: 22abf4b

@tensorrt-cicd
Collaborator

PR_Github #27443 [ run ] completed with state SUCCESS. Commit: 22abf4b
/LLM/main/L0_MergeRequest_PR pipeline #20970 completed with status: 'FAILURE'

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27618 [ run ] triggered by Bot. Commit: 22abf4b

@yilin-void
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #28005 [ run ] triggered by Bot. Commit: 22abf4b

@tensorrt-cicd
Collaborator

PR_Github #28005 [ run ] completed with state SUCCESS. Commit: 22abf4b
/LLM/main/L0_MergeRequest_PR pipeline #21388 completed with status: 'SUCCESS'

@yilin-void yilin-void merged commit dda7658 into NVIDIA:main Dec 15, 2025
5 checks passed
sherry-1001 pushed a commit to sherry-1001/TensorRT-LLM that referenced this pull request Dec 16, 2025
…ar kernel on Ampere (NVIDIA#9394)

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 19, 2025
…ar kernel on Ampere (NVIDIA#9394)

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
yuanjingx87 pushed a commit that referenced this pull request Jan 14, 2026
…or in 2shot ar kernel on Ampere (#9394)

Signed-off-by: Yibin Li <yibinl@nvidia.com>
yuanjingx87 pushed a commit that referenced this pull request Jan 14, 2026
Cherry-pick: [https://nvbugs/5655885][fix] fix invalid instruction error in 2shot ar kernel on Ampere (#9394)

See merge request ftp/tekit!9896

Signed-off-by: Yibin Li <yibinl@nvidia.com>