[https://nvbugs/5655885][fix] fix invalid instruction error in 2shot ar kernel on Ampere #9394
Conversation
/bot run
📝 Walkthrough
Enhances barrier synchronization in cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–30 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings) | ✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu (1)
137-147: Confirm memory-ordering semantics of relaxed store + post-loop fence.release.sys

On SM_90+, the barrier flag update changed from a per-flag release store:
- old behavior (via st_flag): st.global.release.sys.b32 for each m_target_flag + flag_idx * NRanks

to:
- new behavior: st.global.relaxed.sys.b32 in the loop, followed by a single fence.release.sys after the loop, while readers still use ld.global.acquire.sys.b32 via ld_flag.

This is a non-trivial change in synchronization semantics:
- Previously, each individual flag write was a release operation paired with the acquire load in ld_flag, which is the standard pattern for a release/acquire barrier.
- Now, the flag writes are relaxed, and the only release operation is a fence that executes after all the relaxed flag stores.

Given how subtle the PTX memory model is, can you please:
- Double-check that "relaxed flag store(s) followed by fence.release.sys" is formally equivalent (for the purposes of this barrier) to the prior per-store st.global.release.sys.b32, i.e., that a thread performing ld.global.acquire.sys.b32 on m_current_flag cannot observe the updated flag without also having visibility of all writes the barrier is meant to order.
- Consider whether the fence should instead precede the relaxed stores (or whether the stores should remain release) to more obviously preserve the original release/acquire pattern, unless there is a documented PTX guarantee that the current ordering is safe.

If you've validated this against the PTX ISA docs or internal memory-model guidance, a short code comment here explaining the rationale (and why fence-after is sufficient) would make this much easier to maintain.
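To make the comparison concrete, here is a minimal, hypothetical sketch of the two signaling idioms described above. The helper names, the uint32_t flag type, and the NRanks loop are placeholder assumptions, not the actual allReduceFusionKernels.cu code:

```cuda
#include <cstdint>

// Hypothetical helpers mirroring the st_flag / ld_flag pattern discussed above.
__device__ __forceinline__ void st_flag_release(uint32_t* addr, uint32_t flag)
{
    asm volatile("st.global.release.sys.b32 [%0], %1;" ::"l"(addr), "r"(flag));
}

__device__ __forceinline__ void st_flag_relaxed(uint32_t* addr, uint32_t flag)
{
    asm volatile("st.global.relaxed.sys.b32 [%0], %1;" ::"l"(addr), "r"(flag));
}

__device__ __forceinline__ uint32_t ld_flag_acquire(uint32_t* addr)
{
    uint32_t flag;
    asm volatile("ld.global.acquire.sys.b32 %0, [%1];" : "=r"(flag) : "l"(addr));
    return flag;
}

// Old pattern: every flag store is itself a release operation, pairing directly
// with the acquire load performed by the waiting rank.
template <int NRanks>
__device__ void signal_per_store_release(uint32_t* target_flags, uint32_t flag)
{
    for (int r = 0; r < NRanks; ++r)
    {
        st_flag_release(target_flags + r, flag);
    }
}

// New pattern under review: relaxed stores in the loop, then a single
// system-scope release fence after the loop (fence.release.sys needs sm_90+).
template <int NRanks>
__device__ void signal_relaxed_then_fence(uint32_t* target_flags, uint32_t flag)
{
    for (int r = 0; r < NRanks; ++r)
    {
        st_flag_relaxed(target_flags + r, flag);
    }
    asm volatile("fence.release.sys;" ::: "memory");
}
```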
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu (1 hunk)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device allreduce implementation (cpp/tensorrt_llm/thop/allreduceOp.cpp), the goto pattern in runNCCLAllReduceDeviceFusion is intentionally used for future extensibility, allowing multiple switch cases to fallback to the default handler. While not aesthetically ideal, this pattern supports adding more fusion cases later that can reuse the same fallback logic.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
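A hypothetical sketch of the ceil-like token split described in the learning above (names are placeholders, not the actual config.cu code):

```cuda
#include <cstdio>

int main()
{
    // Every rank computes the same tokens_per_rank, so every rank launches the
    // same number of blocks; later ranks may receive blocks for tokens that do
    // not exist, which the kernel filters with a blockID bounds check.
    int const num_tokens = 10;
    int const num_ranks = 4;
    int const tokens_per_rank = (num_tokens + num_ranks - 1) / num_ranks; // ceil -> 3

    for (int rank = 0; rank < num_ranks; ++rank)
    {
        int const begin = rank * tokens_per_rank;
        int const end = begin + tokens_per_rank; // may overshoot num_tokens
        std::printf("rank %d launches %d blocks for tokens [%d, %d)\n", rank, tokens_per_rank, begin, end);
        // Inside the kernel: if (token_idx >= num_tokens) return;
    }
    return 0;
}
```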
📚 Learning: 2025-08-14T15:36:37.610Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/kernels/mlaKernels.cu:436-439
Timestamp: 2025-08-14T15:36:37.610Z
Learning: CUDA kernels prioritize performance and should avoid runtime bounds checking or conditional operations that cause branching/warp divergence. Input validation should be done at the host level before kernel launch, not per-thread in the kernel.
Applied to files:
cpp/tensorrt_llm/kernels/communicationKernels/allReduceFusionKernels.cu
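A minimal sketch of that guideline, using a hypothetical kernel (not code from this PR): validation happens on the host so the kernel stays branch-free.

```cuda
#include <cassert>

// Hypothetical example: no per-thread bounds check inside the kernel, so there
// is no branching or warp divergence on the hot path.
__global__ void scaleKernel(float* data, float factor)
{
    int const i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= factor;
}

// Host-side validation before launch replaces per-thread checks.
void launchScale(float* d_data, float factor, int n)
{
    constexpr int kBlockSize = 256;
    assert(n > 0 && n % kBlockSize == 0); // validated here, not in the kernel
    scaleKernel<<<n / kBlockSize, kBlockSize>>>(d_data, factor);
}
```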
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
PR_Github #25514 [ run ] triggered by Bot. Commit:
PR_Github #25514 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #25553 [ run ] triggered by Bot. Commit:
PR_Github #25553 [ run ] completed with state
Force-pushed from 25c6bb8 to fb356c1 (Compare)
/bot run
PR_Github #25630 [ run ] triggered by Bot. Commit:
PR_Github #25630 [ run ] completed with state
Force-pushed from fb356c1 to e008fc7 (Compare)
/bot run
PR_Github #25808 [ run ] triggered by Bot. Commit:
PR_Github #25808 [ run ] completed with state
Force-pushed from e008fc7 to 6efc93f (Compare)
/bot run
Force-pushed from 6efc93f to 0d095c6 (Compare)
/bot run
PR_Github #25976 [ run ] triggered by Bot. Commit:
PR_Github #25976 [ run ] completed with state
/bot run
PR_Github #26040 [ run ] triggered by Bot. Commit:
PR_Github #26040 [ run ] completed with state
/bot run
PR_Github #26541 [ run ] completed with state
/bot run
PR_Github #26715 [ run ] triggered by Bot. Commit:
PR_Github #26715 [ run ] completed with state
/bot run
PR_Github #26897 [ run ] triggered by Bot. Commit:
PR_Github #26897 [ run ] completed with state
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
Force-pushed from 3d01255 to 10e8f04 (Compare)
/bot run
PR_Github #27085 [ run ] triggered by Bot. Commit:
PR_Github #27085 [ run ] completed with state
/bot run
PR_Github #27279 [ run ] triggered by Bot. Commit:
PR_Github #27279 [ run ] completed with state
/bot run
PR_Github #27443 [ run ] triggered by Bot. Commit:
PR_Github #27443 [ run ] completed with state
/bot run
PR_Github #27618 [ run ] triggered by Bot. Commit:
/bot run
PR_Github #28005 [ run ] triggered by Bot. Commit:
PR_Github #28005 [ run ] completed with state
…ar kernel on Ampere (NVIDIA#9394) Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
…or in 2shot ar kernel on Ampere (#9394) Signed-off-by: Yibin Li <yibinl@nvidia.com>
Cherry-pick: [https://nvbugs/5655885][fix] fix invalid instruction error in 2shot ar kernel on Ampere (#9394) See merge request ftp/tekit!9896 Signed-off-by: Yibin Li <yibinl@nvidia.com>
#9086
The .acquire and .release qualifiers for the fence instruction require sm_90 or higher, which is why the 2-shot allreduce kernel hits an invalid instruction error on Ampere.
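As a rough illustration of that constraint (a minimal sketch, not the actual kernel change from this PR; the pre-sm_90 fallback instruction is an assumption), a system-scope release fence can be guarded by architecture:

```cuda
// Hypothetical helper: fence.release.sys is only valid PTX on sm_90+, so older
// architectures such as Ampere (sm_80) need an instruction with at least as
// strong ordering that is available there, e.g. a sequentially consistent
// system-scope fence.
__device__ __forceinline__ void release_fence_sys()
{
#if __CUDA_ARCH__ >= 900
    asm volatile("fence.release.sys;" ::: "memory");
#else
    asm volatile("fence.sc.sys;" ::: "memory"); // pre-sm_90 fallback (stronger ordering)
#endif
}
```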
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
--reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
--disable-fail-fast (OPTIONAL): Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
--detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
kill
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.