
Conversation

@brb-nv
Collaborator

@brb-nv brb-nv commented Jan 7, 2026

Description

This PR adds support for attention data parallelism (DP) alongside Helix context parallelism in TensorRT-LLM.

Test Coverage

$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[nccl-cudagraph:none-pp1dp2cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[nccl-cudagraph:with_padding-pp1dp2cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[fifo-cudagraph:none-pp1dp2cp2] -s -v
$ pytest tests/integration/defs/accuracy/test_disaggregated_serving.py::TestDeepSeekV3Lite::test_auto_dtype_with_helix[fifo-cudagraph:with_padding-pp1dp2cp2] -s -v

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
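
For example, a typical invocation that combines these options (the stage name below is taken from the examples above and is purely illustrative) might look like:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast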

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

  • New Features

    • Enhanced support for context parallelism configuration in distributed computing
    • Improved debugging with additional rank and partition information in logs
  • Bug Fixes

    • Corrected rank calculations for tensor-parallel and context-parallel setup
    • Fixed token synchronization across GPU clusters in multi-GPU configurations
  • Tests

    • Expanded test coverage for context parallelism and advanced parallel configurations

✏️ Tip: You can customize this high-level summary in your review settings.

@brb-nv brb-nv requested review from a team as code owners January 7, 2026 03:41
@brb-nv brb-nv changed the title from User/brb/dp request flow mr to [TRTLLM-10264][feat] Support attention DP + Helix CP on Jan 7, 2026
@brb-nv
Collaborator Author

brb-nv commented Jan 7, 2026

/bot run --disable-fail-fast

@coderabbitai
Contributor

coderabbitai bot commented Jan 7, 2026

📝 Walkthrough

Walkthrough

This PR introduces context parallelism (CP) support across the system by reworking rank calculations for data parallelism, tensor parallelism, and context parallelism in tensor-parallel setups. Changes span backend cache management, attention mechanisms, execution logic, and test infrastructure to enable CP-aware synchronization and reduction operations.

Changes

Cohort / File(s) and summary:

  • Backend Rank & Group Logic (cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp): Rework DPRank calculation from multi-step subtraction to direct derivation (DPRank = tensorRank / TPSizeInDPGroup); adjust data group and TP-in-DP group communications to use the new rank formulas; add clarifying comments for the (PP, DP, TP, CP) topology.

  • Cache Transmission CP Support (cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu): Extend TargetRanksInfoForDP to compute domain-level CP information (mDomainCPSize, targetPeerPPLayerNum); expand debug logging with rank/partition details; update the returned TargetRanksInfo structure to include CP metrics.

  • C++ Unit Tests (cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp): Rework rank calculations with a CP-aware scheme (mCpRank, updated mTpRank formula); update DP test paths to build CP-aware sequences with cpMetaData; add CP-related input filtering; introduce CP+DP and MLA test instantiations; refactor naming (contextTragetInfo → contextTargetInfo, verfiyGeneration → verifyGeneration).

  • Model Attention DP/CP Reduction (tensorrt_llm/_torch/models/modeling_deepseekv3.py): Add needs_tp_reduce and needs_cp_reduce flags to determine TP vs CP-group reduction for DeepSeekV3Attention; apply can_skip_for_attention_dp logic to conditionally skip the attention all-reduce; adjust DeepseekV3DecoderLayer initialization to account for DP and CP mapping presence.

  • Attention Module CP Mapping (tensorrt_llm/_torch/modules/attention.py): Introduce a conditional mapping adjustment for o_proj when enable_attention_dp and cp_size > 1; refactor world_size and tp_size calculations to fold CP into TP for DP groups; apply the changes to both Attention and MLA initialization paths.

  • Executor Request Queue CP Aggregation (tensorrt_llm/_torch/pyexecutor/executor_request_queue.py): Add CP-aware aggregation in _fetch_new_requests_attention_dp: transform responses_list from per-rank to per-DP-group entries by summing tokens across CP ranks; minor comment formatting updates.

  • Executor CP Synchronization (tensorrt_llm/_torch/pyexecutor/py_executor.py): Add a _sync_sampled_tokens_across_cp helper to broadcast tokens from cp_rank 0 to all CP ranks within a DP group; integrate CP-aware handling in _update_requests, _enqueue_responses, and _handle_responses; add deduplication logic for tp_gather results in multi-CP setups.

  • Integration Tests (tests/integration/defs/accuracy/test_disaggregated_serving.py): Extend test_auto_dtype_with_helix with an enable_attention_dp parameter; add parametrization with ["adp_off", "adp_on"] ids; propagate the flag to the generation server config; enable print_iter_log in the context and generation server configs.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant PyExecutor as PyExecutor<br/>(CP-aware)
    participant CacheTransceiver as CacheTransceiver<br/>(Rank Calc)
    participant Attention as Attention Module<br/>(CP Mapping)
    participant CommLib as Communication<br/>(AllReduce/Broadcast)
    
    rect rgb(200, 220, 255)
    Note over Client,CommLib: Initialization Phase: CP-aware Rank Calculation
    Client->>PyExecutor: Create executor with cp_size > 1
    PyExecutor->>CacheTransceiver: Compute DPRank = tensorRank / TPSizeInDPGroup
    CacheTransceiver->>CacheTransceiver: Calculate TargetRanksInfo with mDomainCPSize
    CacheTransceiver-->>PyExecutor: Return CP-aware rank metadata
    PyExecutor->>Attention: Initialize with CP mapping (fold CP into TP)
    Attention->>Attention: Compute mapping_o: world_size = original_tp_size*pp_size*cp_size
    Attention-->>PyExecutor: Setup complete
    end
    
    rect rgb(220, 240, 220)
    Note over Client,CommLib: Forward Pass: CP Synchronization
    Client->>PyExecutor: Process batch (multi-CP setup)
    PyExecutor->>PyExecutor: Execute forward with attention DP
    alt enable_attention_dp && cp_size > 1
        PyExecutor->>Attention: needs_cp_reduce = true
        Attention->>CommLib: Perform CP-group reduction (not TP-wide)
        CommLib-->>Attention: Reduction complete within CP
    else enable_attention_dp only
        PyExecutor->>Attention: needs_tp_reduce = true
        Attention->>CommLib: Perform TP reduction
        CommLib-->>Attention: TP reduction complete
    end
    end
    
    rect rgb(240, 220, 220)
    Note over Client,CommLib: Sampling Phase: Token Synchronization
    PyExecutor->>PyExecutor: Sample tokens at cp_rank 0
    PyExecutor->>PyExecutor: _sync_sampled_tokens_across_cp called
    PyExecutor->>CommLib: cp_broadcast(new_tokens, src=0)
    CommLib-->>PyExecutor: All CP ranks receive same tokens
    PyExecutor->>PyExecutor: _enqueue_responses with dedup for tp_gather
    PyExecutor-->>Client: Return synchronized responses
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: 1 passed, 2 failed (1 warning, 1 inconclusive)

❌ Failed checks

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 28.57%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (❓ Inconclusive): The PR description provides basic information about DP + helix parallelism support and test coverage, but lacks a detailed explanation of the implementation changes and rationale. Resolution: expand the Description section to explain what changes were made, why they were necessary, and how they enable DP with helix parallelism; clarify the technical approach and any important design decisions.

✅ Passed checks

  • Title check (✅ Passed): The title '[TRTLLM-10264][feat] Support attention DP + Helix CP' directly and clearly describes the main feature being added: support for attention data parallelism with Helix context parallelism.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

2539-2563: Add NVIDIA copyright header and document the tp_gather stride ordering for the CP deduplication logic.

The CP‑aware response gathering logic is correct: tp_group in mapping.py confirms ranks are interleaved at stride cp_size, so responses_list[::self.dist.cp_size] correctly selects one CP replica per TP rank.

However, py_executor.py is missing the required NVIDIA copyright header. Per the coding guidelines, all source files must include a copyright header with the year of latest meaningful modification.

Additionally, document the ordering invariant explicitly. The current inline comment at line 2557 should be expanded to clarify: "tp_gather returns data ordered by tp_rank, with CP replicas strided by cp_size (determined by mapping.py's tp_group construction), so deduplication by stride selects exactly one replica per TP rank."
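
As a toy illustration of that ordering (made-up values, not the actual py_executor data): with tp_size=2 and cp_size=2, entries i*cp_size through i*cp_size+cp_size-1 are the CP replicas of TP rank i, so a stride of cp_size keeps exactly one entry per TP rank.

cp_size = 2
# Layout assumed from the comment above: the CP replicas of each TP rank are contiguous.
responses_list = ["tp0_cp0", "tp0_cp1", "tp1_cp0", "tp1_cp1"]
deduped = responses_list[::cp_size]  # one CP replica per TP rank
assert deduped == ["tp0_cp0", "tp1_cp0"]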

🤖 Fix all issues with AI agents
In @tensorrt_llm/_torch/pyexecutor/executor_request_queue.py:
- Around line 370-384: The CP-aggregation loop assumes responses_list has
self.dist.tp_size * self.dist.cp_size entries but tp_allgather only returns
tp_size entries; add a defensive check before the aggregation loop in
executor_request_queue.py: compute expected = self.dist.tp_size *
self.dist.cp_size and if len(responses_list) < expected, either raise a clear
ValueError (including expected vs actual lengths) or fall back to aggregating
using the available entries by grouping with bounds-safe slicing; ensure you
reference self.dist.cp_size, self.dist.tp_size and responses_list so the code
avoids out-of-bounds indexing when building aggregated_responses.
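
A minimal sketch of such a defensive check plus aggregation, written as a standalone helper under the assumption that the entries are per-rank integer token counts and that the cp_size replicas of each DP group are stored contiguously (the real method lives on the executor request queue and may differ):

def aggregate_tokens_across_cp(responses_list, tp_size, cp_size):
    """Collapse per-rank token counts into one entry per DP group."""
    expected = tp_size * cp_size
    if len(responses_list) != expected:
        # Fail loudly instead of indexing out of bounds.
        raise ValueError(
            f"Expected {expected} entries (tp_size * cp_size), "
            f"got {len(responses_list)}")
    # Sum the cp_size CP replicas belonging to each DP group.
    return [
        sum(responses_list[g * cp_size:(g + 1) * cp_size])
        for g in range(tp_size)
    ]
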
🧹 Nitpick comments (4)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)

1200-1209: New disable_attn_allreduce condition correctly keeps all‑reduce on for DP+CP, but still depends on TP size.

The updated logic:

has_cp = mapping_with_cp is not None and mapping_with_cp.cp_size > 1
can_skip_for_attention_dp = self.enable_attention_dp and not has_cp
self.disable_attn_allreduce = (
    self.fusion_config.PRE_MOE_FUSION
    or self.fusion_config.PRE_MLP_FUSION
    or self.mapping.tp_size == 1
    or can_skip_for_attention_dp
)

nicely fixes the earlier issue where enable_attention_dp unconditionally disabled attention all‑reduce — now DP+CP keeps enable_allreduce=True so CP reductions can still happen.

One subtle point: even in DP+CP cases, self.mapping.tp_size here is the repurposed TP size (after repurpose_helix_cp_to_tp). If that ever becomes 1 (e.g., helix CP but TP disabled), disable_attn_allreduce flips to True, and you would rely entirely on the CP‑side reduction inside attention. That’s fine if MLA’s CP path never looks at AllReduceParams.enable_allreduce, but if you later gate CP reductions on that same flag, TP==1 would unexpectedly disable CP as well.

Consider either:

  • documenting that AllReduceParams.enable_allreduce controls TP‑space reduction only and CP reduction is independent, or
  • tightening the condition to something like or (self.mapping.tp_size == 1 and not has_cp) if you ever reuse the same flag for CP.

Right now, with the current MLA/AllReduce wiring, the behavior for the targeted DeepSeek V3 Lite helix+ADP configs is correct; this is mainly a guardrail against future refactors.


1123-1135: The CP/DP-aware reduce_output logic is correct; add a docstring note on mapping_with_cp requirement.

The split into:

needs_tp_reduce = not self.enable_attention_dp and self.mapping.tp_size > 1
needs_cp_reduce = mapping_with_cp is not None and mapping_with_cp.cp_size > 1
...
reduce_output=needs_tp_reduce or needs_cp_reduce

correctly handles all configurations:

  • pure TP (no ADP, no CP) → TP reduce only
  • ADP without CP → no reduce
  • any CP>1 (with or without ADP) → CP-group reduce

This relies on mapping_with_cp being the original mapping (before helix CP→TP repurposing), which is properly set up in DeepseekV3ForCausalLM.__init__ (line 1682) and passed to layers (line 1703). The None case is safely handled.

However, the API is somewhat implicit: DeepseekV3DecoderLayer accepts mapping_with_cp: Optional[Mapping] = None, but its correctness for CP-enabled configs depends on this parameter being provided. Consider adding a docstring note in the constructor documenting that CP support requires passing the original (pre-repurposed) mapping, to prevent future confusion if the class is used outside the canonical DeepseekV3ForCausalLM path.

tests/integration/defs/accuracy/test_disaggregated_serving.py (1)

894-940: ADP toggle in Helix test is wired correctly; be aware of log volume from print_iter_log=True.

The added parameterization:

@pytest.mark.parametrize("enable_attention_dp", [False, True], ids=["adp_off", "adp_on"])
def test_auto_dtype_with_helix(self, enable_attention_dp, comms_medium, cuda_graph_config, gen_pp, gen_tp, gen_cp):
    ...
    gen_server_config = {
        ...
        "enable_attention_dp": enable_attention_dp,
        "print_iter_log": True,
    }

cleanly exercises both ADP on/off for the DeepSeek V3 Lite helix+CP configurations, and the signature matches the decorator order.

Enabling print_iter_log on both ctx and gen servers is useful for debugging these distributed setups, but it can significantly increase log volume in CI. If logs become unwieldy, consider either:

  • gating print_iter_log behind an environment variable, or
  • leaving it on only for failing parametrizations via a lighter‑weight logging setting.

From a correctness standpoint, the new test plumbing looks good and should provide the coverage you want for DP×CP interactions.

tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

2382-2423: The cp_broadcast integration and CPU↔CUDA handling look correct.

The MPIDist assertion, cp_local_root = 0, and the tensor Device↔CPU shuttling are all sound. The _update_requests guard (cp_size > 1 and sample_state.host is not None) properly avoids unnecessary work in non-CP or pre-host states.

The implementation correctly broadcasts new_tokens and finish_reasons (with defensive hasattr() check) from cp_rank 0 to other CP ranks. The cp_broadcast() method uses torch.distributed.broadcast(), which modifies tensors in-place on all ranks—passing the actual tensor on each rank is the intended usage pattern, not None on non-roots.

One minor consideration: the code currently syncs only the direct outputs of sampling (new_tokens and finish_reasons). If other SampleState.host fields such as sequence_lengths, log_probs, or cum_log_probs are also produced on cp_rank 0 and needed consistently across CP ranks for subsequent scheduling decisions, you may want to include them here as well. This depends on whether those fields represent rank-local state or shared sampling metadata.
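
For reference, a rough sketch of this kind of CP sync helper (not the actual _sync_sampled_tokens_across_cp code; cp_broadcast comes from communicator.py, but its exact signature, the root-rank keyword, and the SampleState field names are assumptions here):

def sync_sampled_tokens_across_cp(dist, sample_state):
    """Broadcast sampled outputs from cp_rank 0 to the other CP ranks."""
    if dist.cp_size <= 1 or sample_state.host is None:
        return  # nothing to sync outside multi-CP setups
    # torch.distributed.broadcast overwrites the tensor in place on every
    # non-root rank, so each rank passes its own allocated tensor, not None.
    dist.cp_broadcast(sample_state.host.new_tokens, root=0)
    # finish_reasons may not exist on every sampler output, hence the guard.
    if getattr(sample_state.host, "finish_reasons", None) is not None:
        dist.cp_broadcast(sample_state.host.finish_reasons, root=0)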

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fec7e4 and 48accd0.

📒 Files selected for processing (8)
  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/attention.py
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #defines whenever possible
A variable that is not modified after its initialization should be declared as const
For naming of constants in C++, use uppercase snakecase with prefix 'k' (e.g., kDIGIT_NUM)
Except for 0, nullptr, true, and false, all other literals should only be used for variable initialization and not in comparisons or expressions
Use Allman indentation style for brace notation in C++ code
Put the semicolon for an empty for or while loop in a new line
The statement forming the body of a switch, while, do..while, or for statement must be a compound statement (use brace-delimited statements)
If and else statements should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camelCase with first letter lowercase (e.g., thisIsAFilename.cpp)
All types (including class names) in C++ should use PascalCase with uppercase first letter (e.g., FooBarClass)
Local variables, methods, and namespaces in C++ should use camelCase with first letter lowercase (e.g., localFooBar)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camelCase prefixed with 'g' (e.g., gDontUseGlobalFoos)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camelCase prefixed with 's' (e.g., sMutableStaticGlobal)
Locally visible static variables should use camelCase with 's' as the first letter (e.g., static std::once_flag sFlag;)
Public, private, and protected class member variables should use camelCase prefixed with 'm' (e.g., mNbFooValues)
Do not use Hungarian notation in C++ except for 'apps hungarian' (e.g., 'nb' to indicate count: mNbLayers)
If a constructor parameter name conflicts with a public me...

Files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{cpp,cc,cxx,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,cc,cxx,cu}: Use smart pointers for allocating objects on the heap in C++
Prefer unique_ptr for single resource ownership and shared_ptr for shared resource ownership in C++. Use weak_ptr only in exceptional cases
In C++ function calls where parameters are not obvious, use inline C comments to document the parameter (e.g., doSomeOperation(/* checkForErrors = */ false);)
Use the least forceful cast necessary in C++, or no cast if possible
Casting a pointer to void* in C++ should be implicit (except if removing const)
Casting in C++ should not remove any const or volatile qualification from the type of a pointer or reference
Do not use C-style casts (other than void casts) and functional notation casts (other than explicit constructor calls) in C++
Casting from void* to T* in C++ should be done with static_cast, not reinterpret_cast
Use reinterpret_cast in C++ as a last resort, where const_cast and static_cast won't work
Avoid dynamic_cast in C++
Do not use assignment operator in C++ subexpressions (e.g., x = y = z or if (x = y))
When practical, a C++ switch statement controlled by an enum should have a case for each enum value and not have a default clause
C++ switch statements should be well structured as structured multi-way branches, not as 'glorified gotos'
In C++ switch statements, prohibit fall-through except from one case label to another. Each case clause must be terminated with a break or throw
Do not end a C++ case clause with return; use break or throw instead
If a C++ switch clause is a compound statement, put the break inside the braces
Do not use C library functions in C++ whenever possible. Use C++ alternatives like brace initialization or std::fill_n() instead of memset()

Files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{h,hpp,hxx,cpp,cc,cxx,cu,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All C++ class templates, function templates, class template member functions, and class template static members must be instantiated at least once

Files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification

Files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/attention.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format """<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/attention.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_disaggregated_serving.py
🧠 Learnings (16)
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-09-24T03:31:28.908Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-12-12T03:27:18.859Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:18.859Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, when reviewing code that iterates through requests, ensure it does not convert excessive data into Python lists. Instead, the code should use torch.gather or indexing to gather only the data that will be used in the for loop before converting to Python lists. This minimizes data movement and improves performance.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation with asserts for total size and TP divisibility.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-10-13T19:45:03.518Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: tests/unittest/_torch/multi_gpu/test_nccl_device.py:138-149
Timestamp: 2025-10-13T19:45:03.518Z
Learning: In test_nccl_device.py, the NCCL device AllReduce implementation compares the entire residual tensor on each rank, unlike the UB implementation which compares per-rank chunks. The residual chunking calculations in the test are intentionally overridden to reflect this design difference.

Applied to files:

  • cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
🧬 Code graph analysis (5)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
tensorrt_llm/_torch/distributed/communicator.py (4)
  • tp_allgather (438-439)
  • tp_allgather (690-703)
  • cp_size (56-57)
  • tp_size (64-65)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)
tensorrt_llm/_torch/distributed/communicator.py (2)
  • tp_size (64-65)
  • cp_size (56-57)
tensorrt_llm/mapping.py (1)
  • has_cp (291-292)
tensorrt_llm/_torch/modules/attention.py (2)
tensorrt_llm/mapping.py (3)
  • Mapping (361-540)
  • rank (199-200)
  • rank (203-210)
tensorrt_llm/llmapi/llm_args.py (2)
  • world_size (557-558)
  • world_size (567-571)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
  • SampleState (112-118)
tensorrt_llm/_torch/distributed/communicator.py (4)
  • cp_broadcast (124-125)
  • cp_broadcast (430-436)
  • cp_broadcast (748-759)
  • cp_size (56-57)
tensorrt_llm/mapping.py (1)
  • tp_group (563-564)
cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp (1)
cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu (2)
  • TargetRanksInfoForDP (97-257)
  • TargetRanksInfoForDP (97-98)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Check PR Checklist Resolution
  • GitHub Check: Check PR Checklist Resolution
  • GitHub Check: Pre-commit Check
🔇 Additional comments (9)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (1)

155-165: LGTM — DPRank calculation simplified correctly.

The refactored calculation tensorRank / TPSizeInDPGroup is mathematically equivalent to the previous two-step approach (since integer division floors the result). The updated comments clearly document the CP-aware rank layout assumption, improving maintainability.

tensorrt_llm/_torch/modules/attention.py (1)

935-970: Well-documented mapping adjustment for o_proj with CP-aware DP groups.

The dual-branch approach correctly handles:

  1. Attention DP + CP: Each DP group gets an independent all-reduce among its CP ranks by using pp_size * original_tp_size to separate groups while tp_size=cp_size enables the CP-only reduction.
  2. Non-DP or CP=1: Standard behavior folding CP into TP for a unified all-reduce.

The inline comments explaining the topology and concrete example are helpful for maintainability.
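
A condensed sketch of the two branches described above (illustrative only; the real code builds a Mapping object inside attention.py, and the names below are assumptions):

def o_proj_reduction_shape(enable_attention_dp, original_tp_size, pp_size, cp_size):
    """Return (world_size, tp_size) used to build o_proj's reduction mapping."""
    world_size = original_tp_size * pp_size * cp_size
    if enable_attention_dp and cp_size > 1:
        # Attention DP + CP: reduce only among the cp_size ranks of each DP
        # group; pp_size * original_tp_size keeps the DP groups separate.
        return world_size, cp_size
    # Non-DP or CP == 1: fold CP into TP for a single unified all-reduce.
    return world_size, original_tp_size * cp_size

For instance, with original_tp_size=2, pp_size=1, cp_size=2 and attention DP enabled, this yields world_size=4 and tp_size=2, i.e. each of the two DP groups reduces across its own two CP ranks.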

cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu (1)

234-242: LGTM! Enhanced logging for CP-aware domain calculations.

The expanded debug logging provides better visibility into the CP-domain rank calculations, which is valuable for debugging CP+DP scenarios.

cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp (6)

555-558: LGTM! CP-aware rank decomposition aligns with TargetRanksInfoForDP.

The rank formulas correctly decompose mRankInInstance using the same ordering as the rank construction formula in TargetRanksInfoForDP (line 215 in cacheSplitConcat.cu): ppRank * (tpNum * cpNum) + tpRank * cpNum + cpRank.
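
Written out in Python for readability (a sketch of the layout formula quoted above, not the test code itself; names are illustrative):

def decompose_rank(rank_in_instance, tp_num, cp_num):
    """Split a flat rank laid out as ppRank * (tpNum * cpNum) + tpRank * cpNum + cpRank."""
    pp_rank = rank_in_instance // (tp_num * cp_num)
    tp_rank = (rank_in_instance % (tp_num * cp_num)) // cp_num
    cp_rank = rank_in_instance % cp_num
    return pp_rank, tp_rank, cp_rank

# Round trip against the layout formula: rank 7 with tpNum=2, cpNum=2 is (pp=1, tp=1, cp=1).
assert decompose_rank(1 * (2 * 2) + 1 * 2 + 1, tp_num=2, cp_num=2) == (1, 1, 1)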


888-900: LGTM! CP-aware request construction for DP tests.

The CP metadata computation and adjusted sequence length correctly handle block distribution across CP ranks, matching the pattern in makeLlmRequest (lines 868-874).


1440-1459: LGTM! Test filtering ensures minimum block distribution for CP.

The filtering logic ensures each CP rank receives at least one block by requiring len > tokensPerBlock * (genCp - 1), which guarantees at least genCp blocks total for distribution across CP ranks.


1847-1884: LGTM! Comprehensive CP+DP test coverage for MLA.

The new test instantiations cover critical scenarios:

  • AsymmetricCaseTestWithCPAndDPForMLA0: Context TP/PP with Generation CP+DP
  • AsymmetricCaseTestWithCPAndDPForMLA1: Context DP with Generation CP+DP

These test cases align with the PR's objective to support data parallelism alongside context parallelism.


2297-2315: LGTM! CP-aware DP rank calculation and typo fix.

The updated DP rank formulas correctly account for context parallelism:

  • (rank % (tpNum * cpNum)) / cpNum properly extracts the TP rank within a PP group when CP is present.
  • Typo fix: contextTragetInfocontextTargetInfo

Both changes align with the CP-aware rank decomposition logic.


2401-2422: LGTM! Typo fix and CP-aware updates in verifyGeneration.

Changes mirror the verifyContext function updates:

  • Typo fix: verfiyGenerationverifyGeneration
  • CP-aware DP rank calculations applied consistently
  • Updated assertions use contextTargetInfo with correct field names

All changes align with the CP-aware rank decomposition pattern.

@tensorrt-cicd
Collaborator

PR_Github #30827 [ run ] triggered by Bot. Commit: 48accd0

@tensorrt-cicd
Collaborator

PR_Github #30827 [ run ] completed with state SUCCESS. Commit: 48accd0
/LLM/main/L0_MergeRequest_PR pipeline #23808 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv
Collaborator Author

brb-nv commented Jan 7, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30919 [ run ] triggered by Bot. Commit: 48accd0

@tensorrt-cicd
Collaborator

PR_Github #30919 [ run ] completed with state SUCCESS. Commit: 48accd0
/LLM/main/L0_MergeRequest_PR pipeline #23882 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch 2 times, most recently from 7e10297 to 1fd567f on January 7, 2026 23:16
@brb-nv
Collaborator Author

brb-nv commented Jan 8, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30952 [ run ] triggered by Bot. Commit: 60400b4

@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch from 60400b4 to 882c67a on January 8, 2026 03:29
@tensorrt-cicd
Collaborator

PR_Github #30952 [ run ] completed with state SUCCESS. Commit: 60400b4
/LLM/main/L0_MergeRequest_PR pipeline #23913 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Member

@Tabrizian Tabrizian left a comment


Disagg changes LGTM.

@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch from 3f6bff0 to 98ec37a on January 9, 2026 03:23
@brb-nv
Collaborator Author

brb-nv commented Jan 9, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31184 [ run ] triggered by Bot. Commit: 98ec37a

// DPRank is derived from the tensor parallel rank, which already accounts for CP.
// Layout: rank = ppRank * (TP * CP) + tpRank * CP + cpRank.
// getTensorParallelRank() correctly extracts tpRank regardless of CP.
int DPRank = worldConfig.getTensorParallelRank() / TPSizeInDPGroup;
Collaborator


Can we directly use int DPRank = mCacheState->getParallelConfig().mDPrank? Also cc the author of original code @chuangz0

@tensorrt-cicd
Collaborator

PR_Github #31184 [ run ] completed with state SUCCESS. Commit: 98ec37a
/LLM/main/L0_MergeRequest_PR pipeline #24096 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch from f16f2c4 to d1ad962 on January 10, 2026 00:40
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch from d1ad962 to e66b3c5 on January 10, 2026 00:41
@brb-nv
Collaborator Author

brb-nv commented Jan 10, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31300 [ run ] triggered by Bot. Commit: e66b3c5

@brb-nv brb-nv requested a review from a team as a code owner January 10, 2026 01:59
@brb-nv brb-nv requested review from liji-nv and yuxianq January 10, 2026 01:59
@brb-nv brb-nv force-pushed the user/brb/dp-request-flow-mr branch from fa05159 to 8966bc3 on January 10, 2026 02:07
@brb-nv
Collaborator Author

brb-nv commented Jan 10, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31310 [ run ] triggered by Bot. Commit: 8966bc3

@tensorrt-cicd
Collaborator

PR_Github #31310 [ run ] completed with state SUCCESS. Commit: 8966bc3
/LLM/main/L0_MergeRequest_PR pipeline #24204 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@brb-nv
Collaborator Author

brb-nv commented Jan 11, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #31375 [ run ] triggered by Bot. Commit: 8966bc3

@tensorrt-cicd
Collaborator

PR_Github #31375 [ run ] completed with state SUCCESS. Commit: 8966bc3
/LLM/main/L0_MergeRequest_PR pipeline #24264 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again
