[TRTLLM-10264][feat] Support attention DP + Helix CP #10477
base: main
Conversation
/bot run --disable-fail-fast
📝 Walkthrough

This PR introduces context parallelism (CP) support across the system by reworking rank calculations for data parallelism, tensor parallelism, and context parallelism in tensor-parallel setups. Changes span backend cache management, attention mechanisms, execution logic, and test infrastructure to enable CP-aware synchronization and reduction operations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant PyExecutor as PyExecutor<br/>(CP-aware)
    participant CacheTransceiver as CacheTransceiver<br/>(Rank Calc)
    participant Attention as Attention Module<br/>(CP Mapping)
    participant CommLib as Communication<br/>(AllReduce/Broadcast)

    rect rgb(200, 220, 255)
        Note over Client,CommLib: Initialization Phase: CP-aware Rank Calculation
        Client->>PyExecutor: Create executor with cp_size > 1
        PyExecutor->>CacheTransceiver: Compute DPRank = tensorRank / TPSizeInDPGroup
        CacheTransceiver->>CacheTransceiver: Calculate TargetRanksInfo with mDomainCPSize
        CacheTransceiver-->>PyExecutor: Return CP-aware rank metadata
        PyExecutor->>Attention: Initialize with CP mapping (fold CP into TP)
        Attention->>Attention: Compute mapping_o: world_size = original_tp_size*pp_size*cp_size
        Attention-->>PyExecutor: Setup complete
    end

    rect rgb(220, 240, 220)
        Note over Client,CommLib: Forward Pass: CP Synchronization
        Client->>PyExecutor: Process batch (multi-CP setup)
        PyExecutor->>PyExecutor: Execute forward with attention DP
        alt enable_attention_dp && cp_size > 1
            PyExecutor->>Attention: needs_cp_reduce = true
            Attention->>CommLib: Perform CP-group reduction (not TP-wide)
            CommLib-->>Attention: Reduction complete within CP
        else enable_attention_dp only
            PyExecutor->>Attention: needs_tp_reduce = true
            Attention->>CommLib: Perform TP reduction
            CommLib-->>Attention: TP reduction complete
        end
    end

    rect rgb(240, 220, 220)
        Note over Client,CommLib: Sampling Phase: Token Synchronization
        PyExecutor->>PyExecutor: Sample tokens at cp_rank 0
        PyExecutor->>PyExecutor: _sync_sampled_tokens_across_cp called
        PyExecutor->>CommLib: cp_broadcast(new_tokens, src=0)
        CommLib-->>PyExecutor: All CP ranks receive same tokens
        PyExecutor->>PyExecutor: _enqueue_responses with dedup for tp_gather
        PyExecutor-->>Client: Return synchronized responses
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
2539-2563: Add NVIDIA copyright header and document the tp_gather stride ordering for the CP deduplication logic.

The CP-aware response gathering logic is correct: `tp_group` in mapping.py confirms ranks are interleaved at stride `cp_size`, so `responses_list[::self.dist.cp_size]` correctly selects one CP replica per TP rank.

However, py_executor.py is missing the required NVIDIA copyright header. Per the coding guidelines, all source files must include a copyright header with the year of latest meaningful modification.

Additionally, document the ordering invariant explicitly. The current inline comment at line 2557 should be expanded to clarify: "tp_gather returns data ordered by tp_rank, with CP replicas strided by cp_size (determined by mapping.py's tp_group construction), so deduplication by stride selects exactly one replica per TP rank."
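As a minimal, self-contained illustration of that invariant (the helper name and the example layout are hypothetical, not the py_executor.py code):

```python
# Hypothetical layout: tp_gather returns one entry per (tp_rank, cp_rank) pair,
# with CP replicas of the same TP rank interleaved at stride cp_size.
def dedup_cp_replicas(responses_list, cp_size):
    """Keep exactly one CP replica per TP rank."""
    return responses_list[::cp_size]

# Example with tp_size=2, cp_size=2.
gathered = ["tp0_cp0", "tp0_cp1", "tp1_cp0", "tp1_cp1"]
assert dedup_cp_replicas(gathered, cp_size=2) == ["tp0_cp0", "tp1_cp0"]
```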
🤖 Fix all issues with AI agents
In @tensorrt_llm/_torch/pyexecutor/executor_request_queue.py:
- Around line 370-384: The CP-aggregation loop assumes responses_list has
self.dist.tp_size * self.dist.cp_size entries but tp_allgather only returns
tp_size entries; add a defensive check before the aggregation loop in
executor_request_queue.py: compute expected = self.dist.tp_size *
self.dist.cp_size and if len(responses_list) < expected, either raise a clear
ValueError (including expected vs actual lengths) or fall back to aggregating
using the available entries by grouping with bounds-safe slicing; ensure you
reference self.dist.cp_size, self.dist.tp_size and responses_list so the code
avoids out-of-bounds indexing when building aggregated_responses.
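A sketch of the suggested guard, written as a standalone helper rather than the actual executor_request_queue.py code (the function name is made up for illustration):

```python
def check_gathered_responses(responses_list, tp_size, cp_size):
    """Fail loudly instead of indexing out of bounds in the CP-aggregation loop."""
    expected = tp_size * cp_size
    if len(responses_list) < expected:
        raise ValueError(
            f"Gathered {len(responses_list)} response lists, expected {expected} "
            f"(tp_size={tp_size}, cp_size={cp_size}); refusing to aggregate.")
    return responses_list[:expected]
```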
🧹 Nitpick comments (4)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)
1200-1209: New `disable_attn_allreduce` condition correctly keeps all-reduce on for DP+CP, but still depends on TP size.

The updated logic:

```python
has_cp = mapping_with_cp is not None and mapping_with_cp.cp_size > 1
can_skip_for_attention_dp = self.enable_attention_dp and not has_cp
self.disable_attn_allreduce = (self.fusion_config.PRE_MOE_FUSION
                               or self.fusion_config.PRE_MLP_FUSION
                               or self.mapping.tp_size == 1
                               or can_skip_for_attention_dp)
```

nicely fixes the earlier issue where `enable_attention_dp` unconditionally disabled attention all-reduce — now DP+CP keeps `enable_allreduce=True` so CP reductions can still happen.

One subtle point: even in DP+CP cases, `self.mapping.tp_size` here is the repurposed TP size (after `repurpose_helix_cp_to_tp`). If that ever becomes 1 (e.g., helix CP but TP disabled), `disable_attn_allreduce` flips to `True`, and you would rely entirely on the CP-side reduction inside attention. That's fine if MLA's CP path never looks at `AllReduceParams.enable_allreduce`, but if you later gate CP reductions on that same flag, TP==1 would unexpectedly disable CP as well.

Consider either:

- documenting that `AllReduceParams.enable_allreduce` controls TP-space reduction only and CP reduction is independent, or
- tightening the condition to something like `or (self.mapping.tp_size == 1 and not has_cp)` if you ever reuse the same flag for CP.

Right now, with the current MLA/AllReduce wiring, the behavior for the targeted DeepSeek V3 Lite helix+ADP configs is correct; this is mainly a guardrail against future refactors.
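For concreteness, the tightened variant from the second bullet could look roughly like this (a standalone sketch mirroring the quoted fields, not the actual DeepSeek V3 code):

```python
def compute_disable_attn_allreduce(pre_moe_fusion: bool, pre_mlp_fusion: bool,
                                   tp_size: int, enable_attention_dp: bool,
                                   has_cp: bool) -> bool:
    """Illustrative: tp_size == 1 only disables the all-reduce when there is no CP."""
    can_skip_for_attention_dp = enable_attention_dp and not has_cp
    return (pre_moe_fusion or pre_mlp_fusion
            or (tp_size == 1 and not has_cp)
            or can_skip_for_attention_dp)
```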
1123-1135: The CP/DP-aware `reduce_output` logic is correct; add a docstring note on the `mapping_with_cp` requirement.

The split into:

```python
needs_tp_reduce = not self.enable_attention_dp and self.mapping.tp_size > 1
needs_cp_reduce = mapping_with_cp is not None and mapping_with_cp.cp_size > 1
...
reduce_output=needs_tp_reduce or needs_cp_reduce
```

correctly handles all configurations (enumerated in the sketch after this comment):

- pure TP (no ADP, no CP) → TP reduce only
- ADP without CP → no reduce
- any CP>1 (with or without ADP) → CP-group reduce

This relies on `mapping_with_cp` being the original mapping (before helix CP→TP repurposing), which is properly set up in `DeepseekV3ForCausalLM.__init__` (line 1682) and passed to layers (line 1703). The None case is safely handled.

However, the API is somewhat implicit: `DeepseekV3DecoderLayer` accepts `mapping_with_cp: Optional[Mapping] = None`, but its correctness for CP-enabled configs depends on this parameter being provided. Consider adding a docstring note in the constructor documenting that CP support requires passing the original (pre-repurposed) mapping, to prevent future confusion if the class is used outside the canonical `DeepseekV3ForCausalLM` path.
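A small, self-contained enumeration of those three configurations (the helper name and the simplification of `mapping_with_cp is None` to `cp_size == 1` are illustrative):

```python
def reduce_flags(enable_attention_dp: bool, tp_size: int, cp_size: int):
    """Return (needs_tp_reduce, needs_cp_reduce) per the logic quoted above."""
    needs_tp_reduce = not enable_attention_dp and tp_size > 1
    needs_cp_reduce = cp_size > 1
    return needs_tp_reduce, needs_cp_reduce

# pure TP (no ADP, no CP) -> TP reduce only
assert reduce_flags(False, tp_size=4, cp_size=1) == (True, False)
# ADP without CP -> no reduce
assert reduce_flags(True, tp_size=4, cp_size=1) == (False, False)
# any CP > 1 (with or without ADP) -> a CP-group reduce happens
assert reduce_flags(True, tp_size=4, cp_size=2) == (False, True)
assert reduce_flags(False, tp_size=4, cp_size=2) == (True, True)
```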
tests/integration/defs/accuracy/test_disaggregated_serving.py (1)
894-940: ADP toggle in Helix test is wired correctly; be aware of log volume from `print_iter_log=True`.

The added parameterization:

```python
@pytest.mark.parametrize("enable_attention_dp", [False, True],
                         ids=["adp_off", "adp_on"])
def test_auto_dtype_with_helix(self, enable_attention_dp, comms_medium,
                               cuda_graph_config, gen_pp, gen_tp, gen_cp):
    ...
    gen_server_config = {
        ...
        "enable_attention_dp": enable_attention_dp,
        "print_iter_log": True,
    }
```

cleanly exercises both ADP on/off for the DeepSeek V3 Lite helix+CP configurations, and the signature matches the decorator order.

Enabling `print_iter_log` on both ctx and gen servers is useful for debugging these distributed setups, but it can significantly increase log volume in CI. If logs become unwieldy, consider either:

- gating `print_iter_log` behind an environment variable (see the sketch below), or
- leaving it on only for failing parametrizations via a lighter-weight logging setting.

From a correctness standpoint, the new test plumbing looks good and should provide the coverage you want for DP×CP interactions.
2382-2423: Thecp_broadcastintegration and CPU↔CUDA handling look correct.The
MPIDistassertion,cp_local_root = 0, and the tensor Device↔CPU shuttling are all sound. The_update_requestsguard (cp_size > 1 and sample_state.host is not None) properly avoids unnecessary work in non-CP or pre-host states.The implementation correctly broadcasts
new_tokensandfinish_reasons(with defensivehasattr()check) from cp_rank 0 to other CP ranks. Thecp_broadcast()method usestorch.distributed.broadcast(), which modifies tensors in-place on all ranks—passing the actual tensor on each rank is the intended usage pattern, notNoneon non-roots.One minor consideration: the code currently syncs only the direct outputs of sampling (
new_tokensandfinish_reasons). If otherSampleState.hostfields such assequence_lengths,log_probs, orcum_log_probsare also produced on cp_rank 0 and needed consistently across CP ranks for subsequent scheduling decisions, you may want to include them here as well. This depends on whether those fields represent rank-local state or shared sampling metadata.
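For reference, a standalone sketch of the broadcast-from-cp_rank-0 pattern using raw torch.distributed with the gloo backend; the real code goes through the MPIDist wrapper's cp_broadcast() on the CP group rather than the world group:

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29561"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Every rank passes a real tensor; broadcast overwrites it in place on
    # non-root ranks, which is the usage pattern noted above.
    new_tokens = torch.full((4,), fill_value=rank, dtype=torch.int64)
    dist.broadcast(new_tokens, src=0)
    assert torch.equal(new_tokens, torch.zeros(4, dtype=torch.int64))
    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2, join=True)
```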
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
- cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu
- cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
- tensorrt_llm/_torch/models/modeling_deepseekv3.py
- tensorrt_llm/_torch/modules/attention.py
- tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
- tensorrt_llm/_torch/pyexecutor/py_executor.py
- tests/integration/defs/accuracy/test_disaggregated_serving.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., `} // namespace foo`)
Prefer `const` or `constexpr` variables over `#define`s whenever possible
A variable that is not modified after its initialization should be declared as `const`
For naming of constants in C++, use uppercase snakecase with prefix 'k' (e.g., `kDIGIT_NUM`)
Except for `0`, `nullptr`, `true`, and `false`, all other literals should only be used for variable initialization and not in comparisons or expressions
Use Allman indentation style for brace notation in C++ code
Put the semicolon for an empty `for` or `while` loop in a new line
The statement forming the body of a `switch`, `while`, `do..while`, or `for` statement must be a compound statement (use brace-delimited statements)
`if` and `else` statements should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camelCase with first letter lowercase (e.g., `thisIsAFilename.cpp`)
All types (including class names) in C++ should use PascalCase with uppercase first letter (e.g., `FooBarClass`)
Local variables, methods, and namespaces in C++ should use camelCase with first letter lowercase (e.g., `localFooBar`)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camelCase prefixed with 'g' (e.g., `gDontUseGlobalFoos`)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camelCase prefixed with 's' (e.g., `sMutableStaticGlobal`)
Locally visible static variables should use camelCase with 's' as the first letter (e.g., `static std::once_flag sFlag;`)
Public, private, and protected class member variables should use camelCase prefixed with 'm' (e.g., `mNbFooValues`)
Do not use Hungarian notation in C++ except for 'apps hungarian' (e.g., 'nb' to indicate count: `mNbLayers`)
If a constructor parameter name conflicts with a public me...
Files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{cpp,cc,cxx,cu}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,cc,cxx,cu}: Use smart pointers for allocating objects on the heap in C++
Prefer `unique_ptr` for single resource ownership and `shared_ptr` for shared resource ownership in C++. Use `weak_ptr` only in exceptional cases
In C++ function calls where parameters are not obvious, use inline C comments to document the parameter (e.g., `doSomeOperation(/* checkForErrors = */ false);`)
Use the least forceful cast necessary in C++, or no cast if possible
Casting a pointer to `void*` in C++ should be implicit (except if removing `const`)
Casting in C++ should not remove any `const` or `volatile` qualification from the type of a pointer or reference
Do not use C-style casts (other than void casts) and functional notation casts (other than explicit constructor calls) in C++
Casting from `void*` to `T*` in C++ should be done with `static_cast`, not `reinterpret_cast`
Use `reinterpret_cast` in C++ as a last resort, where `const_cast` and `static_cast` won't work
Avoid `dynamic_cast` in C++
Do not use assignment operator in C++ subexpressions (e.g., `x = y = z` or `if (x = y)`)
When practical, a C++ `switch` statement controlled by an `enum` should have a case for each enum value and not have a default clause
C++ switch statements should be well structured as structured multi-way branches, not as 'glorified gotos'
In C++ switch statements, prohibit fall-through except from one case label to another. Each case clause must be terminated with a break or throw
Do not end a C++ case clause with return; use break or throw instead
If a C++ switch clause is a compound statement, put the break inside the braces
Do not use C library functions in C++ whenever possible. Use C++ alternatives like brace initialization or `std::fill_n()` instead of `memset()`
Files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{h,hpp,hxx,cpp,cc,cxx,cu,cuh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All C++ class templates, function templates, class template member functions, and class template static members must be instantiated at least once
Files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification
Files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/modules/attention.py, tensorrt_llm/_torch/pyexecutor/py_executor.py, tests/integration/defs/accuracy/test_disaggregated_serving.py, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., `some_file.py`)
Python classes should use PascalCase (e.g., `class SomeClass`)
Python functions and methods should use snake_case (e.g., `def my_awesome_function():`)
Python local variables should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format `"""<type>: Description"""`
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic
Files:
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/modules/attention.py, tensorrt_llm/_torch/pyexecutor/py_executor.py, tests/integration/defs/accuracy/test_disaggregated_serving.py
🧠 Learnings (16)
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, tensorrt_llm/_torch/pyexecutor/py_executor.py, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp, tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/models/modeling_deepseekv3.py, cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu, cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-09-24T03:31:28.908Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-12-12T03:27:18.859Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:18.859Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, when reviewing code that iterates through requests, ensure it does not convert excessive data into Python lists. Instead, the code should use torch.gather or indexing to gather only the data that will be used in the for loop before converting to Python lists. This minimizes data movement and improves performance.
Applied to files:
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.
Applied to files:
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/models/modeling_deepseekv3.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Applied to files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation with asserts for total size and TP divisibility.
Applied to files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-09-29T15:14:28.503Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 8063
File: tensorrt_llm/lora_manager.py:1080-1112
Timestamp: 2025-09-29T15:14:28.503Z
Learning: In tensorrt_llm/lora_manager.py, when calculating part_sizes for attn_qkv fused LoRA modules, the sizes are correctly multiplied by tp_size because model_config.num_heads and model_config.num_kv_heads are already divided by tp_size (per-TP-rank values), so multiplication is needed to get the original full concatenated dimension size. The interleave_fused_lora_weights_for_tp function provides proper validation.
Applied to files:
tensorrt_llm/_torch/models/modeling_deepseekv3.py, tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.
Applied to files:
tensorrt_llm/_torch/modules/attention.py
📚 Learning: 2025-10-13T19:45:03.518Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: tests/unittest/_torch/multi_gpu/test_nccl_device.py:138-149
Timestamp: 2025-10-13T19:45:03.518Z
Learning: In test_nccl_device.py, the NCCL device AllReduce implementation compares the entire residual tensor on each rank, unlike the UB implementation which compares per-rank chunks. The residual chunking calculations in the test are intentionally overridden to reflect this design difference.
Applied to files:
cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp
🧬 Code graph analysis (5)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)
- tensorrt_llm/_torch/distributed/communicator.py (4): `tp_allgather` (438-439), `tp_allgather` (690-703), `cp_size` (56-57), `tp_size` (64-65)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (2)
- tensorrt_llm/_torch/distributed/communicator.py (2): `tp_size` (64-65), `cp_size` (56-57)
- tensorrt_llm/mapping.py (1): `has_cp` (291-292)
tensorrt_llm/_torch/modules/attention.py (2)
- tensorrt_llm/mapping.py (3): `Mapping` (361-540), `rank` (199-200), `rank` (203-210)
- tensorrt_llm/llmapi/llm_args.py (2): `world_size` (557-558), `world_size` (567-571)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
- tensorrt_llm/_torch/pyexecutor/sampler.py (1): `SampleState` (112-118)
- tensorrt_llm/_torch/distributed/communicator.py (4): `cp_broadcast` (124-125), `cp_broadcast` (430-436), `cp_broadcast` (748-759), `cp_size` (56-57)
- tensorrt_llm/mapping.py (1): `tp_group` (563-564)
cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp (1)
- cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu (2): `TargetRanksInfoForDP` (97-257), `TargetRanksInfoForDP` (97-98)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Check PR Checklist Resolution
- GitHub Check: Check PR Checklist Resolution
- GitHub Check: Pre-commit Check
🔇 Additional comments (9)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (1)
155-165: LGTM — DPRank calculation simplified correctly.

The refactored calculation `tensorRank / TPSizeInDPGroup` is mathematically equivalent to the previous two-step approach (since integer division floors the result). The updated comments clearly document the CP-aware rank layout assumption, improving maintainability.
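A tiny numeric illustration of that simplification (pure arithmetic with made-up sizes, not the C++ code): with the layout rank = ppRank * (TP * CP) + tpRank * CP + cpRank, the tensor-parallel rank already ignores CP, so one floor division yields the DP rank.

```python
# Illustrative only: TP=4 split into attention-DP groups of 2 TP ranks each.
TP_SIZE_IN_DP_GROUP = 2
for tp_rank in range(4):
    dp_rank = tp_rank // TP_SIZE_IN_DP_GROUP
    print(f"tpRank={tp_rank} -> DPRank={dp_rank}")
# tpRank 0,1 -> DPRank 0; tpRank 2,3 -> DPRank 1
```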
tensorrt_llm/_torch/modules/attention.py (1)
935-970: Well-documented mapping adjustment for o_proj with CP-aware DP groups.

The dual-branch approach correctly handles:

- Attention DP + CP: Each DP group gets an independent all-reduce among its CP ranks by using `pp_size * original_tp_size` to separate groups while `tp_size=cp_size` enables the CP-only reduction.
- Non-DP or CP=1: Standard behavior folding CP into TP for a unified all-reduce.

The inline comments explaining the topology and concrete example are helpful for maintainability.
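A rough, self-contained sketch of the two branches described above, using plain integers instead of the real Mapping class (the helper name, return shape, and concrete sizes are assumptions for illustration):

```python
def o_proj_group_sizes(original_tp_size: int, pp_size: int, cp_size: int,
                       enable_attention_dp: bool):
    """Return (world_size, reduce_group_size) for the o_proj all-reduce (illustrative)."""
    world_size = pp_size * original_tp_size * cp_size
    if enable_attention_dp and cp_size > 1:
        # Each attention-DP group reduces only among its CP ranks.
        return world_size, cp_size
    # Non-DP or CP == 1: fold CP into TP for one unified all-reduce.
    return world_size, original_tp_size * cp_size

# Example: TP=2, PP=1, CP=2 with attention DP -> reduce groups of size 2 (CP only).
assert o_proj_group_sizes(2, 1, 2, True) == (4, 2)
# Same topology without attention DP -> one TP*CP-wide group of size 4.
assert o_proj_group_sizes(2, 1, 2, False) == (4, 4)
```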
cpp/tensorrt_llm/executor/cache_transmission/cacheSplitConcat.cu (1)
234-242: LGTM! Enhanced logging for CP-aware domain calculations.

The expanded debug logging provides better visibility into the CP-domain rank calculations, which is valuable for debugging CP+DP scenarios.
cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp (6)
555-558: LGTM! CP-aware rank decomposition aligns with TargetRanksInfoForDP.

The rank formulas correctly decompose `mRankInInstance` using the same ordering as the rank construction formula in `TargetRanksInfoForDP` (line 215 in cacheSplitConcat.cu): `ppRank * (tpNum * cpNum) + tpRank * cpNum + cpRank`.
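As a standalone sanity check on that ordering, a round trip between a flat rank and its (ppRank, tpRank, cpRank) components, with sizes chosen purely for illustration:

```python
def compose(pp_rank, tp_rank, cp_rank, tp_num, cp_num):
    return pp_rank * (tp_num * cp_num) + tp_rank * cp_num + cp_rank

def decompose(rank, tp_num, cp_num):
    pp_rank = rank // (tp_num * cp_num)
    tp_rank = (rank % (tp_num * cp_num)) // cp_num  # TP rank within a PP group
    cp_rank = rank % cp_num
    return pp_rank, tp_rank, cp_rank

TP_NUM, CP_NUM, PP_NUM = 2, 2, 2
for r in range(PP_NUM * TP_NUM * CP_NUM):
    assert compose(*decompose(r, TP_NUM, CP_NUM), TP_NUM, CP_NUM) == r
```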
888-900: LGTM! CP-aware request construction for DP tests.

The CP metadata computation and adjusted sequence length correctly handle block distribution across CP ranks, matching the pattern in `makeLlmRequest` (lines 868-874).
1440-1459: LGTM! Test filtering ensures minimum block distribution for CP.

The filtering logic ensures each CP rank receives at least one block by requiring `len > tokensPerBlock * (genCp - 1)`, which guarantees at least `genCp` blocks total for distribution across CP ranks.
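A quick arithmetic check of that guarantee with made-up sizes: whenever len > tokensPerBlock * (genCp - 1), the sequence spans at least genCp blocks.

```python
import math

tokens_per_block, gen_cp = 32, 4  # illustrative sizes
for seq_len in range(1, 10 * tokens_per_block):
    if seq_len > tokens_per_block * (gen_cp - 1):
        num_blocks = math.ceil(seq_len / tokens_per_block)
        assert num_blocks >= gen_cp  # at least one block per CP rank
```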
1847-1884: LGTM! Comprehensive CP+DP test coverage for MLA.

The new test instantiations cover critical scenarios:

- `AsymmetricCaseTestWithCPAndDPForMLA0`: Context TP/PP with Generation CP+DP
- `AsymmetricCaseTestWithCPAndDPForMLA1`: Context DP with Generation CP+DP

These test cases align with the PR's objective to support data parallelism alongside context parallelism.
2297-2315: LGTM! CP-aware DP rank calculation and typo fix.

The updated DP rank formulas correctly account for context parallelism:

- `(rank % (tpNum * cpNum)) / cpNum` properly extracts the TP rank within a PP group when CP is present.
- Typo fix: `contextTragetInfo` → `contextTargetInfo`

Both changes align with the CP-aware rank decomposition logic.
2401-2422: LGTM! Typo fix and CP-aware updates in verifyGeneration.

Changes mirror the `verifyContext` function updates:

- Typo fix: `verfiyGeneration` → `verifyGeneration`
- CP-aware DP rank calculations applied consistently
- Updated assertions use `contextTargetInfo` with correct field names

All changes align with the CP-aware rank decomposition pattern.
PR_Github #30827 [ run ] triggered by Bot. Commit:
PR_Github #30827 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #30919 [ run ] triggered by Bot. Commit:
PR_Github #30919 [ run ] completed with state
7e10297 to 1fd567f (Compare)
/bot run --disable-fail-fast
PR_Github #30952 [ run ] triggered by Bot. Commit:
60400b4 to 882c67a (Compare)
PR_Github #30952 [ run ] completed with state
Tabrizian left a comment:
Disagg changes LGTM.
3f6bff0 to 98ec37a (Compare)
/bot run --disable-fail-fast
PR_Github #31184 [ run ] triggered by Bot. Commit:
```cpp
// DPRank is derived from the tensor parallel rank, which already accounts for CP.
// Layout: rank = ppRank * (TP * CP) + tpRank * CP + cpRank.
// getTensorParallelRank() correctly extracts tpRank regardless of CP.
int DPRank = worldConfig.getTensorParallelRank() / TPSizeInDPGroup;
```
Can we directly use `int DPRank = mCacheState->getParallelConfig().mDPrank`? Also cc the author of the original code @chuangz0
PR_Github #31184 [ run ] completed with state
f16f2c4 to d1ad962 (Compare)
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
d1ad962 to e66b3c5 (Compare)
/bot run --disable-fail-fast
PR_Github #31300 [ run ] triggered by Bot. Commit:
Signed-off-by: Balaram Buddharaju <[email protected]>
fa05159 to 8966bc3 (Compare)
/bot run --disable-fail-fast
PR_Github #31310 [ run ] triggered by Bot. Commit:
PR_Github #31310 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #31375 [ run ] triggered by Bot. Commit:
PR_Github #31375 [ run ] completed with state
Description
This PR makes changes to support attention DP alongside helix parallelism in TRT-LLM.
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
run
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill
`kill`

Kill all running builds associated with pull request.

skip
`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
Summary by CodeRabbit
New Features
Bug Fixes
Tests
✏️ Tip: You can customize this high-level summary in your review settings.