[None][chore] Async Transfer Manager #9891
Conversation
/bot run
PR_Github #27806 [ run ] triggered by Bot. Commit:
PR_Github #27806 [ run ] completed with state
5053de8 to ef7f19a
/bot run
ef7f19a to a60fa96
PR_Github #27922 [ run ] triggered by Bot. Commit:
PR_Github #27922 [ run ] completed with state
a60fa96 to 92497cd
/bot run
PR_Github #28062 [ run ] triggered by Bot. Commit:
📝 Walkthrough

The changes modify KV-cache context transfer status tracking across C++ and Python layers. The C++ cache transceiver now returns structured status (completed and error request sets) instead of void. A new helper in TrtGptModelInflightBatching processes these statuses. Python bindings adapt the return type by converting sets to tuples. The Python executor introduces AsyncTransferManager to centralize asynchronous transfer state management, replacing legacy per-request tracking.

Changes
Sequence Diagram

sequenceDiagram
actor PyExecutor
participant AsyncTransferManager
participant CacheTransceiver
participant TrtGptModelInflightBatching
participant ResourceManager
Note over PyExecutor,ResourceManager: KV-Cache Transfer Initialization
PyExecutor->>AsyncTransferManager: start_transfer(request)
AsyncTransferManager->>ResourceManager: pin blocks
Note over PyExecutor,ResourceManager: Transfer In Progress
PyExecutor->>CacheTransceiver: checkContextTransferStatus()
CacheTransceiver-->>PyExecutor: RequestStatuses {completedIds, errorIds}
PyExecutor->>TrtGptModelInflightBatching: checkContextTransferStatus(requests)
TrtGptModelInflightBatching->>TrtGptModelInflightBatching: update request states<br/>for completed transfers
Note over PyExecutor,ResourceManager: Transfer Completion
PyExecutor->>AsyncTransferManager: end_transfer(request)
AsyncTransferManager->>ResourceManager: unpin blocks
AsyncTransferManager->>PyExecutor: transfer complete
PyExecutor->>PyExecutor: _terminate_request(request)
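Read as executor-side pseudocode, the diagram corresponds roughly to the sketch below. The method names start_transfer, end_transfer, respond_and_send_async, check_context_transfer_status and get_finished appear in this PR; the free functions and the wiring here are illustrative only, not the actual PyExecutor code.

def send_kv_async(executor, scheduled_requests):
    # Kick off asynchronous KV-cache sends for finished context-only requests.
    for req in scheduled_requests:
        if req.is_context_only_request and (req.is_context_finished
                                            or req.is_finished_due_to_length):
            executor.async_transfer_manager.start_transfer(req)       # pins the request's blocks
            executor.kv_cache_transceiver.respond_and_send_async(req)

def poll_context_transfers(executor, at_least_num=0):
    # The transceiver now returns structured status (completed ids, error ids) instead of None.
    completed_ids, error_ids = executor.kv_cache_transceiver.check_context_transfer_status(at_least_num)
    return completed_ids, error_ids

def terminate_finished(executor):
    # A request is terminated only once the transfer manager reports all of its transfers ended.
    for req in executor.kv_connector_manager.get_finished():
        if executor.async_transfer_manager.end_transfer(req):         # unpins blocks
            executor._terminate_request(req)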
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
1297-1303: _kv_connector_terminate_requests() assumes every finished request is tracked → possible KeyError / missed termination.
If kv_connector_manager.get_finished() can yield requests that weren't registered via start_transfer(), end_transfer() will currently blow up or no-op (after the fix above). Recommendation: guard on membership and decide the fallback (terminate vs ignore). Example:

 def _kv_connector_terminate_requests(self):
     if self.kv_connector_manager:
         reqs_to_terminate = self.kv_connector_manager.get_finished()
         for req in reqs_to_terminate:
-            if self.async_transfer_manager.end_transfer(req):
+            if self.async_transfer_manager.end_transfer(req):
                 self._terminate_request(req)
+            else:
+                # If the connector reports completion for an untracked request, decide on safe cleanup.
+                # (At minimum, avoid silently leaking resources.)
+                self._terminate_request(req)
1615-1631: _send_kv_async() return value regression in overlap loop (currently returns None).
_executor_loop_overlap stores ctx_transmission_reqs = self._send_kv_async(...) but _send_kv_async doesn't return; downstream _process_previous_batch() still references previous_batch.ctx_transmission_reqs. Suggestion: have _send_kv_async() return the list of requests for which a transceiver transfer was started (closest to prior semantics), and update the function accordingly:

 @nvtx_range("_send_kv_async")
 def _send_kv_async(self, scheduled_requests: List[LlmRequest]):
+    ctx_transmission_reqs: List[LlmRequest] = []
@@
     if self.kv_cache_transceiver:
         disagg_should_store_blocks = self._disagg_should_store_blocks()
         for req in scheduled_requests:
             if req.is_context_only_request and (
                     req.is_context_finished or req.is_finished_due_to_length):
                 self.kv_cache_transceiver.respond_and_send_async(req)
+                ctx_transmission_reqs.append(req)
@@
     if self.kv_cache_transceiver:
         self._check_disagg_ctx_cache_transfer_status(0)
+    return ctx_transmission_reqs

Then _process_previous_batch() can either keep its current behavior or be simplified/removed if no longer needed.

Also applies to: 2086-2130, 1728-1732
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (1)
193-215: Public API change looks coherent; please also update the copyright year header.
Given this file was modified, the “current year” in the NVIDIA header should include 2025 (per repo guidelines).

Also applies to: 255-256
🧹 Nitpick comments (3)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (1)
238-240: Add a brief Doxygen contract for checkContextTransferStatus (what it updates + error handling).
Right now it’s a public API with no stated guarantees (e.g., does it also handle error IDs, or only mark completed?).

cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (1)
419-528: RequestStatuses return is a good step; consider (a) deterministic ordering and (b) simpler wait logic.
- If the Python layer compares lists or expects stable order, convert sets → sorted vector in the binding (or switch to vector in RequestStatuses).
- The wait_for(0) + future.get() path when “no timeout” is selected is a bit indirect; consider branching: if (!senderFutureTimeoutMs) future.get(); else ....

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1)
992-1004: LGTM! Helper method correctly centralizes context transfer status checking.
The implementation properly retrieves transfer status from the cache transceiver and updates request states based on completed request IDs. The logic is sound and the helper promotes code reuse.
Optional: Consider Allman brace style for consistency with coding guidelines.
The coding guidelines specify Allman indentation style for C++ (opening brace on new line). While the existing file uses K&R style, new code could align with the guideline:

-void TrtGptModelInflightBatching::checkContextTransferStatus(
-    RequestVector const& activeRequests, std::optional<int> const& atLeastRequestNum)
-{
+void TrtGptModelInflightBatching::checkContextTransferStatus(
+    RequestVector const& activeRequests, std::optional<int> const& atLeastRequestNum)
+{

Note: This is a minor style point, as the current code is consistent with the rest of the file.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (3 hunks)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (6 hunks)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (3 hunks)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (1 hunks)
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (2 hunks)
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (2 hunks)
tensorrt_llm/_torch/pyexecutor/py_executor.py (14 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,h,cu}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.{cpp,h,cu}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #define whenever possible, as the latter are not visible to the compiler
A variable that is not modified after its initialization should be declared as const
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization and should be replaced with named constants
Use Allman indentation style for braces in C++
Put the semicolon for an empty for or while loop in a new line
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements)
If and else should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camel case with first letter lowercase (e.g., thisIsASubDir and thisIsAFilename.cpp)
All filenames involved in compilation of a compilation target must have case-insensitive unique filenames
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass)
Local variables, methods and namespaces should use camel case with first letter lowercase (e.g., localFooBar)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal)
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;)
Public, private and protected class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues), though the 'm' pre...
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
**/*.h
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.h: Use a preprocessor guard in C++ header files with the guard name format TRTLLM_ followed by the filename in all caps (e.g., TRTLLM_FOO_BAR_HELLO_H for file FooBarHello.h); do not include directory names in the symbol
Do not use underscore prefix or suffix in C++ preprocessor guard symbols; they are reserved in C++ standard for compilers or implementation
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
tensorrt_llm/_torch/pyexecutor/py_executor.py
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""")
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
🧠 Learnings (9)
📓 Common learnings
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.
Applied to files:
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-09-02T13:43:22.657Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:728-731
Timestamp: 2025-09-02T13:43:22.657Z
Learning: The user pcastonguay prefers creating dedicated handler classes to encapsulate complex subsystem logic rather than spreading it across the main class. For disaggregated pipeline parallel termination, they suggest creating a `_disagg_pp_termination_handler` with a `cleanup()` method instead of manually waiting on termination handles during shutdown.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.
Applied to files:
tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Applied to files:
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-25T08:48:39.694Z
Learnt from: tshmilnvidia
Repo: NVIDIA/TensorRT-LLM PR: 5488
File: cpp/tensorrt_llm/executor/cache_transmission/nixl_utils/transferAgent.cpp:507-517
Timestamp: 2025-08-25T08:48:39.694Z
Learning: When reviewing potential compilation errors related to function parameter types, always check for type aliases using `using` declarations that might make seemingly incompatible types equivalent. RegisterDescs = MemoryDescs in TensorRT-LLM's transfer agent code.
Applied to files:
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
🧬 Code graph analysis (4)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (4)
  cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
    checkContextTransferStatus (419-528), checkContextTransferStatus (419-419)
  cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)
    checkContextTransferStatus (992-1004), checkContextTransferStatus (992-993)
  cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (63-66), atLeastRequestNum (63-63), atLeastRequestNum (68-71), atLeastRequestNum (68-68)
  cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (61-65), atLeastRequestNum (61-61), atLeastRequestNum (67-70), atLeastRequestNum (67-67)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (3)
  cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
    checkContextTransferStatus (419-528), checkContextTransferStatus (419-419)
  cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (63-66), atLeastRequestNum (63-63), atLeastRequestNum (68-71), atLeastRequestNum (68-68)
  cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (61-65), atLeastRequestNum (61-61), atLeastRequestNum (67-70), atLeastRequestNum (67-67)
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (3)
  cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (61-65), atLeastRequestNum (61-61), atLeastRequestNum (67-70), atLeastRequestNum (67-67)
  cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
    checkContextTransferStatus (419-528), checkContextTransferStatus (419-419)
  cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)
    checkContextTransferStatus (992-1004), checkContextTransferStatus (992-993)
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (1)
  cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
    atLeastRequestNum (61-65), atLeastRequestNum (61-61), atLeastRequestNum (67-70), atLeastRequestNum (67-67)
🪛 Ruff (0.14.8)
tensorrt_llm/_torch/pyexecutor/py_executor.py
150-152: Avoid specifying long messages outside the exception class
(TRY003)
2095-2095: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (5)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
2550-2551: Termination condition changed: ensure disagg requests don’t leak when block reuse is off.
The new guard if not request.is_disagg_context_transmission_state: is fine conceptually, but please sanity-check that every path that starts an async transfer also sets the expected “in transmission” state (esp. connector-only saves).

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)
933-936: LGTM! Context transfer status check properly integrated in sync flow.
The call is correctly guarded by a null check on mCacheTransceiver and uses atLeastRequestNum=0 to poll for any completed transfers without blocking, which is appropriate for the synchronous execution path.
1040-1044: LGTM! Blocking status check appropriately handles capacity constraints.
When the capacity scheduler cannot fit any requests, waiting for at least one context transfer to complete (atLeastRequestNum=1) is a sound strategy to free up KV cache resources. The call is properly guarded and the logic is correct.
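For readers following along from the Python side, the same two modes are visible through the binding; a minimal illustration, assuming only the bound method name check_context_transfer_status (the surrounding variable names are made up):

# Non-blocking poll: return immediately with whatever transfers have completed so far.
completed_ids, error_ids = kv_cache_transceiver.check_context_transfer_status(0)

# Blocking variant: when the capacity scheduler cannot fit any request, wait until at
# least one context transfer completes so its KV-cache blocks can be freed.
completed_ids, error_ids = kv_cache_transceiver.check_context_transfer_status(1)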
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (2)
63-66: LGTM! Trampoline class correctly updated for new return type.
The override properly reflects the updated signature of checkContextTransferStatus returning tb::RequestStatuses instead of void, and uses the appropriate NB_OVERRIDE_PURE macro.
91-107: LGTM! Python binding correctly adapted for new return type.
The lambda wrapper properly:
- Releases the GIL during the potentially blocking call (essential for avoiding deadlocks)
- Converts the C++ std::set<uint64_t> results to Python-compatible std::vector<int64_t>
- Returns an idiomatic Python tuple of (completedRequestIds, errorRequestIds)
- Uses snake_case parameter naming appropriate for Python APIs
The implementation follows nanobind best practices and is consistent with the pybind11 version.
PR_Github #28062 [ run ] completed with state
/bot run
/bot kill
PR_Github #28141 [ run ] triggered by Bot. Commit:
PR_Github #28142 [ kill ] triggered by Bot. Commit:
PR_Github #28141 [ run ] completed with state
PR_Github #28142 [ kill ] completed with state
/bot run
PR_Github #28145 [ run ] triggered by Bot. Commit:
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
7fd480f to 026d52f
/bot run --disable-fail-fast
PR_Github #30461 [ run ] triggered by Bot. Commit:
PR_Github #30461 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #30629 [ run ] triggered by Bot. Commit:
PR_Github #30629 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #30774 [ run ] triggered by Bot. Commit:
PR_Github #30774 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #30908 [ run ] triggered by Bot. Commit:
PR_Github #30908 [ run ] completed with state
/bot run --disable-fail-fast --reuse-test
PR_Github #30992 [ run ] triggered by Bot. Commit:
self.should_store_blocks = should_store_blocks

# Mapping of request id to the LlmRequest
self._requests: Dict[int, LlmRequest] = dict()
Nit: Why don't we just call it _requests_in_transfer?
request.state = LlmRequestState.DISAGG_CONTEXT_TRANS_IN_PROGRESS

if self.should_store_blocks:
    block_id = self.kv_cache_manager.store_blocks_for_reuse(
A heads-up, this code will need to change since #10111 will return block_ids.
Thanks for letting me know. Will hold off on merging this until #10111 is merged.
def __init__(self, block_id: Optional[int]):
    self.block_id = block_id
    self.counter = 1
An initial value of 1 on creation of this object saves us a line calling start_transfer in the if-case below, but I feel this is too implicit. I think an initial value of 0 would be more appropriate.
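A minimal sketch of that suggestion (counter starting at 0 with explicit start/end calls); the class and method names here are illustrative, not the PR's actual code:

class TransferMetadata:
    def __init__(self, block_id=None):
        self.block_id = block_id
        self.counter = 0  # no implicit "first transfer" on construction

    def start_transfer(self):
        self.counter += 1

    def end_transfer(self):
        # True once the last outstanding transfer for this request has ended.
        self.counter -= 1
        return self.counter == 0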
if self.should_store_blocks:
    self.kv_cache_manager.unpin_blocks_by_id(
        transfer_metadata.block_id)
If unpin_blocks_by_id brings the refCount of all blocks in this request to 0, should should_terminate be True?
 def _check_disagg_ctx_cache_transfer_status(self, atLeastNum: int = 0):
-    self.kv_cache_transceiver.check_context_transfer_status(atLeastNum)
+    finished_requests, error_requests = self.kv_cache_transceiver.check_context_transfer_status(
+        atLeastNum)
What is the purpose of not marking the requests as complete here?
We can't mark requests as complete in check_context_transfer_status since a transfer with the KV connector might still be in progress.
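In other words, the transceiver result is only one of possibly several completion signals per request; a hedged sketch of the intended call pattern, with the manager/executor names assumed rather than taken from the code:

manager.start_transfer(req)      # transceiver send started (pins blocks on first start)
manager.start_transfer(req)      # KV-connector save started for the same request

# Later, as each path reports completion:
if manager.end_transfer(req):    # False: the connector save is still in flight
    executor._terminate_request(req)
if manager.end_transfer(req):    # True: last transfer ended, blocks unpinned
    executor._terminate_request(req)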
PR_Github #30992 [ run ] completed with state
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Summary by CodeRabbit
New Features
Refactor
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

Details

run [--reuse-test (optional) pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional) pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.