
Conversation

@jthomson04
Collaborator

@jthomson04 jthomson04 commented Dec 11, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced context transfer status tracking with separate reporting of completed and failed requests, enabling improved diagnostics and operational visibility into transfer operations.
    • New asynchronous KV-cache transfer manager provides centralized coordination, improved reliability, and better state management for disaggregated context caching operations.
  • Refactor

    • Consolidated asynchronous transfer tracking mechanisms to replace ad-hoc per-request management patterns for consistency and maintainability.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

@jthomson04
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27806 [ run ] triggered by Bot. Commit: 5053de8

@tensorrt-cicd
Collaborator

PR_Github #27806 [ run ] completed with state SUCCESS. Commit: 5053de8
/LLM/main/L0_MergeRequest_PR pipeline #21222 completed with status: 'FAILURE'

@jthomson04 jthomson04 force-pushed the jthomson04/async-transfer-manager branch from 5053de8 to ef7f19a on December 11, 2025 23:10
@jthomson04
Collaborator Author

/bot run

@jthomson04 jthomson04 force-pushed the jthomson04/async-transfer-manager branch from ef7f19a to a60fa96 on December 11, 2025 23:15
@tensorrt-cicd
Collaborator

PR_Github #27922 [ run ] triggered by Bot. Commit: a60fa96

@tensorrt-cicd
Collaborator

PR_Github #27922 [ run ] completed with state SUCCESS. Commit: a60fa96
/LLM/main/L0_MergeRequest_PR pipeline #21322 completed with status: 'FAILURE'

@jthomson04 jthomson04 force-pushed the jthomson04/async-transfer-manager branch from a60fa96 to 92497cd on December 12, 2025 17:20
@jthomson04
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #28062 [ run ] triggered by Bot. Commit: 92497cd

@jthomson04 jthomson04 marked this pull request as ready for review December 12, 2025 19:58
@jthomson04 jthomson04 requested review from a team as code owners December 12, 2025 19:58
@coderabbitai
Contributor

coderabbitai bot commented Dec 12, 2025

📝 Walkthrough

Walkthrough

The changes modify KV-cache context transfer status tracking across C++ and Python layers. The C++ cache transceiver now returns structured status (completed and error request sets) instead of void. A new helper in TrtGptModelInflightBatching processes these statuses. Python bindings adapt the return type by converting sets to tuples. The Python executor introduces AsyncTransferManager to centralize asynchronous transfer state management, replacing legacy per-request tracking.

Changes

Cohort / File(s) Summary
C++ Cache Transceiver Status Refactoring
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h, cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
Added new RequestStatuses struct containing sets of completed and error request IDs. Changed checkContextTransferStatus() return type from void to RequestStatuses in both BaseCacheTransceiver and CacheTransceiver. Implementation now accumulates request outcomes and returns the aggregated status.
TrtGptModelInflightBatching Status Handler
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h, cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
Added new public method checkContextTransferStatus(RequestVector const&, std::optional<int> const&) that queries the CacheTransceiver and updates active request states to kDISAGG_CONTEXT_COMPLETE for requests found in the returned completedRequestIds set.
Python Bindings for Cache Transceiver
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp, cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
Updated Python bindings to expose the new RequestStatuses return type as a tuple of (completedIds, errorIds) vectors. Added parameter naming (at_least_request_num) and GIL management in lambda wrappers.
Async Transfer Manager in PyExecutor
tensorrt_llm/_torch/pyexecutor/py_executor.py
Introduced AsyncTransferManager class to centralize KV-cache transfer state tracking, replacing legacy per-request counters and ctx_in_transmission_requests. Provides start_transfer(), end_transfer(), and requests_in_transfer() methods. Refactored _send_disagg_ctx_cache logic into _send_kv_async and integrated manager throughout transfer lifecycle (initialization, status checks, cleanup, termination).
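As a reading aid, here is a minimal, hypothetical sketch of what a manager with this interface could look like, based only on the method names listed above (start_transfer, end_transfer, requests_in_transfer) and the store_blocks_for_reuse / unpin_blocks_by_id calls quoted in the inline review comments further down. The class layout, the py_request_id attribute, and the exact kv_cache_manager signatures are assumptions for illustration, not the PR's actual implementation.

from typing import Dict, List, Optional


class _TransferMetadata:
    """Per-request bookkeeping (field names are illustrative assumptions)."""

    def __init__(self, block_id: Optional[int]):
        self.block_id = block_id
        self.counter = 0  # number of transfers currently in flight for this request


class AsyncTransferManager:
    """Sketch of a centralized tracker for asynchronous KV-cache transfers."""

    def __init__(self, kv_cache_manager, should_store_blocks: bool):
        self.kv_cache_manager = kv_cache_manager
        self.should_store_blocks = should_store_blocks
        # Mapping of request id to its transfer metadata
        self._requests_in_transfer: Dict[int, _TransferMetadata] = {}

    def start_transfer(self, request) -> None:
        """Register one in-flight transfer, pinning KV blocks on first use."""
        meta = self._requests_in_transfer.get(request.py_request_id)
        if meta is None:
            block_id = None
            if self.should_store_blocks:
                # Assumed call: store/pin the request's blocks for later reuse.
                block_id = self.kv_cache_manager.store_blocks_for_reuse(request)
            meta = _TransferMetadata(block_id)
            self._requests_in_transfer[request.py_request_id] = meta
        meta.counter += 1

    def end_transfer(self, request) -> bool:
        """Mark one transfer as finished; return True once none remain."""
        meta = self._requests_in_transfer.get(request.py_request_id)
        if meta is None:
            return False
        meta.counter -= 1
        if meta.counter > 0:
            return False
        if self.should_store_blocks and meta.block_id is not None:
            # Assumed call: release the blocks pinned in start_transfer().
            self.kv_cache_manager.unpin_blocks_by_id(meta.block_id)
        del self._requests_in_transfer[request.py_request_id]
        return True

    def requests_in_transfer(self) -> List[int]:
        """Ids of requests that still have at least one transfer in flight."""
        return list(self._requests_in_transfer.keys())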

Sequence Diagram

sequenceDiagram
    actor PyExecutor
    participant AsyncTransferManager
    participant CacheTransceiver
    participant TrtGptModelInflightBatching
    participant ResourceManager

    Note over PyExecutor,ResourceManager: KV-Cache Transfer Initialization
    PyExecutor->>AsyncTransferManager: start_transfer(request)
    AsyncTransferManager->>ResourceManager: pin blocks
    
    Note over PyExecutor,ResourceManager: Transfer In Progress
    PyExecutor->>CacheTransceiver: checkContextTransferStatus()
    CacheTransceiver-->>PyExecutor: RequestStatuses {completedIds, errorIds}
    
    PyExecutor->>TrtGptModelInflightBatching: checkContextTransferStatus(requests)
    TrtGptModelInflightBatching->>TrtGptModelInflightBatching: update request states<br/>for completed transfers
    
    Note over PyExecutor,ResourceManager: Transfer Completion
    PyExecutor->>AsyncTransferManager: end_transfer(request)
    AsyncTransferManager->>ResourceManager: unpin blocks
    AsyncTransferManager->>PyExecutor: transfer complete
    PyExecutor->>PyExecutor: _terminate_request(request)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • AsyncTransferManager lifecycle integration: Review how transfer state transitions (start→end) are wired through PyExecutor loop variants and interaction with existing ResourceManager; verify block pinning/unpinning logic is sound
  • Cross-layer status propagation: Trace RequestStatuses from C++ CacheTransceiver through bindings (set-to-tuple conversion) and into TrtGptModelInflightBatching state updates; ensure no race conditions
  • Legacy code removal/consolidation: Verify that removal of ctx_in_transmission_requests and _send_disagg_ctx_cache paths is complete and that all call sites now use AsyncTransferManager consistently
  • Binding parameter naming/defaults: Confirm at_least_request_num default (std::nullopt) is applied correctly in both nanobind and pybind wrappers

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 14.29% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description check ⚠️ Warning PR description is incomplete. Only a one-line summary and reference to a related PR are provided, with no substantive explanation of changes, design rationale, or test coverage. Fill in the Description section explaining what AsyncTransferManager does and why it's needed. Complete the Test Coverage section listing relevant tests. Ensure all checklist items are addressed with details.
✅ Passed checks (1 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly identifies the main change: introduction of an AsyncTransferManager for handling asynchronous KV-cache transfers, which is well-reflected in the raw_summary.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

1297-1303: _kv_connector_terminate_requests() assumes every finished request is tracked → possible KeyError / missed termination.
If kv_connector_manager.get_finished() can yield requests that weren’t registered via start_transfer(), end_transfer() will currently blow up or no-op (after the fix above).

Recommendation: guard on membership and decide the fallback (terminate vs ignore). Example:

     def _kv_connector_terminate_requests(self):
         if self.kv_connector_manager:
             reqs_to_terminate = self.kv_connector_manager.get_finished()
             for req in reqs_to_terminate:
-                if self.async_transfer_manager.end_transfer(req):
+                if self.async_transfer_manager.end_transfer(req):
                     self._terminate_request(req)
+                else:
+                    # If the connector reports completion for an untracked request, decide on safe cleanup.
+                    # (At minimum, avoid silently leaking resources.)
+                    self._terminate_request(req)

1615-1631: _send_kv_async() return value regression in overlap loop (currently returns None).
_executor_loop_overlap stores ctx_transmission_reqs = self._send_kv_async(...) but _send_kv_async doesn’t return; downstream _process_previous_batch() still references previous_batch.ctx_transmission_reqs.

Suggestion: have _send_kv_async() return the list of requests for which a transceiver transfer was started (closest to prior semantics), and update the function accordingly:

 @nvtx_range("_send_kv_async")
 def _send_kv_async(self, scheduled_requests: List[LlmRequest]):
+    ctx_transmission_reqs: List[LlmRequest] = []
@@
     if self.kv_cache_transceiver:
         disagg_should_store_blocks = self._disagg_should_store_blocks()
         for req in scheduled_requests:
             if req.is_context_only_request and (
                     req.is_context_finished
                     or req.is_finished_due_to_length):
                 self.kv_cache_transceiver.respond_and_send_async(req)
+                ctx_transmission_reqs.append(req)
@@
     if self.kv_cache_transceiver:
         self._check_disagg_ctx_cache_transfer_status(0)
+    return ctx_transmission_reqs

Then _process_previous_batch() can either keep its current behavior or be simplified/removed if no longer needed.

Also applies to: 2086-2130, 1728-1732

cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (1)

193-215: Public API change looks coherent; please also update the copyright year header.
Given this file was modified, the “current year” in the NVIDIA header should include 2025 (per repo guidelines).

Also applies to: 255-256

🧹 Nitpick comments (3)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (1)

238-240: Add a brief Doxygen contract for checkContextTransferStatus (what it updates + error handling).
Right now it’s a public API with no stated guarantees (e.g., does it also handle error IDs, or only mark completed?).

cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (1)

419-528: RequestStatuses return is a good step; consider (a) deterministic ordering and (b) simpler wait logic.

  • If the Python layer compares lists or expects stable order, convert sets → sorted vector in the binding (or switch to vector in RequestStatuses).
  • The wait_for(0) + future.get() path when “no timeout” is selected is a bit indirect; consider branching: if (!senderFutureTimeoutMs) future.get(); else ....
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (1)

992-1004: LGTM! Helper method correctly centralizes context transfer status checking.

The implementation properly retrieves transfer status from the cache transceiver and updates request states based on completed request IDs. The logic is sound and the helper promotes code reuse.

Optional: Consider Allman brace style for consistency with coding guidelines.

The coding guidelines specify Allman indentation style for C++ (opening brace on new line). While the existing file uses K&R style, new code could align with the guideline:

-void TrtGptModelInflightBatching::checkContextTransferStatus(
-    RequestVector const& activeRequests, std::optional<int> const& atLeastRequestNum)
-{
+void TrtGptModelInflightBatching::checkContextTransferStatus(
+    RequestVector const& activeRequests, std::optional<int> const& atLeastRequestNum)
+{

Note: This is a minor style point, as the current code is consistent with the rest of the file.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 246a877 and 92497cd.

📒 Files selected for processing (7)
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (3 hunks)
  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (6 hunks)
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (3 hunks)
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (1 hunks)
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (2 hunks)
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (14 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,h,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,h,cu}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #define whenever possible, as the latter are not visible to the compiler
A variable that is not modified after its initialization should be declared as const
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization and should be replaced with named constants
Use Allman indentation style for braces in C++
Put the semicolon for an empty for or while loop in a new line
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements)
If and else should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camel case with first letter lowercase (e.g., thisIsASubDir and thisIsAFilename.cpp)
All filenames involved in compilation of a compilation target must have case-insensitive unique filenames
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass)
Local variables, methods and namespaces should use camel case with first letter lowercase (e.g., localFooBar)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal)
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;)
Public, private and protected class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues), though the 'm' pre...

Files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
**/*.h

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.h: Use a preprocessor guard in C++ header files with the guard name format TRTLLM_ followed by the filename in all caps (e.g., TRTLLM_FOO_BAR_HELLO_H for file FooBarHello.h); do not include directory names in the symbol
Do not use underscore prefix or suffix in C++ preprocessor guard symbols; they are reserved in C++ standard for compilers or implementation

Files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py
🧠 Learnings (9)
📓 Common learnings
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
📚 Learning: 2025-08-21T09:41:49.347Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:2010-2045
Timestamp: 2025-08-21T09:41:49.347Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is specifically for updating bookkeeping when blocks are added during the context phase, not for refreshing offsets after detach operations. During detach operations, GenerationRequest::removeFrontBlock handles the necessary cache block bookkeeping internally.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h
  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-20T06:48:45.368Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h:0-0
Timestamp: 2025-08-20T06:48:45.368Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, updateSequenceCacheBlockOffsets is only called when adding a sequence, not during detach operations. During detach, the cache block bookkeeping is handled by GenerationRequest::removeFrontBlock.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-06T08:18:28.669Z
Learnt from: zhengd-nv
Repo: NVIDIA/TensorRT-LLM PR: 6633
File: cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp:145-155
Timestamp: 2025-08-06T08:18:28.669Z
Learning: In cpp/tensorrt_llm/batch_manager/dataTransceiverImpl.cpp, the existing `mMtxForMap` mutex in DataSenderImpl is sufficient to synchronize measurement file operations in the `release` method, as all file operations occur within the same critical section that protects the `mRequestToSession` map access.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
  • cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h
  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp
  • cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp
📚 Learning: 2025-09-02T13:43:22.657Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:728-731
Timestamp: 2025-09-02T13:43:22.657Z
Learning: The user pcastonguay prefers creating dedicated handler classes to encapsulate complex subsystem logic rather than spreading it across the main class. For disaggregated pipeline parallel termination, they suggest creating a `_disagg_pp_termination_handler` with a `cleanup()` method instead of manually waiting on termination handles during shutdown.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-12-12T03:27:08.565Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:08.565Z
Learning: In files under tensorrt_llm/_torch/pyexecutor, avoid accessing torch.Tensor objects inside for-loops when iterating over requests. Convert batched tensors to Python lists beforehand using tensor.tolist(), and then iterate over those lists. This improves performance by reducing tensor-bound operations inside hot loops. Apply this pattern to similar code paths that process batches to access simple Python data structures (lists) inside loops.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/py_executor.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp
📚 Learning: 2025-08-25T08:48:39.694Z
Learnt from: tshmilnvidia
Repo: NVIDIA/TensorRT-LLM PR: 5488
File: cpp/tensorrt_llm/executor/cache_transmission/nixl_utils/transferAgent.cpp:507-517
Timestamp: 2025-08-25T08:48:39.694Z
Learning: When reviewing potential compilation errors related to function parameter types, always check for type aliases using `using` declarations that might make seemingly incompatible types equivalent. RegisterDescs = MemoryDescs in TensorRT-LLM's transfer agent code.

Applied to files:

  • cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp
🧬 Code graph analysis (4)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.h (4)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
  • checkContextTransferStatus (419-528)
  • checkContextTransferStatus (419-419)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)
  • checkContextTransferStatus (992-1004)
  • checkContextTransferStatus (992-993)
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (63-66)
  • atLeastRequestNum (63-63)
  • atLeastRequestNum (68-71)
  • atLeastRequestNum (68-68)
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (61-65)
  • atLeastRequestNum (61-61)
  • atLeastRequestNum (67-70)
  • atLeastRequestNum (67-67)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (3)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
  • checkContextTransferStatus (419-528)
  • checkContextTransferStatus (419-419)
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (63-66)
  • atLeastRequestNum (63-63)
  • atLeastRequestNum (68-71)
  • atLeastRequestNum (68-68)
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (61-65)
  • atLeastRequestNum (61-61)
  • atLeastRequestNum (67-70)
  • atLeastRequestNum (67-67)
cpp/include/tensorrt_llm/batch_manager/cacheTransceiver.h (3)
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (61-65)
  • atLeastRequestNum (61-61)
  • atLeastRequestNum (67-70)
  • atLeastRequestNum (67-67)
cpp/tensorrt_llm/batch_manager/cacheTransceiver.cpp (2)
  • checkContextTransferStatus (419-528)
  • checkContextTransferStatus (419-419)
cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)
  • checkContextTransferStatus (992-1004)
  • checkContextTransferStatus (992-993)
cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (1)
cpp/tensorrt_llm/pybind/batch_manager/cacheTransceiver.cpp (4)
  • atLeastRequestNum (61-65)
  • atLeastRequestNum (61-61)
  • atLeastRequestNum (67-70)
  • atLeastRequestNum (67-67)
🪛 Ruff (0.14.8)
tensorrt_llm/_torch/pyexecutor/py_executor.py

150-152: Avoid specifying long messages outside the exception class

(TRY003)


2095-2095: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (5)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

2550-2551: Termination condition changed: ensure disagg requests don’t leak when block reuse is off.
The new guard if not request.is_disagg_context_transmission_state: is fine conceptually, but please sanity-check that every path that starts an async transfer also sets the expected “in transmission” state (esp. connector-only saves).

cpp/tensorrt_llm/batch_manager/trtGptModelInflightBatching.cpp (2)

933-936: LGTM! Context transfer status check properly integrated in sync flow.

The call is correctly guarded by a null check on mCacheTransceiver and uses atLeastRequestNum=0 to poll for any completed transfers without blocking, which is appropriate for the synchronous execution path.


1040-1044: LGTM! Blocking status check appropriately handles capacity constraints.

When the capacity scheduler cannot fit any requests, waiting for at least one context transfer to complete (atLeastRequestNum=1) is a sound strategy to free up KV cache resources. The call is properly guarded and the logic is correct.

cpp/tensorrt_llm/nanobind/batch_manager/cacheTransceiver.cpp (2)

63-66: LGTM! Trampoline class correctly updated for new return type.

The override properly reflects the updated signature of checkContextTransferStatus returning tb::RequestStatuses instead of void, and uses the appropriate NB_OVERRIDE_PURE macro.


91-107: LGTM! Python binding correctly adapted for new return type.

The lambda wrapper properly:

  • Releases the GIL during the potentially blocking call (essential for avoiding deadlocks)
  • Converts the C++ std::set<uint64_t> results to Python-compatible std::vector<int64_t>
  • Returns an idiomatic Python tuple of (completedRequestIds, errorRequestIds)
  • Uses snake_case parameter naming appropriate for Python APIs

The implementation follows nanobind best practices and is consistent with the pybind11 version.
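For illustration, a small sketch of how the Python side might consume this (completed_ids, error_ids) tuple; the function name, the requests_in_transfer mapping, and the error handling here are placeholders rather than the PR's code. Per the comments above, at_least_request_num=0 polls without blocking, while 1 waits for at least one transfer to finish.

from typing import Dict, List


def poll_context_transfers(kv_cache_transceiver,
                           requests_in_transfer: Dict[int, "LlmRequest"],
                           at_least_request_num: int = 0) -> List["LlmRequest"]:
    """Return the tracked requests whose context transfer has completed."""
    # The binding returns a (completed_ids, error_ids) tuple of request-id lists.
    completed_ids, error_ids = kv_cache_transceiver.check_context_transfer_status(
        at_least_request_num)
    for req_id in error_ids:
        # Placeholder handling; real code would surface the failure to the request.
        print(f"context transfer failed for request {req_id}")
    return [requests_in_transfer[r] for r in completed_ids if r in requests_in_transfer]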

@tensorrt-cicd
Collaborator

PR_Github #28062 [ run ] completed with state SUCCESS. Commit: 92497cd
/LLM/main/L0_MergeRequest_PR pipeline #21435 completed with status: 'FAILURE'

@jthomson04
Collaborator Author

/bot run

@jthomson04
Collaborator Author

/bot kill

@tensorrt-cicd
Collaborator

PR_Github #28141 [ run ] triggered by Bot. Commit: 4d11012

@tensorrt-cicd
Collaborator

PR_Github #28142 [ kill ] triggered by Bot. Commit: 4d11012

@tensorrt-cicd
Collaborator

PR_Github #28141 [ run ] completed with state ABORTED. Commit: 4d11012

@tensorrt-cicd
Collaborator

PR_Github #28142 [ kill ] completed with state SUCCESS. Commit: 4d11012
Successfully killed previous jobs for commit 4d11012

@jthomson04
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #28145 [ run ] triggered by Bot. Commit: 4d11012

Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>
@jthomson04 jthomson04 force-pushed the jthomson04/async-transfer-manager branch from 7fd480f to 026d52f on January 3, 2026 21:07
@jthomson04
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30461 [ run ] triggered by Bot. Commit: 026d52f

@tensorrt-cicd
Collaborator

PR_Github #30461 [ run ] completed with state SUCCESS. Commit: 026d52f
/LLM/main/L0_MergeRequest_PR pipeline #23483 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@jthomson04
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30629 [ run ] triggered by Bot. Commit: a8567dc

@tensorrt-cicd
Collaborator

PR_Github #30629 [ run ] completed with state SUCCESS. Commit: a8567dc
/LLM/main/L0_MergeRequest_PR pipeline #23638 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@pcastonguay
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30774 [ run ] triggered by Bot. Commit: a8567dc

@tensorrt-cicd
Collaborator

PR_Github #30774 [ run ] completed with state SUCCESS. Commit: a8567dc
/LLM/main/L0_MergeRequest_PR pipeline #23757 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@pcastonguay
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #30908 [ run ] triggered by Bot. Commit: a8567dc

@tensorrt-cicd
Collaborator

PR_Github #30908 [ run ] completed with state SUCCESS. Commit: a8567dc
/LLM/main/L0_MergeRequest_PR pipeline #23871 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@eopXD
Collaborator

eopXD commented Jan 8, 2026

/bot run --disable-fail-fast --reuse-test

@tensorrt-cicd
Collaborator

PR_Github #30992 [ run ] triggered by Bot. Commit: a8567dc

self.should_store_blocks = should_store_blocks

# Mapping of request id to the LlmRequest
self._requests: Dict[int, LlmRequest] = dict()
Collaborator

Nit: Why don't we just call it _requests_in_transfer?

request.state = LlmRequestState.DISAGG_CONTEXT_TRANS_IN_PROGRESS

if self.should_store_blocks:
    block_id = self.kv_cache_manager.store_blocks_for_reuse(
Collaborator

A heads-up, this code will need to change since #10111 will return block_ids.

Collaborator Author

Thanks for letting me know. Will hold off on merging this until #10111 is merged.


def __init__(self, block_id: Optional[int]):
    self.block_id = block_id
    self.counter = 1
Collaborator

Initializing the counter to 1 when this object is created saves us a line (the start_transfer call in the if-case below), but I feel this is too implicit. I think an initial value of 0 would be more appropriate.


if self.should_store_blocks:
    self.kv_cache_manager.unpin_blocks_by_id(
        transfer_metadata.block_id)
Collaborator

If unpin_blocks_by_id brings the refCount of all blocks in this request to 0, should should_terminate be True?

 def _check_disagg_ctx_cache_transfer_status(self, atLeastNum: int = 0):
-    self.kv_cache_transceiver.check_context_transfer_status(atLeastNum)
+    finished_requests, error_requests = self.kv_cache_transceiver.check_context_transfer_status(
+        atLeastNum)
Collaborator

What is the purpose of not marking the requests as complete here?

Collaborator Author

We can't mark requests as complete in check_context_transfer_status since a transfer with the KV connector might still be in progress.
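To illustrate the point with a sketch of the idea only (not the PR's code): when both the cache transceiver and a KV connector can have transfers in flight for the same request, completion has to be gated on a per-request count reaching zero rather than on either path finishing alone. The names below are hypothetical.

class TransferCounter:
    """Illustrative per-request count of outstanding asynchronous transfers."""

    def __init__(self):
        self._outstanding = {}  # request id -> number of transfers in flight

    def start(self, req_id: int) -> None:
        self._outstanding[req_id] = self._outstanding.get(req_id, 0) + 1

    def finish(self, req_id: int) -> bool:
        """Return True only when the last outstanding transfer for req_id ends."""
        remaining = self._outstanding.get(req_id, 0) - 1
        if remaining > 0:
            self._outstanding[req_id] = remaining
            return False
        self._outstanding.pop(req_id, None)
        return remaining == 0


counter = TransferCounter()
counter.start(7)                    # transceiver send started
counter.start(7)                    # KV-connector save started for the same request
assert counter.finish(7) is False   # transceiver done, connector still in flight
assert counter.finish(7) is True    # connector done -> request may now be terminated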

@tensorrt-cicd
Collaborator

PR_Github #30992 [ run ] completed with state SUCCESS. Commit: a8567dc
/LLM/main/L0_MergeRequest_PR pipeline #23947 completed with status: 'SUCCESS'

Signed-off-by: jthomson04 <[email protected]>
Signed-off-by: jthomson04 <[email protected]>