[https://nvbugs/5644632][fix] Use correct memory pool #9196

HuiGao-NV wants to merge 4 commits into NVIDIA:main from
Conversation
📝 Walkthrough

This PR refactors CUDA memory pool handling across the TensorRT-LLM PyTorch integration by introducing explicit torch.cuda.MemPool instances in place of raw pool handles.
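As a rough illustration of the pattern this walkthrough describes (a hedged sketch based on the review comments below, not the PR's actual diff; it assumes a recent PyTorch that provides torch.cuda.MemPool):

```python
import torch

# Before: only an opaque pool handle; no owned pool object to manage.
pool_handle = torch.cuda.graph_pool_handle()

# After: an explicit MemPool instance owns the pool, and its `id` serves as
# the handle passed to CUDA graph capture (per the review comments below).
memory_pool = torch.cuda.MemPool()
memory_pool_handle = memory_pool.id

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph, pool=memory_pool_handle):
    # Allocations made during capture are served from the explicit pool.
    out = torch.empty(1024, device="cuda")
```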
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–25 minutes
Pre-merge checks and finishing touches❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tensorrt_llm/_torch/memory_buffer_utils.py (1)
60-81: Consider adding a comment to clarify the reserved block preference logic.

The conditional at lines 72-73 correctly ensures that a reserved block is not superseded by an unreserved one during best-fit search. However, the nested logic could be clearer for future maintainers.
Consider adding a comment:
```diff
 # Find the smallest buffer that is still large enough (best-fit).
 if block.buffer.numel() < smallest_sufficient_size:
+    # Prefer reserved blocks: don't replace a reserved best-fit with an unreserved candidate
     if best_fit_block is not None and best_fit_block.is_reserved and not block.is_reserved:
         continue
```
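For context, the search being discussed looks roughly like this (a simplified, self-contained sketch; the `Block` dataclass and `find_best_fit` helper are illustrative stand-ins, not the file's actual definitions):

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class Block:
    buffer: torch.Tensor
    is_reserved: bool = False


def find_best_fit(blocks: list[Block], required_numel: int) -> Optional[Block]:
    """Best-fit search that never trades a reserved best-fit for an unreserved one."""
    best_fit_block: Optional[Block] = None
    smallest_sufficient_size = float("inf")
    for block in blocks:
        if block.buffer.numel() < required_numel:
            continue  # Too small to satisfy the request.
        # Find the smallest buffer that is still large enough (best-fit).
        if block.buffer.numel() < smallest_sufficient_size:
            # Prefer reserved blocks: don't replace a reserved best-fit
            # with an unreserved candidate.
            if (best_fit_block is not None and best_fit_block.is_reserved
                    and not block.is_reserved):
                continue
            best_fit_block = block
            smallest_sufficient_size = block.buffer.numel()
    return best_fit_block
```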
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- tensorrt_llm/_torch/compilation/backend.py (2 hunks)
- tensorrt_llm/_torch/memory_buffer_utils.py (1 hunks)
- tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (4 hunks)
- tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
- tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
- tests/integration/test_lists/waives.txt
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
tensorrt_llm/_torch/pyexecutor/model_engine.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/memory_buffer_utils.py (1)
tensorrt_llm/_utils.py (1)
numel (1009-1010)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (6)
tensorrt_llm/_torch/pyexecutor/cuda_graph_runner.py (4)
64-64: LGTM: Type annotation improved.

Changing the type from Any to torch.cuda.MemPool makes the expected type explicit and improves type safety.
298-298: LGTM: Correct use of memory pool handle.

The graph capture correctly uses memory_pool_handle for the pool parameter, aligning with the new handle-based approach.
431-432: LGTM: Proper cleanup of memory pool handle.

The cleanup correctly deletes and nullifies the memory_pool_handle, ensuring resources are properly released.
101-103: Fix syntax error: stray closing parenthesis.

Line 102 contains a stray closing parenthesis that will cause a syntax error. Line 101 is a complete statement and doesn't require this.

Apply this diff:

```diff
 self.memory_pool = config.cuda_graph_mem_pool if config.cuda_graph_mem_pool else torch.cuda.MemPool()
-)
 self.memory_pool_handle = self.memory_pool.id
```

Likely an incorrect or invalid review comment.
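Putting the four comments on this file together, the intended lifecycle reads roughly as follows (a hedged reconstruction from the excerpts above; the class name, `config` fields, and method boundaries are assumptions):

```python
import torch


class CudaGraphRunner:
    def __init__(self, config):
        # Init: reuse a caller-provided pool if any, otherwise own a new one.
        self.memory_pool: torch.cuda.MemPool = (
            config.cuda_graph_mem_pool if config.cuda_graph_mem_pool
            else torch.cuda.MemPool())
        self.memory_pool_handle = self.memory_pool.id

    def capture(self, fn, *args):
        # Capture: graph allocations are routed through the pool handle.
        graph = torch.cuda.CUDAGraph()
        with torch.cuda.graph(graph, pool=self.memory_pool_handle):
            fn(*args)
        return graph

    def cleanup(self):
        # Cleanup: delete and nullify the handle so the pool can be released.
        del self.memory_pool_handle
        self.memory_pool_handle = None
```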
tensorrt_llm/_torch/compilation/backend.py (2)

26-27: LGTM: Explicit MemPool lifecycle management.

Adding _graph_pool as a class attribute enables explicit management of the CUDA memory pool lifecycle, which is cleaner than relying on implicit pool handles alone.

62-63: LGTM: Correct MemPool initialization.

The initialization correctly creates a MemPool instance and stores its id as the handle, replacing the previous direct graph_pool_handle() call with explicit pool management.
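A minimal sketch of the backend-side pattern these two comments describe (class-attribute placement per the comment on lines 26-27, initialization per lines 62-63; everything else is assumed):

```python
from typing import Optional

import torch


class Backend:
    # Explicit class-level pool (lines 26-27): the MemPool's lifetime is now
    # managed by the class rather than implied by an opaque handle.
    _graph_pool: Optional[torch.cuda.MemPool] = None

    def __init__(self):
        # Initialization (lines 62-63): create the pool once and store its
        # `id` as the handle, replacing the bare graph_pool_handle() call.
        if Backend._graph_pool is None:
            Backend._graph_pool = torch.cuda.MemPool()
        self._graph_pool_handle = Backend._graph_pool.id
```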
/bot run --disable-fail-fast

PR_Github #24683 [ run ] triggered by Bot. Commit:

PR_Github #24683 [ run ] completed with state

Force-pushed: 840ae4b to 729bf8f (Compare)

/bot run --disable-fail-fast

PR_Github #24696 [ run ] triggered by Bot. Commit:

/bot run --disable-fail-fast

PR_Github #24707 [ run ] triggered by Bot. Commit:

PR_Github #24696 [ run ] completed with state

PR_Github #24707 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #24717 [ run ] triggered by Bot. Commit:

PR_Github #24717 [ run ] completed with state

Force-pushed: 64c6aea to 843a759 (Compare)
/bot run --disable-fail-fast

PR_Github #24749 [ run ] triggered by Bot. Commit:

PR_Github #24749 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #24815 [ run ] triggered by Bot. Commit:

PR_Github #24815 [ run ] completed with state

/bot run

PR_Github #31370 [ run ] triggered by Bot. Commit:

PR_Github #31370 [ run ] completed with state

/bot run

PR_Github #32268 [ run ] triggered by Bot. Commit:

PR_Github #32268 [ run ] completed with state
Signed-off-by: Hui Gao <huig@nvidia.com>
Force-pushed: 5f1748d to 876dd68 (Compare)
/bot run

PR_Github #36943 [ run ] triggered by Bot. Commit:

PR_Github #36943 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #36974 [ run ] triggered by Bot. Commit:

PR_Github #36974 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #37040 [ run ] triggered by Bot. Commit:

PR_Github #37040 [ run ] completed with state
Signed-off-by: Hui Gao <huig@nvidia.com>
/bot run --stage-list "DGX_H100-4_GPUs-PyTorch-Ray-1, DGX_H100-2_GPUs-PyTorch-Others-1" --disable-fail-fast

PR_Github #37085 [ run ] triggered by Bot. Commit:

PR_Github #37085 [ run ] completed with state
Signed-off-by: Hui Gao <huig@nvidia.com>
/bot run --stage-list "DGX_H100-4_GPUs-PyTorch-Ray-1" --disable-fail-fast

PR_Github #37134 [ run ] triggered by Bot. Commit:

PR_Github #37134 [ run ] completed with state

/bot run --stage-list "DGX_H100-4_GPUs-PyTorch-Ray-1" --disable-fail-fast

PR_Github #37152 [ run ] triggered by Bot. Commit:

PR_Github #37152 [ run ] completed with state
```python
best_fit_block.is_reserved = True
# A suitable buffer was found, so reuse it.
return self._view_as(best_fit_block.buffer, tensor_shape, dtype)
# else:
```
Just directly remove the commented out code?
Summary by CodeRabbit
Release Notes
Bug Fixes
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.
Run
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- --post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see
docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill : Kill all running builds associated with pull request.
skip
skip --comment COMMENT : Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline : Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
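For reference, the invocations used earlier in this thread are concrete instances of this grammar:

```
/bot run
/bot run --disable-fail-fast
/bot run --stage-list "DGX_H100-4_GPUs-PyTorch-Ray-1" --disable-fail-fast
```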