Conversation

@SimengLiu-nv (Collaborator) commented Dec 8, 2025

…nd streaming. Added tests accordingly.

Summary by CodeRabbit

  • Bug Fixes

    • Fixed an issue where tokens could be re-decoded incorrectly when generating multiple outputs simultaneously, ensuring accurate text generation.
  • Tests

    • Added comprehensive test coverage for streaming batch completions with multiple output options to improve reliability.


Description

Test Coverage

tests/unittest/llmapi/apps/_test_openai_completions.py::test_batch_completions_with_option_n_streaming

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
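
For example, a typical invocation that runs a single test stage and disables fail-fast (the stage name here is illustrative) would be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast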

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@SimengLiu-nv requested a review from ixlmar December 8, 2025 19:47
@SimengLiu-nv marked this pull request as ready for review December 8, 2025 19:48
@SimengLiu-nv requested a review from a team as a code owner December 8, 2025 19:48
@SimengLiu-nv (Collaborator Author)

/bot run

@coderabbitai (Contributor) bot commented Dec 8, 2025

📝 Walkthrough

Walkthrough

The pull request includes a bug fix in the detokenization pipeline to prevent re-decoding of already-processed tokens when handling multiple outputs, and adds a new streaming completions test case that validates batch operations with semantic similarity checks.

Changes

  • Detokenization state tracking (tensorrt_llm/executor/result.py): Updates beam_output._last_token_ids_len after detokenization to track processed tokens and prevent redundant re-decoding in multi-output scenarios (n > 1).
  • Streaming completions test (tests/unittest/llmapi/apps/_test_openai_completions.py): Adds a new test, test_batch_completions_with_option_n_streaming, for async OpenAI batch completions with streaming enabled. The test creates a non-streaming reference completion, runs a streaming batch with n=3, and validates outputs using a semantic similarity threshold (0.8).
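
To make the mechanics concrete, here is a minimal, hypothetical sketch of the incremental-detokenization pattern summarized above. Only beam_output.token_ids and _last_token_ids_len come from this PR; the tokenizer argument and surrounding scaffolding are assumptions.

def decode_delta(tokenizer, beam_output):
    # Decode only the tokens that arrived since the previous streaming chunk.
    new_ids = beam_output.token_ids[beam_output._last_token_ids_len:]
    beam_output.text += tokenizer.decode(new_ids)
    # The fix: advance the processed-token count so that, with n > 1, the
    # next chunk for this output does not re-decode (and re-emit) these tokens.
    beam_output._last_token_ids_len = len(beam_output.token_ids)

Per the walkthrough, the last assignment is what the PR adds: without it, subsequent streaming chunks re-decode already-processed tokens.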

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • executor/result.py: Single state assignment after detokenization—straightforward fix with minimal logic.
  • _test_openai_completions.py: New test case logic involves understanding streaming batch behavior and similarity-based validation; requires verification that the test properly covers the multi-output streaming scenario.
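
A hedged sketch of the validation flow the summary describes follows; the client wiring, model name, and the util.similar() signature are assumptions rather than the actual test code.

import openai
from utils import util  # repo test helper with similar(); path assumed

async def check_streaming_with_n(client: openai.AsyncOpenAI, model: str,
                                 prompt: str) -> None:
    # Non-streaming reference completion for comparison.
    reference = await client.completions.create(
        model=model, prompt=prompt, max_tokens=32, temperature=0.0)
    reference_text = reference.choices[0].text

    # Streaming request with n=3; accumulate each choice's chunks by index.
    accumulated = {i: "" for i in range(3)}
    stream = await client.completions.create(
        model=model, prompt=prompt, max_tokens=32, temperature=0.0,
        n=3, stream=True)
    async for chunk in stream:
        for choice in chunk.choices:
            accumulated[choice.index] += choice.text

    # Without the fix, re-decoded tokens corrupt some streamed outputs; with
    # it, every accumulated text should stay close to the reference.
    for text in accumulated.values():
        assert util.similar(text, reference_text, threshold=0.8)

In the real test this would run under pytest's async support against the OpenAI-compatible server fixture.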

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check ⚠️ Warning: The PR description is incomplete and does not follow the required template; critical sections, including a detailed explanation of the issue and the solution, are missing or empty. Resolution: add a comprehensive description explaining what the issue is, why it occurs, and how the fix addresses it, including why n>1 with streaming causes corruption and how the detokenization state tracking resolves this.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title clearly follows the required format [NVBugs ID][type] Summary and accurately describes the main fix for output corruption with n>1 and streaming.

@coderabbitai (Contributor) bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/unittest/llmapi/apps/_test_openai_completions.py (1)

8-8: Consider maintaining namespace per coding guidelines.

Per the coding guidelines: "Always maintain the namespace when importing in Python, even if only one class or function from a module is used." Consider changing to from utils import util and using util.similar() instead.

However, this matches the existing import style in the file (lines 12-13), so this is a low-priority suggestion.

As per coding guidelines, prefer namespace imports.
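
Concretely, the two styles look like this (module and function names are taken from the suggestion above):

from utils import util          # preferred: namespace visible at the call site
util.similar("abc", "abd")

from utils.util import similar  # current style in the file, discouraged
similar("abc", "abd")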

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0a09465 and 997d8d7.

📒 Files selected for processing (2)
  • tensorrt_llm/executor/result.py (1 hunks)
  • tests/unittest/llmapi/apps/_test_openai_completions.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
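
As a small illustration of the last two try-except points (a hypothetical helper, not project code):

def doubled(value):
    try:
        number = int(value)  # keep the try body as small as possible
    except (TypeError, ValueError):  # only the errors int() can raise here
        return None
    else:
        return number * 2  # duck-typed follow-up logic belongs in else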

Files:

  • tensorrt_llm/executor/result.py
  • tests/unittest/llmapi/apps/_test_openai_completions.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tensorrt_llm/executor/result.py
  • tests/unittest/llmapi/apps/_test_openai_completions.py
🧠 Learnings (4)
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.

Applied to files:

  • tensorrt_llm/executor/result.py
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • tensorrt_llm/executor/result.py
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • tensorrt_llm/executor/result.py
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/executor/result.py
🪛 Ruff (0.14.7)
tests/unittest/llmapi/apps/_test_openai_completions.py

230-230: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/executor/result.py (1)

634-636: LGTM! Critical fix for streaming with n > 1.

The update to _last_token_ids_len after detokenization correctly prevents re-decoding of already-processed tokens in subsequent streaming responses when multiple outputs (n > 1) are generated. This ensures each beam output's token tracking remains consistent across streaming chunks.

tests/unittest/llmapi/apps/_test_openai_completions.py (1)

208-239: Good test coverage for the streaming + n>1 fix.

The test properly validates the fix by testing batch completions with n=3 in streaming mode, which directly exercises the code path fixed in result.py. The similarity check with threshold=0.8 is reasonable given potential non-deterministic variations.

However, verify that the similar() function from utils.util exists and confirm its behavior is appropriate for this validation.
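
For reference, a plausible stand-in for such a helper is a ratio-based string comparison; this is a sketch under that assumption, not the repo's actual implementation.

import difflib

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    # True when the two strings are at least `threshold` similar.
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold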

@tensorrt-cicd (Collaborator)

PR_Github #27352 [ run ] triggered by Bot. Commit: 997d8d7

@tensorrt-cicd (Collaborator)

PR_Github #27352 [ run ] completed with state SUCCESS. Commit: 997d8d7
/LLM/main/L0_MergeRequest_PR pipeline #20896 completed with status: 'FAILURE'

@SimengLiu-nv force-pushed the fix-option-n-streaming branch from 997d8d7 to 3f94aff on December 9, 2025 05:46
@SimengLiu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #27432 [ run ] triggered by Bot. Commit: 3f94aff

@tensorrt-cicd (Collaborator)

PR_Github #27432 [ run ] completed with state SUCCESS. Commit: 3f94aff
/LLM/main/L0_MergeRequest_PR pipeline #20960 completed with status: 'FAILURE'

@ixlmar (Collaborator) left a comment

LGTM

@stnie Could you provide some feedback on the bugfix, please?


# Update _last_token_ids_len after detokenization to prevent
# re-decoding the same tokens in subsequent responses when n > 1.
beam_output._last_token_ids_len = len(beam_output.token_ids)
Collaborator comment on the change above:

CC @stnie

@ixlmar requested a review from stnie December 9, 2025 10:42
@SimengLiu-nv force-pushed the fix-option-n-streaming branch from 3f94aff to fa294e6 on December 10, 2025 21:13
@SimengLiu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #27748 [ run ] triggered by Bot. Commit: fa294e6

@tensorrt-cicd (Collaborator)

PR_Github #27748 [ run ] completed with state SUCCESS. Commit: fa294e6
/LLM/main/L0_MergeRequest_PR pipeline #21175 completed with status: 'FAILURE'

@SimengLiu-nv force-pushed the fix-option-n-streaming branch from fa294e6 to bf8fc92 on December 11, 2025 17:46
@SimengLiu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #27902 [ run ] triggered by Bot. Commit: bf8fc92

@tensorrt-cicd (Collaborator)

PR_Github #27902 [ run ] completed with state SUCCESS. Commit: bf8fc92
/LLM/main/L0_MergeRequest_PR pipeline #21304 completed with status: 'FAILURE'

@SimengLiu-nv force-pushed the fix-option-n-streaming branch from bf8fc92 to 112988a on January 5, 2026 19:20
@SimengLiu-nv (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #30614 [ run ] triggered by Bot. Commit: 112988a

@tensorrt-cicd (Collaborator)

PR_Github #30614 [ run ] completed with state SUCCESS. Commit: 112988a
/LLM/main/L0_MergeRequest_PR pipeline #23623 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

…nd streaming. Added tests accordingly.

Signed-off-by: SimengLiu-nv <[email protected]>
@SimengLiu-nv force-pushed the fix-option-n-streaming branch from 112988a to 59261df on January 6, 2026 21:35
@SimengLiu-nv (Collaborator Author)

/bot run --disable-fail-fast

@SimengLiu-nv enabled auto-merge (squash) January 6, 2026 21:35
@tensorrt-cicd (Collaborator)

PR_Github #30786 [ run ] triggered by Bot. Commit: 59261df

@tensorrt-cicd (Collaborator)

PR_Github #30786 [ run ] completed with state SUCCESS. Commit: 59261df
/LLM/main/L0_MergeRequest_PR pipeline #23768 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

beam_output.token_ids) != prev_token_lens.get(
    id(beam_output), 0)
if not output_received_new_tokens:
    continue
Collaborator comment on the change above:

This could lead to wrong outputs when using beam search.
If one beam exits early, it may in the next iteration swap its tokens with another non-exited beam without adding a new token. As the number of tokens did not change, the swapped tokens will not be decoded, which could result in a wrong output.

You may adjust this line to

if not output_received_new_tokens and not self.sampling_params.use_beam_search:
    continue
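
To make the failure mode concrete, a tiny self-contained illustration with made-up token ids:

# Beams can swap contents between iterations without changing length.
prev = {"beam_0": [5, 9, 2], "beam_1": [5, 9, 7]}
curr = {"beam_0": [5, 9, 7], "beam_1": [5, 9, 2]}  # swapped, no new tokens

for name in curr:
    # A pure length check reports "no new tokens" and skips re-decoding,
    # even though the token contents (and therefore the text) changed.
    assert len(curr[name]) == len(prev[name])
    assert curr[name] != prev[name]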
