
Conversation

@JunyiXu-nv (Collaborator) commented Sep 3, 2025

Summary by CodeRabbit

  • New Features

    • Shared Harmony adapter instance with a public getter.
    • OpenAI-style streaming with delta updates and tool-call support.
    • Post-processing hooks for chat Harmony (streaming and non-streaming) with new request-derived args.
    • Enhanced usage reporting and finalization semantics; improved error handling.
  • Refactor

    • Reworked streaming into a stateful, token-driven pipeline with explicit inputs/outputs.
    • Simplified non-streaming response construction with explicit usage.
    • Server integration updated to use adapter-based postprocessing and separated streaming/non-streaming paths.

Description

This PR enables multiple postprocess workers in the chat completions API to minimize host-side latency during benchmarking.

Experiment data with benchmark_serving:

  1. test_data/...baseline-4-workers: benchmarks the v1/completions API with 4 postprocess workers.
  2. test_data/...no-post-process-workers: benchmarks the v1/chat/completions API without multiple postprocess workers.
  3. test_data/...chat-completions-4-workers: benchmarks the v1/chat/completions API with 4 postprocess workers.
[Figure: gpt-oss-20b Pareto chart; both axes are in tokens per second (TPS).]

Overall performance improves by an average of 16.98% in per-user throughput (TPS) and 18.42% in output throughput (TPS).

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
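
For example, a typical invocation combining the options documented above might be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast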

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without due care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing stale results without due care can break the top of tree.

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17536 [ run ] triggered by Bot

coderabbitai bot (Contributor) commented Sep 3, 2025

📝 Walkthrough

The Harmony adapter is refactored to a shared-instance model, replacing generator-based streaming with a stateful, token-driven pipeline and explicit inputs/outputs. The OpenAI server now routes through new post-processing hooks that call the adapter via get_harmony_adapter(). New ChatCompletionPostprocArgs and Harmony post-processors orchestrate both streaming and non-streaming paths.

Changes

Cohort / File(s) — Summary of Changes

  • Harmony adapter overhaul (tensorrt_llm/serve/harmony_adapter.py): Introduces a global adapter (serve_harmony_adapter) and getter; replaces generator-based streaming with a stateless entry function that returns OpenAI-style chunks, with per-request state held in the new HarmonyStreamState; updates the non-streaming handler; adds a usage-accounting helper; refactors tool handling; updates error handling; exports new/changed APIs.
  • OpenAI server integration (tensorrt_llm/serve/openai_server.py): Switches to get_harmony_adapter(); adds ChatCompletionPostprocArgs-based wiring; introduces create_streaming_generator/create_harmony_response with PostprocParams; updates streaming/non-streaming flows; integrates the reasoning-effort transform; extends imports/utilities.
  • Post-processing hooks (tensorrt_llm/serve/postprocess_handlers.py): Adds ChatCompletionPostprocArgs (+from_request); adds chat_harmony_post_processor and chat_harmony_streaming_post_processor delegating to adapter handlers; uses rsp.outputs, request_id, done; annotates with nvtx ranges.
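
A minimal sketch of the shared-instance pattern named in the first bullet, assuming lazy initialization (the actual code in harmony_adapter.py may differ):

```python
# Hedged sketch: only serve_harmony_adapter and get_harmony_adapter are
# taken from the summary above; the lazy-init details are assumptions.
# HarmonyAdapter is the adapter class defined in this same module.
serve_harmony_adapter = None  # process-wide shared instance


def get_harmony_adapter():
    """Return the shared HarmonyAdapter, creating it on first use."""
    global serve_harmony_adapter
    if serve_harmony_adapter is None:
        serve_harmony_adapter = HarmonyAdapter()
    return serve_harmony_adapter
```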

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant OpenAIServer as OpenAI Server
  participant LLM as LLM Engine
  participant Postproc as Post-processors
  participant Adapter as HarmonyAdapter

  Client->>OpenAIServer: ChatCompletion request
  OpenAIServer->>Postproc: ChatCompletionPostprocArgs.from_request()
  OpenAIServer->>LLM: Generate (with _postproc_params when enabled)
  alt Streaming
    LLM-->>OpenAIServer: GenerationResult chunks (outputs, _done, request_id)
    OpenAIServer->>Postproc: chat_harmony_streaming_post_processor(rsp, args)
    Postproc->>Adapter: handle_streaming_response(tools, tool_choice, outputs, model, request_id, done, num_prompt_tokens)
    Adapter-->>Postproc: OpenAI streaming chunks
    Postproc-->>OpenAIServer: chunks
    OpenAIServer-->>Client: data: chunks ... data: [DONE]
  else Non-streaming
    LLM-->>OpenAIServer: Final GenerationResult (outputs, num_prompt_tokens)
    OpenAIServer->>Postproc: chat_harmony_post_processor(rsp, args)
    Postproc->>Adapter: handle_non_streaming_response(tools, tool_choice, outputs, model, num_prompt_tokens)
    Adapter-->>Postproc: ChatCompletionResponse
    Postproc-->>OpenAIServer: response
    OpenAIServer-->>Client: ChatCompletionResponse
  end
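
Read as code, the non-streaming leg of the diagram corresponds roughly to the following hook (a sketch assembled from the message names above; the real signature in postprocess_handlers.py may differ):

```python
# Sketch of the non-streaming post-processing hook; argument names mirror
# the sequence diagram, everything else is an assumption.
def chat_harmony_post_processor(rsp, args):
    adapter = get_harmony_adapter()  # shared adapter instance
    return adapter.handle_non_streaming_response(
        tools=args.tools,
        tool_choice=args.tool_choice,
        outputs=rsp.outputs,
        model=args.model,
        num_prompt_tokens=args.num_prompt_tokens,
    )
```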

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • LinPoly
  • dongfengy
  • juney-nvidia

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 7

🧹 Nitpick comments (2)
tensorrt_llm/serve/harmony_adapter.py (1)

1656-1665: Add parameter type annotations to helper function

The _create_usage_info helper already declares its return type; its parameters should be annotated as well.

-def _create_usage_info(num_prompt_tokens, outputs) -> UsageInfo:
+def _create_usage_info(num_prompt_tokens: int, outputs: List[Any]) -> UsageInfo:
tensorrt_llm/serve/openai_server.py (1)

707-729: Extract common response creation logic

The create_harmony_response and create_streaming_generator functions duplicate logic from the original openai_chat method. Consider extracting this to a shared utility.
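
One possible shape for such a utility, assuming PostprocParams takes the post-processor and its args (field names here are guesses, not the repo's actual API):

```python
# Hypothetical refactor sketch; PostprocParams field names are assumptions.
def _build_postproc_params(request, streaming):
    args = ChatCompletionPostprocArgs.from_request(request)
    post_processor = (chat_harmony_streaming_post_processor
                      if streaming else chat_harmony_post_processor)
    return PostprocParams(post_processor=post_processor, postproc_args=args)
```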

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ae51368 and 1eadcc4.

📒 Files selected for processing (3)
  • tensorrt_llm/serve/harmony_adapter.py (5 hunks)
  • tensorrt_llm/serve/openai_server.py (4 hunks)
  • tensorrt_llm/serve/postprocess_handlers.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Filenames compiled into a target must be case-insensitively unique

Files:

  • tensorrt_llm/serve/postprocess_handlers.py
  • tensorrt_llm/serve/harmony_adapter.py
  • tensorrt_llm/serve/openai_server.py
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use spaces, not tabs; indent 4 spaces

Files:

  • tensorrt_llm/serve/postprocess_handlers.py
  • tensorrt_llm/serve/harmony_adapter.py
  • tensorrt_llm/serve/openai_server.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs (Python)
Maintain module namespace on import: prefer from package.subpackage import foo; use foo.Symbol()
Python filenames use snake_case
Python class names use PascalCase
Python functions and methods use snake_case
Python local variables use snake_case; if starting with a number concept, prefix with k (e.g., k_99th_percentile)
Python global variables use G_ prefix with UPPER_SNAKE_CASE
Python constants use UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes
Initialize all externally visible class members in __init__
For public interfaces, prefer docstrings over comments; comments should be for in-function or file-local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes and variables inline with docstrings immediately after assignment
Avoid reflection when a non-reflective approach suffices
Limit except clauses to specific exceptions where possible
When using try/except for duck-typing, keep try body minimal and move logic to else (see the illustrative sketch after this section's file list)

Files:

  • tensorrt_llm/serve/postprocess_handlers.py
  • tensorrt_llm/serve/harmony_adapter.py
  • tensorrt_llm/serve/openai_server.py
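
As referenced above, an illustrative (non-repo) example of the duck-typing guideline:

```python
# Illustrative only; not taken from the PR under review.
def to_token_ids(tokenizer, text):
    try:
        encode = tokenizer.encode  # minimal try body: attribute lookup only
    except AttributeError:
        return [ord(ch) for ch in text]  # fallback for objects without .encode
    else:
        return encode(text)  # main logic lives in else, per the guideline
```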
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/serve/postprocess_handlers.py
  • tensorrt_llm/serve/harmony_adapter.py
  • tensorrt_llm/serve/openai_server.py
🧬 Code graph analysis (3)
tensorrt_llm/serve/postprocess_handlers.py (4)
tensorrt_llm/serve/harmony_adapter.py (2)
  • handle_non_streaming_response (1571-1620)
  • handle_streaming_response (1488-1568)
tensorrt_llm/executor/result.py (3)
  • request_id (521-522)
  • GenerationResult (485-634)
  • outputs (197-212)
tensorrt_llm/serve/openai_protocol.py (6)
  • UsageInfo (69-72)
  • CompletionResponse (143-152)
  • ChatCompletionToolsParam (476-478)
  • ChatCompletionNamedToolChoiceParam (485-487)
  • ChatCompletionRequest (490-672)
  • ChatCompletionResponse (431-440)
tensorrt_llm/_utils.py (1)
  • nvtx_range_debug (872-896)
tensorrt_llm/serve/harmony_adapter.py (2)
tensorrt_llm/executor/result.py (1)
  • token_ids_diff (140-141)
tensorrt_llm/serve/openai_protocol.py (6)
  • ChatCompletionStreamResponse (461-467)
  • ChatCompletionResponseStreamChoice (452-458)
  • DeltaMessage (443-449)
  • ChatCompletionResponseChoice (417-428)
  • ChatMessage (358-363)
  • UsageInfo (69-72)
tensorrt_llm/serve/openai_server.py (4)
tensorrt_llm/llmapi/llm.py (4)
  • RequestOutput (46-86)
  • tokenizer (691-695)
  • tokenizer (698-699)
  • generate_async (317-450)
tensorrt_llm/executor/postproc_worker.py (1)
  • PostprocParams (37-39)
tensorrt_llm/executor/result.py (4)
  • aresult (576-584)
  • outputs (197-212)
  • request_id (521-522)
  • prompt_token_ids (525-526)
tensorrt_llm/serve/postprocess_handlers.py (6)
  • ChatCompletionPostprocArgs (366-379)
  • from_request (56-67)
  • from_request (277-284)
  • from_request (374-379)
  • chat_harmony_streaming_post_processor (393-403)
  • chat_harmony_post_processor (382-390)
🪛 Ruff (0.12.2)
tensorrt_llm/serve/openai_server.py

712-714: 1 blank line required between summary line and description

(D205)
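
For reference, D205 wants the docstring shaped like this (illustrative function name and body):

```python
# Illustrative only: the docstring layout that satisfies Ruff D205.
def create_streaming_generator(promise, params):
    """Create a streaming generator for a chat response.

    Ruff D205 requires exactly one blank line between the one-line
    summary above and this longer description.
    """
    ...
```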

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tensorrt_llm/serve/harmony_adapter.py (2)

1487-1492: Module-level adapter instance looks good

The singleton pattern with serve_harmony_adapter and get_harmony_adapter() properly centralizes the adapter instance, supporting the refactored architecture for multiple postprocess workers.


1519-1519: Use existing function for usage info creation

Line 1519 calls _create_usage_info, which is not defined until line 1656; line 1608 calls the same function in the non-streaming path. This works at runtime, since module-level names resolve at call time, but consider moving the definition earlier for readability.

tensorrt_llm/serve/postprocess_handlers.py (1)

357-372: Well-structured post-processing arguments class

The ChatCompletionPostprocArgs dataclass properly encapsulates the required parameters for post-processing, with a clean factory method following the existing pattern.
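
Its shape is roughly as follows (a sketch built from names mentioned in this review; the base class, defaults, and typing details are assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional

# Sketch only: PostprocArgs and the ChatCompletion* types are from the
# tensorrt_llm.serve modules; defaults here are assumptions.
@dataclass
class ChatCompletionPostprocArgs(PostprocArgs):
    model: str = ""
    tools: Optional[List[ChatCompletionToolsParam]] = None
    tool_choice: Optional[ChatCompletionNamedToolChoiceParam] = None

    @classmethod
    def from_request(cls, request: ChatCompletionRequest):
        """Factory mirroring the existing from_request pattern."""
        return cls(model=request.model,
                   tools=request.tools,
                   tool_choice=request.tool_choice)
```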

tensorrt_llm/serve/openai_server.py (4)

61-62: Clean import refactoring

Good consolidation of the harmony adapter imports under a single import statement.


734-734: Singleton adapter access pattern is appropriate

Using get_harmony_adapter() ensures consistent access to the global adapter instance, which is appropriate for the multi-worker architecture.


769-774: Proper integration of new post-processing handlers

The setup correctly wires the new harmony-specific post-processors based on streaming mode, maintaining consistency with the existing pattern.


786-788: Conditional prompt token assignment is correct

Only setting num_prompt_tokens when postproc workers are disabled ensures the value is available when needed for local post-processing.

@tensorrt-cicd (Collaborator)

PR_Github #17536 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13182 completed with status: 'FAILURE'

Signed-off-by: Junyi Xu <[email protected]>
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17543 [ run ] triggered by Bot

Signed-off-by: Junyi Xu <[email protected]>
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17546 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17543 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #17546 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13191 completed with status: 'FAILURE'

Signed-off-by: Junyi Xu <[email protected]>
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17590 [ run ] triggered by Bot

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17602 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17590 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #17602 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13235 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #17859 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #17859 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #13371 completed with status: 'SUCCESS'
