
Contributor

@astralord astralord commented Oct 6, 2025

Hi team!

Purpose

We've noticed that the recent PR doesn't fully fix the gpt-oss + streaming + speculative decoding issue: for example, generated messages can still end abruptly. This happens because, within a single decoding step, multiple tokens can belong to different channels (e.g. <final><analysis>None). This PR handles that case.
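The core idea of the fix can be illustrated with a small sketch (hypothetical names, not the actual vLLM code): instead of reading the parser's channel/recipient once per decoding step, a (channel, recipient, delta) snapshot is recorded after every token, and consecutive snapshots with the same channel and recipient are merged, so a chunk that switches channels mid-step produces one group per channel.

```python
def group_token_states(token_states):
    """Merge consecutive (channel, recipient, text) entries that share the
    same channel and recipient, so each group maps to one delta message."""
    groups = []
    for channel, recipient, text in token_states:
        if groups and groups[-1][0] == channel and groups[-1][1] == recipient:
            # Same channel/recipient as the previous token: extend its text.
            groups[-1][2] += text
        else:
            # Channel or recipient switched mid-step: start a new group.
            groups.append([channel, recipient, text])
    return groups
```

With this grouping, a speculative chunk spanning an analysis-to-final switch yields two groups instead of silently dropping the earlier channel's text.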

Test Plan

Test Result

@mergify mergify bot added frontend gpt-oss Related to GPT-OSS models labels Oct 6, 2025

mergify bot commented Oct 6, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @astralord.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses a bug in handling multiple channels for gpt-oss with speculative decoding, particularly in streaming mode. The changes introduce a more robust mechanism by tracking the state for each token, grouping them by channel and recipient, and then constructing the delta messages. This ensures that channel switches within a single decoding step are handled correctly. The related logging improvements are also a good addition, providing more comprehensive output. I have one suggestion to improve code readability and reduce the risk of future bugs by removing a magic number.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Comment on lines 775 to 777
if group_text:
    tool_messages.append(DeltaToolCall(
        index=base_index - 1,  # Use the index of the current tool call


P1 Badge Avoid off-by-one when streaming ongoing tool call arguments

Within the Harmony streaming branch, base_index is derived from harmony_parser.messages to count previously completed tool calls, the same value the prior implementation used directly for both opening a call and streaming additional arguments. The new logic subtracts one when emitting argument deltas (index=base_index - 1). When a tool call spans multiple decoding iterations, base_index remains 0 (no completed calls yet), so subsequent argument chunks are sent with index=-1, which clients cannot associate with the active tool call. This breaks streaming for any tool call whose arguments arrive over multiple chunks.
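The off-by-one can be reproduced with a toy sketch (hypothetical names): base_index counts tool calls completed before this step, and the in-flight call was opened at exactly that index, so an argument continuation must reuse base_index as-is rather than subtract one.

```python
def argument_delta_index(completed_calls, subtract_one):
    """Index attached to a streamed argument delta for the in-flight call.

    completed_calls plays the role of base_index: the number of tool calls
    fully parsed before this chunk. The in-flight call was opened at that
    same index, so subtracting one orphans the delta when no call has
    completed yet.
    """
    return completed_calls - 1 if subtract_one else completed_calls

# A tool call spanning multiple chunks, with no previously completed calls:
opened_at = 0                                        # index used at call open
buggy = argument_delta_index(0, subtract_one=True)   # -1: clients can't match it
fixed = argument_delta_index(0, subtract_one=False)  # 0: matches opened_at
```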


@astralord astralord force-pushed the fix-gpt-oss-with-speculative-decoding-handle-multiple-channels branch 2 times, most recently from a953d9c to ede4584 Compare October 6, 2025 11:37
@mergify mergify bot removed the needs-rebase label Oct 6, 2025
@astralord astralord force-pushed the fix-gpt-oss-with-speculative-decoding-handle-multiple-channels branch from 704867c to 3c1bf55 Compare October 6, 2025 11:40
@astralord
Contributor Author

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 793 to 799
groups = []
for channel, recipient, text in token_states:
    if not text:
        continue
    if groups and groups[-1]['channel'] == channel and groups[-1]['recipient'] == recipient:
        groups[-1]['text'] += text
    else:


P1 Badge Emit tool call deltas even when first tokens carry no text

The new grouping logic skips any token whose last_content_delta is empty before building groups. When a speculative chunk switches channels (e.g., starts a new commentary function call) the parser often changes current_channel/current_recipient using structural tokens that have no textual delta. Previously, the code still emitted a DeltaToolCall whenever prev_recipient != cur_recipient even if delta_text was empty, so clients immediately saw the tool name. With the if not text: continue filter, those structural tokens are ignored and no DeltaToolCall is streamed until arguments appear; calls with empty arguments are never surfaced at all. This regression breaks streaming compatibility for Harmony tool calls that begin with metadata-only tokens.
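One way to address this, sketched with hypothetical shapes rather than the PR's actual types: decide whether to open a tool call from the recipient change, not from the presence of text, so a metadata-only structural token still surfaces the tool name immediately.

```python
def deltas_for_groups(groups, prev_recipient):
    """groups: list of (channel, recipient, text) tuples in emission order.

    Returns plain dicts standing in for DeltaToolCall / DeltaMessage parts.
    """
    deltas = []
    for channel, recipient, text in groups:
        if channel == "commentary" and recipient is not None:
            if recipient != prev_recipient:
                # New call: emit the name even when no arguments arrived yet,
                # so metadata-only tokens are not silently dropped.
                deltas.append({"type": "tool_call_open", "name": recipient})
            if text:
                deltas.append({"type": "tool_call_args", "arguments": text})
            prev_recipient = recipient
        elif text:
            deltas.append({"type": "text", "channel": channel, "text": text})
    return deltas
```

Filtering empty text only at the emission stage, rather than before grouping, keeps calls with empty arguments visible to clients.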


@astralord astralord force-pushed the fix-gpt-oss-with-speculative-decoding-handle-multiple-channels branch from 3c1bf55 to e1f14dd Compare October 6, 2025 11:55
@astralord
Contributor Author

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 821 to 823
# next_tool_index tracks the index for the next NEW tool call
next_tool_index = base_index


P1 Badge Increment tool index when new call follows ongoing call

next_tool_index is initialised from base_index and only increased when prev_recipient != group_recipient. If this chunk first streams more arguments for an already-active tool call and then starts a second tool call, the continuation leaves next_tool_index unchanged, so the new call is emitted with the same index as the ongoing call and its arguments are also written under that index. The OpenAI streaming API expects tool call indexes to be unique and monotonically increasing, so this mislabels calls and causes clients to merge two separate calls. Consider advancing next_tool_index when a continuation is emitted or computing base_index to include the in-flight call.
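One way to realise this suggestion, sketched with hypothetical helper names: bump the next-call index past the in-flight call whenever anything is emitted for it, so a second call started later in the same chunk always gets a strictly higher index.

```python
def assign_tool_indices(recipients, base_index, inflight_recipient):
    """One streaming index per tool-call group, in emission order.

    recipients: the recipient of each group in this chunk.
    base_index: index of the in-flight call, if any (completed calls so far).
    inflight_recipient: recipient of the call already open from a prior
    chunk, or None.
    """
    indices = []
    current_index = base_index
    next_tool_index = base_index
    prev = inflight_recipient
    for recipient in recipients:
        if recipient != prev:
            # A new call opens at the next free index.
            current_index = next_tool_index
            prev = recipient
        # Continuations reuse current_index; either way, the next NEW call
        # must land strictly after whatever we just emitted.
        indices.append(current_index)
        next_tool_index = current_index + 1
    return indices
```

With this rule, a chunk that first continues the in-flight call and then opens a second call yields distinct, monotonically increasing indices.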


@astralord astralord force-pushed the fix-gpt-oss-with-speculative-decoding-handle-multiple-channels branch from e4f6360 to 3ad1d7b Compare October 6, 2025 12:19
@astralord
Contributor Author

@codex review


Codex Review: Didn't find any major issues. You're on a roll.


@astralord
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request provides a much-needed fix for handling multiple channels in gpt-oss streaming with speculative decoding. The previous implementation had a flaw where it only considered the state after the last token in a chunk, which could lead to data loss or incorrect message construction if the channel or recipient changed within the chunk.

The new approach is robust and correctly handles this complex scenario. Key improvements include:

  • Tracking the state (channel, recipient, delta) for each individual token.
  • Grouping consecutive tokens with the same state for efficient processing.
  • Refactoring the logic to build a single, comprehensive DeltaMessage that can contain content, reasoning, and tool calls from a single chunk.
  • Improving the indexing logic for tool calls, correctly handling calls that span across multiple streamed chunks.
  • Enhancing logging to be more comprehensive.

The changes significantly increase the correctness and reliability of streaming for gpt-oss models. The implementation is well-structured, and the added complexity is justified by the problem it solves. I don't see any issues with the proposed changes.
