
fix(core): consolidate StreamChunkCallback, remove dual-extractor CAUSING TTS garbling #6690

Open
odilitime wants to merge 7 commits into develop from odi-v3-streamfix

Conversation


@odilitime odilitime commented Mar 27, 2026

Eight inline onStreamChunk definitions across types/runtime.ts, types/model.ts, types/message-service.ts, streaming-context.ts, and runtime.ts are replaced by a single canonical StreamChunkCallback type alias in types/components.ts.

The callback gains accumulated?: string — the full extracted field text from ValidationStreamExtractor. WHY: handleMessage previously ran two independent XML extractors (ValidationStreamExtractor via dynamicPromptExecFromState + ResponseStreamExtractor via runWithStreamingContext). Both received raw LLM tokens in useModel and emitted independently, producing overlapping deltas that garbled TTS output. Providing accumulated text from the single remaining extractor eliminates the reassembly problem.

handleMessage no longer creates a ResponseStreamExtractor or wraps processMessage in runWithStreamingContext. Voice first-sentence detection wraps the caller's onStreamChunk callback and uses accumulated when available, falling back to a local buffer.
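The accumulated-or-fallback behavior described above can be sketched as a small helper. This is illustrative only; `createStreamTracker` is a hypothetical name, not the actual elizaOS implementation:

```typescript
// Returns a function that yields the full text seen so far, preferring the
// authoritative `accumulated` value from the extractor and falling back to
// local delta accumulation when it is absent (e.g. raw-token streams).
function createStreamTracker(): (chunk: string, accumulated?: string) => string {
  let fallback = "";
  return (chunk, accumulated) => {
    if (accumulated !== undefined) {
      // Seed the fallback too, so a later raw-token source in the same
      // stream starts from an accurate view of the text.
      fallback = accumulated;
      return accumulated;
    }
    fallback += chunk;
    return fallback;
  };
}
```

First-sentence detection can then operate on the returned text without ever re-assembling overlapping deltas.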

Docs updated: streaming-responses guide, types-reference, messaging.mdx extractor table.

Made-with: Cursor

Relates to

Risks

Background

What does this PR do?

What kind of change is this?

Documentation changes needed?

Testing

Where should a reviewer start?

Detailed testing steps


Note

Medium Risk
Touches core streaming and message handling flow and widens a public callback signature, which could break custom integrations and affect streaming/TTS behavior if any downstream expects the old (chunk, messageId) contract.

Overview
Fixes garbled streaming/TTS by removing DefaultMessageService’s extra ResponseStreamExtractor/runWithStreamingContext path so only the ValidationStreamExtractor pipeline emits chunks.

Consolidates all streaming chunk callbacks into a single exported StreamChunkCallback type and extends it to (chunk, messageId?, accumulated?), with ValidationStreamExtractor now providing authoritative per-field accumulated text and raw token streams explicitly passing undefined.

Updates action streaming to maintain its own filtered accumulation, propagates the new signature through runtime/context/model/message-service types and tests, and refreshes docs to describe the new callback contract and architecture change.
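The "own filtered accumulation" idea can be sketched as follows: when a stream transforms or filters deltas, forwarding the upstream accumulated value would disagree with the filtered chunks, so the filtered total is tracked locally. `createFilteredForwarder` is a hypothetical name for illustration, not the actual runtime.ts code:

```typescript
// Wraps a downstream consumer so that both the chunk and the accumulated
// text it receives have passed through the same filter, keeping the pair
// semantically consistent.
function createFilteredForwarder(
  filter: (delta: string) => string,
  downstream: (chunk: string, accumulated: string) => void,
): (delta: string) => void {
  let filteredAccumulated = "";
  return (delta) => {
    const filtered = filter(delta);
    if (filtered === "") return; // drop deltas the filter removes entirely
    filteredAccumulated += filtered;
    downstream(filtered, filteredAccumulated);
  };
}
```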

Written by Cursor Bugbot for commit aa5621e. This will update automatically on new commits.

Summary by CodeRabbit

Release Notes

  • New Features

    • Streaming callbacks now receive accumulated text when available, enabling better handling of complete streamed responses.
    • Callbacks now support both synchronous and asynchronous execution.
  • Bug Fixes

    • Fixed garbled TTS output caused by extractor collision in the message pipeline.
  • Documentation

    • Updated streaming callback documentation with expanded parameter details and architectural clarifications.

Greptile Summary

This PR eliminates a dual-extractor race condition that caused TTS output garbling. Previously, DefaultMessageService.handleMessage wrapped processMessage in runWithStreamingContext with a ResponseStreamExtractor, while dynamicPromptExecFromState inside processMessage independently ran a ValidationStreamExtractor. Both extractors received the same raw LLM tokens from useModel (paramsChunk + ctxChunk), so consumers received two independent, overlapping delta streams — producing unintelligible TTS.

The fix removes the ResponseStreamExtractor + runWithStreamingContext layer entirely from handleMessage. A single canonical pipeline remains: VSE → MarkableExtractor → wrappedOnStreamChunk. Voice first-sentence detection moves into wrappedOnStreamChunk, which uses the new accumulated parameter (the authoritative full-field text from VSE) instead of re-assembling from deltas. Consolidating StreamChunkCallback into one type across eight previously-inconsistent call sites is a clean, forward-looking improvement.

Key changes:

  • StreamChunkCallback in types/components.ts gains accumulated?: string and widens return to void | Promise<void>, replacing 8 inline duplicate signatures
  • ValidationStreamExtractor.emitFieldContent now passes content as accumulated in its onChunk call
  • handleMessage no longer creates ResponseStreamExtractor or calls runWithStreamingContext; voice detection wraps the user callback directly
  • createStreamingContext and actionStreamingContext in runtime.ts forward accumulated unchanged through their respective filters
  • Raw-token paths in useModel correctly pass accumulated=undefined
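The consolidated contract can be illustrated with a toy extractor that emits each delta alongside the authoritative accumulated field text. This is a sketch only; `FieldAccumulator` is not the real ValidationStreamExtractor:

```typescript
// Canonical callback shape described in this PR.
type StreamChunkCallback = (
  chunk: string,
  messageId?: string,
  accumulated?: string,
) => void | Promise<void>;

// Minimal extractor stand-in: grows a field buffer and passes the full
// accumulated text with every delta, so consumers never re-assemble deltas.
class FieldAccumulator {
  private content = "";

  constructor(
    private readonly onChunk: StreamChunkCallback,
    private readonly messageId?: string,
  ) {}

  push(delta: string): void {
    this.content += delta;
    void this.onChunk(delta, this.messageId, this.content);
  }
}
```

Raw-token paths that have no extractor would simply invoke the callback with `accumulated` left `undefined`.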

Minor observations:

  • createStreamingContext forwards accumulated from the upstream source without transforming it, which is only semantically correct when the inner extractor is a passthrough. Currently always MarkableExtractor (safe), but the assumption is undocumented.
  • The streamTextFallback fallback buffer is never seeded from accumulated, so if a stream transitions from a VSE-based source to a raw-token source within one handleMessage call, the fallback path starts from empty. In practice firstSentenceSent is already true by that point, so voice won't re-trigger, but the invariant is fragile.

Confidence Score: 4/5

Safe to merge; correctly eliminates the dual-extractor TTS garbling bug with a clean, well-reasoned single-pipeline architecture

The root cause is correctly identified and fixed — removing runWithStreamingContext from handleMessage eliminates the second extractor path that caused overlapping deltas. The new wrappedOnStreamChunk pattern is equivalent to the old voice detection logic but operates on VSE-provided accumulated text instead of re-assembling from deltas. Type consolidation is a net positive. Two P2 style issues remain: the implicit passthrough assumption in createStreamingContext and the unsynchronized streamTextFallback buffer — neither affects correctness in the current code paths.

packages/typescript/src/services/message.ts (wrappedOnStreamChunk fallback path) and packages/typescript/src/utils/streaming.ts (createStreamingContext accumulated forwarding)

Important Files Changed

  • packages/typescript/src/services/message.ts — Core fix: removes the dual-extractor pipeline by eliminating ResponseStreamExtractor + runWithStreamingContext; voice detection moved into the wrappedOnStreamChunk wrapper with correct accumulated/fallback handling
  • packages/typescript/src/utils/streaming.ts — ValidationStreamExtractorConfig.onChunk gains an accumulated parameter; createStreamingContext updated to forward accumulated through the MarkableExtractor passthrough; emitFieldContent correctly passes content as accumulated
  • packages/typescript/src/types/components.ts — StreamChunkCallback consolidated to its canonical form, with an accumulated parameter and the return type widened to void | Promise<void>
  • packages/typescript/src/runtime.ts — dynamicPromptExecFromState bridges VSE accumulated to StreamChunkCallback; actionStreamingContext correctly passes accumulated=undefined from the raw-token context; inline type replaced with StreamingContext & { onStreamEnd }

Sequence Diagram

sequenceDiagram
    participant U as User / Client
    participant HM as handleMessage
    participant PM as processMessage
    participant DPE as dynamicPromptExecFromState
    participant VSE as ValidationStreamExtractor
    participant ME as MarkableExtractor
    participant WC as wrappedOnStreamChunk

    U->>HM: handleMessage(options.onStreamChunk)
    Note over HM: Creates wrappedOnStreamChunk<br/>(voice detection wrapper)
    HM->>PM: processMessage(opts.onStreamChunk=wrapped)
    Note over PM: createStreamingContext(MarkableExtractor,<br/>wrappedOnStreamChunk, responseId)
    PM->>DPE: dynamicPromptExecFromState(onStreamChunk=streamingCtx)
    Note over DPE: Creates ValidationStreamExtractor<br/>modelParams.onStreamChunk = chunk→VSE.push(chunk)
    DPE->>VSE: raw LLM token (paramsChunk)
    VSE->>ME: onChunk(delta, field, accumulated)
    ME->>WC: passthrough delta + accumulated
    WC->>WC: voice first-sentence detection<br/>(uses accumulated if available)
    WC-->>U: userOnStreamChunk(delta, msgId, accumulated)

    Note over HM: OLD (removed): runWithStreamingContext(RSE)<br/>caused ctxChunk to ALSO fire on same token → garbling

Reviews (1): Last reviewed commit: "fix(core): consolidate StreamChunkCallba..."

Greptile also left 2 inline comments on this PR.

Copilot AI review requested due to automatic review settings March 27, 2026 08:26

coderabbitai bot commented Mar 27, 2026

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check — ❓ Inconclusive: PR description lacks critical required template sections, including an explicit risk level, testing steps, and deployment notes. Resolution: fill out 1) the Risks section with an explicit Low/Medium/Large level; 2) the Testing section with a reviewer start point and detailed steps; 3) deployment/database changes if applicable. Use the template provided.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title clearly describes the main change: consolidating StreamChunkCallback and removing dual extractors that caused TTS garbling.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient (required threshold: 80.00%).



claude bot commented Mar 27, 2026

Claude finished @odilitime's task in 2m 7s


PR Review: StreamChunkCallback Consolidation

  • Read and analyze changed files
  • Check for security issues
  • Verify test coverage
  • Check cross-language consistency
  • Provide detailed feedback

Summary

This PR consolidates eight inline onStreamChunk definitions into a single canonical StreamChunkCallback type and fixes a TTS garbling bug caused by dual XML extractors. The architecture change is sound - removing ResponseStreamExtractor from the message pipeline and using accumulated from ValidationStreamExtractor is the right approach.


Issues Found

❌ CRITICAL: Python/Rust StreamChunkCallback signatures not updated

Locations:

  • packages/python/elizaos/types/components.py:31
  • packages/python/elizaos/types/model.py:107
  • packages/rust/src/types/components.rs:244-245

The TypeScript signature now includes accumulated?: string as the third parameter:

(chunk: string, messageId?: string, accumulated?: string) => void | Promise<void>

But Python and Rust still have the old 2-parameter signature:

# Python (both locations)
StreamChunkCallback = Callable[[str, str | None], Awaitable[None]]

# Rust  
Box<dyn Fn(&str, Option<&str>) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync>

Fix: Update all Python definitions to Callable[[str, str | None, str | None], Awaitable[None] | None] and Rust to include Option<&str> for accumulated.



⚠️ IMPORTANT: Missing tests for accumulated parameter in ValidationStreamExtractor

Location: packages/typescript/src/utils/streaming.ts:610, packages/typescript/src/__tests__/utils/streaming.test.ts

The onChunk callback now receives accumulated as the third parameter, but there are no tests verifying:

  1. accumulated contains the full field text (not just the delta)
  2. accumulated is passed correctly through the callback chain
  3. Voice first-sentence detection works with accumulated

The test change in runtime.test.ts:777-785 only updates the type signature, not behavior.

Fix: Add tests for ValidationStreamExtractor that verify accumulated matches the full extracted content after each chunk.



💡 SUGGESTION: Consider adding JSDoc @since tag

Location: packages/typescript/src/types/components.ts:376-380

The accumulated parameter is a new addition. Consider adding @since to help consumers understand API versioning:

/**
 * @param accumulated - Full extracted field text so far.
 * @since 2.0.0-alpha.109
 */

What's Good

  1. Type consolidation - Single source of truth for StreamChunkCallback eliminates drift
  2. Architecture fix - Removing ResponseStreamExtractor from handleMessage correctly solves the dual-extractor collision
  3. Backward compatible - accumulated is optional, so existing callbacks work unchanged
  4. Well documented - Comments explain WHY the changes were made, not just what changed
  5. Docs updated - streaming-responses.mdx and messaging.mdx properly reflect the new architecture

Verdict

The core TypeScript changes are solid and the TTS fix is architecturally correct. However, the cross-language type inconsistency is a breaking change for Python/Rust consumers that needs addressing before merge.


Copilot AI left a comment


Pull request overview

This PR consolidates streaming chunk callback typings into a single canonical StreamChunkCallback and updates the streaming pipeline to avoid dual extractor emissions that could garble TTS output.

Changes:

  • Introduces StreamChunkCallback (with optional accumulated) and replaces multiple inline callback signatures across runtime/model/message-service/streaming-context.
  • Extends ValidationStreamExtractor to surface accumulated text alongside deltas and threads it through streaming call sites.
  • Removes DefaultMessageService’s secondary ResponseStreamExtractor + runWithStreamingContext wrapper and instead performs first-sentence voice detection by wrapping the caller’s onStreamChunk.

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.

  • packages/typescript/src/utils/streaming.ts — Adds accumulated to VSE chunk emissions; updates createStreamingContext to use the canonical callback type.
  • packages/typescript/src/types/runtime.ts — Replaces inline onStreamChunk types with StreamChunkCallback.
  • packages/typescript/src/types/model.ts — Replaces the inline onStreamChunk type with StreamChunkCallback.
  • packages/typescript/src/types/message-service.ts — Replaces the inline onStreamChunk type with StreamChunkCallback.
  • packages/typescript/src/types/components.ts — Defines the canonical StreamChunkCallback and documents rationale/params.
  • packages/typescript/src/streaming-context.ts — Updates StreamingContext.onStreamChunk to StreamChunkCallback.
  • packages/typescript/src/services/message.ts — Removes the dual-extractor streaming path; wraps onStreamChunk for voice detection using accumulated when present.
  • packages/typescript/src/runtime.ts — Threads StreamChunkCallback through streaming paths; forwards accumulated from VSE; updates the raw token loop call signature.
  • packages/typescript/src/__tests__/runtime.test.ts — Updates the streaming callback signature used in a test type definition.
  • packages/docs/runtime/types-reference.mdx — Updates docs to reference StreamChunkCallback.
  • packages/docs/runtime/messaging.mdx — Documents the extractor architecture change and removal of default ResponseStreamExtractor usage.
  • packages/docs/guides/streaming-responses.mdx — Adds documentation for StreamChunkCallback, including accumulated semantics.


onChunk: (chunk) => {
options.onStreamChunk?.(chunk, streamMessageId);
onChunk: (chunk, _field, accumulated) => {
options.onStreamChunk?.(chunk, streamMessageId, accumulated);

Copilot AI Mar 27, 2026


Forwarding options.onStreamChunk from the extractor is not awaited or error-handled. Since StreamChunkCallback can return a Promise, a rejection here will become an unhandled promise rejection (and a synchronous throw will bubble out of the extractor). Consider wrapping the call in Promise.resolve(...).catch(...) (and/or signaling the extractor error) to prevent process-level unhandled rejection behavior during streaming.

Suggested change
options.onStreamChunk?.(chunk, streamMessageId, accumulated);
if (!options.onStreamChunk) return;
Promise.resolve(
options.onStreamChunk(chunk, streamMessageId, accumulated),
).catch((err) => {
// Prevent unhandled promise rejections from user callbacks
console.error("Error in onStreamChunk callback:", err);
});

Collaborator Author


Dismissed: The code snippet does not include the implementation at line 4084 where options.onStreamChunk is called, making it impossible to verify if the fix was applied

Comment on lines 175 to 310
@@ -190,7 +306,7 @@ export class DefaultMessageService implements IMessageService {
String(runtime.getSetting("MAX_MULTISTEP_ITERATIONS") ?? "6"),
10,
),
onStreamChunk: options?.onStreamChunk,
onStreamChunk: wrappedOnStreamChunk,
shouldRespondModel: resolvedShouldRespondModel,

Copilot AI Mar 27, 2026


The new streaming/voice wrapper behavior (using the accumulated parameter when available and falling back to local buffering) is not covered by tests. Given the repository has DefaultMessageService tests, add cases that assert: (1) accumulated drives first-sentence detection without re-accumulating deltas, (2) the fallback path still works for raw-token streams, and (3) voice generation is triggered at most once per message.

Collaborator Author


Could not auto-fix (wrong file or repeated failures); manual review recommended.


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Comment on lines +1016 to 1026
onStreamChunk: async (
chunk: string,
msgId?: string,
accumulated?: string,
) => {
if (extractor.done) return;
const textToStream = extractor.push(chunk);
if (textToStream) {
retryState.appendText(textToStream);
await onStreamChunk(textToStream, msgId);
await onStreamChunk(textToStream, msgId, accumulated);
}


P2 Forwarded accumulated may be inconsistent with textToStream for non-passthrough extractors

accumulated comes from the upstream caller (e.g., ValidationStreamExtractor's full field text) but textToStream goes through extractor.push(chunk), which could transform or filter the chunk. Currently MarkableExtractor is a pure passthrough so textToStream === chunk and the values are consistent, but the function's contract isn't enforced.

If a future caller passes a filtering extractor (e.g., one that strips punctuation), consumers would receive chunk = filteredDelta while accumulated = unfiltered full text, which is semantically contradictory. Consider adding a doc comment that makes the passthrough assumption explicit:

/**
 * NOTE: `accumulated` from the upstream source is forwarded unchanged.
 * This is only semantically correct when `extractor` is a passthrough
 * (i.e., extractor.push(chunk) === chunk). MarkableExtractor satisfies
 * this invariant; other extractors may not.
 */
onStreamChunk: async (
    chunk: string,
    msgId?: string,
    accumulated?: string,
) => {

Collaborator Author


Fixed in aa5621e.

Comment on lines +194 to +207
let firstSentenceSent = false;
let firstSentenceText = "";
let streamTextFallback = "";
const userOnStreamChunk = options?.onStreamChunk;
const wrappedOnStreamChunk: StreamChunkCallback | undefined =
userOnStreamChunk
? async (chunk, messageId, accumulated) => {
let streamText: string;
if (accumulated !== undefined) {
streamText = accumulated;
} else {
streamTextFallback += chunk;
streamText = streamTextFallback;
}


P2 streamTextFallback is never seeded from the accumulated path

streamTextFallback only grows when accumulated === undefined. If a stream starts with the accumulated path (VSE-based) and later emits chunks without accumulated (e.g., a raw-token action stream in the same handleMessage call), the fallback buffer starts from empty and the first-sentence detection would operate on an incomplete view of the text.

In practice this is safe today because firstSentenceSent is already true by the time action streams begin (the VSE-based main message stream fires first), preventing re-triggering. But the invariant is fragile — if the main message stream finishes without sending a first sentence (e.g., very short reply), the fallback buffer for the action stream would have no knowledge of what was already emitted via accumulated.

Consider documenting this assumption explicitly, or keeping the fallback in sync:

// If we have accumulated text, also sync streamTextFallback so the
// fallback path has accurate state if the stream source later changes.
if (accumulated !== undefined) {
    streamTextFallback = accumulated;
    streamText = accumulated;
} else {
    streamTextFallback += chunk;
    streamText = streamTextFallback;
}

Collaborator Author


Fixed in aa5621e.

@odilitime
Collaborator Author

@coderabbitai review


coderabbitai bot commented Mar 27, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/src/runtime.ts`:
- Around line 1787-1789: The file uses the StreamingContext type in the
declaration of the variable actionStreamingContext but doesn't import it; add an
explicit import of StreamingContext from "./streaming-context" at the top of the
module so the type annotation (used in actionStreamingContext: (StreamingContext
& { onStreamEnd: () => void }) | undefined) resolves correctly—locate the
existing imports in runtime.ts and append or include StreamingContext in that
import list.
- Around line 4078-4085: The onChunk handler passed to ValidationStreamExtractor
currently fire-and-forgets options.onStreamChunk, losing backpressure and any
rejection; update the onChunk callback declared in the new
ValidationStreamExtractor call so it returns the result of options.onStreamChunk
(i.e., change onChunk: (chunk, _field, accumulated) => {
options.onStreamChunk?.(chunk, streamMessageId, accumulated); } to return
options.onStreamChunk?.(...) ), thereby propagating a Promise<void> when the
stream callback is async and preserving backpressure and error propagation for
ValidationStreamExtractor/StreamChunkCallback.

In `@packages/typescript/src/services/message.ts`:
- Around line 200-210: The current incremental handler (the async (chunk,
messageId, accumulated) => { ... }) uses streamTextFallback when accumulated is
missing, which can feed raw markup into
hasFirstSentence()/extractFirstSentence() and TTS; change it so that you only
run sentence detection or trigger first-sentence TTS when accumulated is present
(i.e., require accumulated before calling
hasFirstSentence/extractFirstSentence), or else sanitize the raw chunk before
using it (preferably by reusing the streaming sanitization used elsewhere) —
update the logic around streamTextFallback, firstSentenceSent, hasFirstSentence,
and extractFirstSentence to avoid using unsanitized chunks for preview audio.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7c02f500-63ac-4b50-9cc0-e74bebb1020e

📥 Commits

Reviewing files that changed from the base of the PR and between 376a098 and c3c0422.

📒 Files selected for processing (12)
  • packages/docs/guides/streaming-responses.mdx
  • packages/docs/runtime/messaging.mdx
  • packages/docs/runtime/types-reference.mdx
  • packages/typescript/src/__tests__/runtime.test.ts
  • packages/typescript/src/runtime.ts
  • packages/typescript/src/services/message.ts
  • packages/typescript/src/streaming-context.ts
  • packages/typescript/src/types/components.ts
  • packages/typescript/src/types/message-service.ts
  • packages/typescript/src/types/model.ts
  • packages/typescript/src/types/runtime.ts
  • packages/typescript/src/utils/streaming.ts


claude bot commented Mar 27, 2026

PR Review: StreamChunkCallback Consolidation

  • Read and analyze changed files
  • Check for security issues
  • Verify test coverage
  • Check cross-language consistency (Python/Rust)
  • Review fixes from recent commits (709c8d0 to 851edbd)
  • Provide detailed feedback



claude bot commented Mar 27, 2026

Claude finished @odilitime's task in 1m 38s


PR Review: StreamChunkCallback Consolidation

  • Read and analyze changed files
  • Check recent commits (709c8d0 to 851edbd) for fixes
  • Verify cross-language consistency (Python/Rust)
  • Check test coverage
  • Review for security issues
  • Provide detailed feedback

Summary

This PR correctly fixes the dual-extractor TTS garbling bug by removing ResponseStreamExtractor from handleMessage and consolidating to a single ValidationStreamExtractor pipeline. The architecture change is sound.

Recent commits (709c8d0 to 851edbd) addressed several review comments:

  • onChunk now returns the promise (backpressure fix)
  • streamTextFallback is synced from accumulated
  • First-sentence TTS requires accumulated to be present (prevents raw markup garbling)
  • Action streaming tracks its own filteredAccumulated to avoid upstream/downstream mismatch

Issues

❌ CRITICAL: Python StreamChunkCallback signatures inconsistent and missing accumulated

Locations:

  • packages/python/elizaos/types/components.py:31 - Callable[[str, str | None], Awaitable[None]]
  • packages/python/elizaos/types/model.py:107 - Callable[[str, str | None], Awaitable[None] | None]
  • packages/python/elizaos/services/message_service.py:21 - Callable[[str], Coroutine[Any, Any, None]] (only 1 param!)

TypeScript signature: (chunk: string, messageId?: string, accumulated?: string) => void | Promise<void>

Fix: Update all Python definitions to Callable[[str, str | None, str | None], Awaitable[None] | None]


❌ CRITICAL: Rust StreamChunkCallback missing accumulated parameter

Location: packages/rust/src/types/components.rs:244-245

Current: Box<dyn Fn(&str, Option<&str>) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync>

Fix: Add third Option<&str> parameter for accumulated


⚠️ IMPORTANT: Missing tests for accumulated parameter behavior

The test at runtime.test.ts:778-785 only updates the type signature. No tests verify:

  1. ValidationStreamExtractor.onChunk receives correct accumulated values
  2. Voice first-sentence detection works with accumulated
  3. Fallback path behavior when accumulated is undefined

💡 SUGGESTION: Document passthrough assumption in createStreamingContext

Location: packages/typescript/src/utils/streaming.ts around line 1015-1027

The function forwards accumulated unchanged, which is only semantically correct when the extractor is a passthrough (currently always MarkableExtractor). Consider adding a comment documenting this invariant.


What's Good

  1. Root cause correctly identified - Removing the second extractor eliminates overlapping deltas
  2. Type consolidation - Single StreamChunkCallback eliminates drift across 8 call sites
  3. Backward compatible - accumulated is optional, existing callbacks work unchanged
  4. Well-documented - Inline comments explain WHY changes were made
  5. Docs updated - streaming-responses.mdx and messaging.mdx reflect new architecture

Verdict

The TypeScript changes are solid. The cross-language type inconsistency in Python/Rust needs addressing before merge to maintain API consistency across the SDK.

Explains reasoning for dismissed issues inline in code

claude bot commented Mar 27, 2026

Claude finished @odilitime's task in 1m 35s


PR Review: StreamChunkCallback Consolidation

  • Read CLAUDE.md for repo guidelines
  • Read changed files and verify previous fixes
  • Check for security issues
  • Verify test coverage
  • Check cross-language consistency (Python/Rust)
  • Provide detailed feedback

Summary

This PR correctly fixes the dual-extractor TTS garbling bug by removing ResponseStreamExtractor from handleMessage and consolidating to a single ValidationStreamExtractor pipeline. The architecture change is sound.

Verified fixes from previous review rounds:

  • StreamingContext is now imported (line 30)
  • onChunk returns the promise for backpressure (lines 4092-4097)
  • Passthrough assumption documented with NOTE comment (streaming.ts:1017-1021)
  • streamTextFallback synced from accumulated (message.ts:202-210)
  • First-sentence TTS requires accumulated (message.ts:215)
  • filteredAccumulated tracked locally for action streams (runtime.ts:1797-1813)

Remaining Issues

❌ CRITICAL: Python StreamChunkCallback missing accumulated parameter

Location: packages/python/elizaos/types/components.py:31

StreamChunkCallback = Callable[[str, str | None], Awaitable[None]]

TypeScript now has 3 params: (chunk, messageId?, accumulated?). Python still has 2.

Fix: Update to Callable[[str, str | None, str | None], Awaitable[None] | None]



❌ CRITICAL: Rust StreamChunkCallback missing accumulated parameter

Location: packages/rust/src/types/components.rs:244-245

pub type StreamChunkCallback =
    Box<dyn Fn(&str, Option<&str>) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync>;

Fix: Add third Option<&str> parameter for accumulated



⚠️ IMPORTANT: Missing tests for accumulated parameter behavior

Location: Tests only update type signature (runtime.test.ts:778-785)

No tests verify:

  1. ValidationStreamExtractor.onChunk receives correct accumulated values
  2. Voice first-sentence detection works with accumulated
  3. Fallback path behavior when accumulated is undefined



What's Good

  1. Root cause correctly identified - Single pipeline eliminates overlapping deltas
  2. Type consolidation - Single StreamChunkCallback across 8 call sites
  3. Backward compatible - accumulated is optional
  4. Well documented - Inline comments explain WHY
  5. Docs updated - streaming-responses.mdx and messaging.mdx reflect architecture

Verdict

Safe to merge with caveats. The TypeScript changes are solid and the TTS fix is architecturally correct. The cross-language type inconsistency in Python/Rust should be addressed before or shortly after merge to maintain SDK consistency.

