
feat: add includeRawChunks support for streaming #360

Merged
robert-j-y merged 2 commits into main from devin/1769544989-include-raw-chunks
Jan 27, 2026

Conversation


robert-j-y (Contributor) commented Jan 27, 2026

Description

Implements includeRawChunks support for streaming calls in both chat and completion models. When includeRawChunks: true is passed to doStream(), the provider emits a { type: 'raw', rawValue: <parsed chunk> } stream part for each SSE event, giving consumers access to the raw provider chunks alongside the processed AI SDK stream parts.

This follows the same implementation pattern used in the Vercel AI SDK's @ai-sdk/openai-compatible provider: raw chunks are emitted at the start of the transform function, before any other processing.

Before:

const { stream } = await model.doStream({ prompt, includeRawChunks: true });
// Raw chunks not available

After:

const { stream } = await model.doStream({ prompt, includeRawChunks: true });
// Stream now includes { type: 'raw', rawValue: { id: '...', choices: [...], ... } } parts
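The emission order described above can be sketched as a small helper. This is an illustrative sketch, not the provider's actual code: the ParseResult-like shape (success, value, rawValue) and the part types are assumptions based on the PR description.

```typescript
// Illustrative sketch of the emission order: a raw part is produced for every
// parsed SSE event before any validation or mapping, when the flag is set.
type StreamPart =
  | { type: 'raw'; rawValue: unknown }
  | { type: 'text-delta'; delta: string }
  | { type: 'error'; error: string };

// Assumed ParseResult-like input: rawValue is present on both success and failure.
interface ChunkResult {
  success: boolean;
  value?: { choices?: { delta?: { content?: string } }[] };
  rawValue: unknown;
}

function processChunk(chunk: ChunkResult, includeRawChunks: boolean): StreamPart[] {
  const parts: StreamPart[] = [];
  // Raw emission happens first, before any parse/validation handling.
  if (includeRawChunks) {
    parts.push({ type: 'raw', rawValue: chunk.rawValue });
  }
  if (!chunk.success) {
    // Consumers still received the raw part above, useful for debugging.
    parts.push({ type: 'error', error: 'failed to parse chunk' });
    return parts;
  }
  const content = chunk.value?.choices?.[0]?.delta?.content;
  if (typeof content === 'string') {
    parts.push({ type: 'text-delta', delta: content });
  }
  return parts;
}
```

With includeRawChunks: false the raw part is simply skipped, so existing consumers see an unchanged stream.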

Closes #340

Updates since last revision

Added test coverage for failed-parse scenarios to document the intentional behavior: raw chunks are emitted before validation, which is useful for debugging malformed responses. When parsing fails, consumers receive both the raw chunk (containing the unparseable data) and the error chunk.
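Because a raw part always precedes the corresponding error part, a consumer can retain the most recent raw chunk to diagnose parse failures. A hedged consumer-side sketch (the part shapes are assumed from the PR description):

```typescript
// Sketch: collect the raw payloads that immediately preceded error parts.
// Part shapes are assumptions based on the PR description, not the SDK's types.
type Part =
  | { type: 'raw'; rawValue: unknown }
  | { type: 'error'; error: string }
  | { type: string; [k: string]: unknown };

function collectParseFailures(parts: Part[]): unknown[] {
  const failures: unknown[] = [];
  let lastRaw: unknown;
  for (const part of parts) {
    if (part.type === 'raw') {
      // Remember the raw payload; it may precede an error for the same event.
      lastRaw = (part as { rawValue: unknown }).rawValue;
    } else if (part.type === 'error') {
      // The raw chunk emitted just before this error holds the unparseable data.
      failures.push(lastRaw);
    }
  }
  return failures;
}
```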

Human Review Checklist

  • Verify raw chunk emission placement (before error handling) matches expected AI SDK behavior — Confirmed: matches the Vercel AI SDK @ai-sdk/openai-compatible reference implementation
  • Confirm chunk.rawValue from ParseResult contains the expected parsed JSON — Confirmed: rawValue is always available on ParseResult (both success and failure cases)
  • Verify behavior when parsing fails — Documented: new test confirms raw chunk is emitted before error handling, useful for debugging

Checklist

  • I have run pnpm stylecheck and pnpm typecheck
  • I have run pnpm test and all tests pass
  • I have added tests for my changes (if applicable)
  • I have updated documentation (if applicable)

Changeset

  • I have run pnpm changeset to create a changeset file

Link to Devin run: https://app.devin.ai/sessions/f147987c7c8344f8a0f287947c44a709
Requested by: Robert Yeakel (@robert-j-y)

When includeRawChunks: true is passed to streaming calls, the provider
now emits { type: 'raw', rawValue: <parsed chunk> } stream parts for
each SSE event, giving consumers access to the raw provider chunks
alongside the processed AI SDK stream parts.

This feature is available for both chat and completion models.

Closes #340

Co-Authored-By: Robert Yeakel <robert.yeakel@openrouter.ai>
Documents the intentional behavior that raw chunks are emitted before
validation, which is useful for debugging malformed responses. This
matches the Vercel AI SDK reference implementation pattern.

Co-Authored-By: Robert Yeakel <robert.yeakel@openrouter.ai>
@robert-j-y robert-j-y merged commit b129d36 into main Jan 27, 2026
2 checks passed
@robert-j-y robert-j-y deleted the devin/1769544989-include-raw-chunks branch January 27, 2026 20:41
@github-actions github-actions bot mentioned this pull request Jan 27, 2026
kesavan-byte pushed a commit to osm-API/ai-sdk-provider that referenced this pull request Feb 13, 2026


Development

Successfully merging this pull request may close these issues.

Add support for ai sdk includeRawChunks option

1 participant