🆕 WorkflowAgent (@ai-sdk/workflow) #12165

Open
gr2m wants to merge 89 commits into main from gr2m/durable-agent

Conversation

@gr2m
Collaborator

@gr2m gr2m commented Jan 30, 2026

Create a new @ai-sdk/workflow package that exports WorkflowAgent, which will be the successor of DurableAgent

ToolLoopAgent parity plan

The underlying streamText in core already supports all 6 callback types. The gap is that WorkflowAgent doesn't accept or pass them through. However, WorkflowAgent doesn't call streamText directly — it uses streamTextIterator → doStreamStep → streamModelCall. So the callbacks need to be threaded through that chain.

Phase 1: Wire missing callbacks through WorkflowAgent API ✅ (#14036)

  1. Add the 4 missing callback types to WorkflowAgentOptions and WorkflowAgentStreamOptions interfaces
  2. Add mergeCallbacks utility (extracted from ToolLoopAgent pattern)
  3. Pass callbacks through streamTextIterator to doStreamStep (which uses streamModelCall)
  4. Emit callbacks at the right points in the iterator loop
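
The merge step in item 2 can be sketched roughly like this (an illustrative sketch with assumed names and shapes, not the actual ai/internal implementation):

```ts
// Illustrative sketch of a mergeCallbacks-style utility: combine a
// construction-time callback with a stream-time callback so both fire
// for each lifecycle event. Not the actual ai/internal implementation.
type Callback<EVENT> = (event: EVENT) => void | PromiseLike<void>;

function mergeCallbacks<EVENT>(
  ...callbacks: Array<Callback<EVENT> | undefined>
): Callback<EVENT> | undefined {
  const defined = callbacks.filter((cb): cb is Callback<EVENT> => cb != null);
  if (defined.length === 0) return undefined;
  if (defined.length === 1) return defined[0];
  // Run callbacks in order, awaiting each so async errors propagate.
  return async event => {
    for (const cb of defined) {
      await cb(event);
    }
  };
}
```

A merged callback like `mergeCallbacks(options.onStepStart, streamOptions.onStepStart)` can then be threaded through streamTextIterator as a single function.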

Unblocked 14 of 16 GAP tests. Two remain marked it.fails() to track event shape parity (see below).

Remaining work from Phase 1: Align callback event shapes with ToolLoopAgent. WorkflowAgent's callback events are simpler than ToolLoopAgent's. ToolLoopAgent events (defined in core-events.ts) include callId, provider, modelId, stepNumber, messages, abortSignal, functionId, metadata, experimental_context, a typed toolCall with TypedToolCall<TOOLS>, and durationMs on tool call finish. WorkflowAgent events currently provide only a subset (e.g., onToolCallStart only has toolCall with a plain ToolCall type). Once the event shapes converge, the callback types could be unified as shared AgentOnStartCallback, AgentOnStepStartCallback, etc., instead of separate WorkflowAgent* and ToolLoopAgent* types.

Phase 2: Add prepareCall support ✅ (#14037)

  1. Add prepareCall to WorkflowAgentOptions
  2. Call it in stream() before the iterator, similar to ToolLoopAgent's prepareCall()
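
The wiring in step 2 can be sketched as follows (field names and shapes are illustrative assumptions, not the exact WorkflowAgent types):

```ts
// Illustrative sketch of the prepareCall pattern: run a user-supplied
// hook once before the agent loop and let its partial result override
// construction-time settings. Field names are assumptions, not the
// exact WorkflowAgent option types.
interface CallSettings {
  model: string;
  instructions?: string;
  temperature?: number;
}

type PrepareCall = (
  options: CallSettings,
) => Partial<CallSettings> | Promise<Partial<CallSettings>>;

async function resolveCallSettings(
  base: CallSettings,
  prepareCall?: PrepareCall,
): Promise<CallSettings> {
  if (prepareCall == null) return base;
  const overrides = await prepareCall(base);
  // Overrides win over construction-time defaults.
  return { ...base, ...overrides };
}
```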

Unblocked 1 GAP test.

Remaining work from Phase 2: ToolLoopAgent's prepareCall also supports stopWhen, activeTools, and experimental_download in its input/output types — these are not yet in WorkflowAgent's PrepareCallOptions/PrepareCallResult. Additionally, ToolLoopAgent supports typed CALL_OPTIONS that flow through prepareCall as options — WorkflowAgent doesn't have this concept.

Phase 3: Add workflow serialization support to all provider models ✅ (#13779)

Adds WORKFLOW_SERIALIZE/WORKFLOW_DESERIALIZE to all 59 provider model classes (language, image, embedding, speech, transcription, video). Adds serializeModel() and deserializeModelConfig() helpers to @ai-sdk/provider-utils:

  • serializeModel resolves config.headers() at serialization time so auth credentials survive the step boundary as plain key-value objects
  • deserializeModelConfig wraps plain-object headers back into a function on deserialization

Makes headers optional in all provider config types so deserialized models work without pre-configured auth. Includes documentation for third-party provider authors.
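
The headers round-trip described above can be sketched like this (a simplified illustration; the real serializeModel/deserializeModelConfig helpers in @ai-sdk/provider-utils handle far more than headers):

```ts
// Simplified sketch of the headers round-trip across a workflow step
// boundary. The real helpers live in @ai-sdk/provider-utils.
interface ModelConfig {
  headers?: () => Record<string, string>;
}

interface SerializedModelConfig {
  headers?: Record<string, string>;
}

// At serialization time, call headers() so auth credentials become a
// plain key-value object that survives the step boundary.
function serializeModelConfig(config: ModelConfig): SerializedModelConfig {
  return { headers: config.headers?.() };
}

// On deserialization, wrap the plain object back into a function so
// model code can keep calling config.headers() as before.
function deserializeModelConfig(serialized: SerializedModelConfig): ModelConfig {
  const resolved = serialized.headers;
  return { headers: resolved === undefined ? undefined : () => resolved };
}
```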

Remaining work from Phase 3: providers with async headers. Four providers have an async getHeaders that can't be resolved synchronously at serialization time. These need per-provider handling or a model-factory-function workaround:

  • Gateway — async OIDC token resolution (AI_GATEWAY_API_KEY env var fallback)
  • Amazon Bedrock (anthropic subprovider) — async SigV4 credential loading (AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY env vars)
  • KlingAI — async JWT generation from KLINGAI_ACCESS_KEY/KLINGAI_SECRET_KEY env vars
  • Google Vertex — async Resolvable headers (GOOGLE_VERTEX_API_KEY env var for express mode)

Phase 4: Add needsApproval support ✅ (#14084)

  1. Before executing a tool, check tool.needsApproval (boolean or async function)
  2. If approval needed, pause the loop and return pending tool calls (like client-side tools)
  3. Handle approval resumption: collect tool-approval-response parts, execute approved tools, create denial results
  4. Write tool results and step boundaries to the UI stream so tool parts transition to output-available state and convertToModelMessages produces correct message structure for multi-turn conversations
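
The approval check in step 1 can be sketched as follows (shapes are simplified assumptions relative to the real ToolNeedsApprovalFunction signature, which also receives messages and context):

```ts
// Illustrative sketch of the approval gate: needsApproval may be a
// boolean or an async predicate over the tool input. Shapes are
// simplified relative to the real AI SDK types.
interface ToolCall {
  toolCallId: string;
  toolName: string;
  input: unknown;
}

interface ApprovableTool {
  needsApproval?:
    | boolean
    | ((options: { input: unknown; toolCallId: string }) => boolean | Promise<boolean>);
}

async function toolNeedsApproval(
  tool: ApprovableTool,
  call: ToolCall,
): Promise<boolean> {
  const { needsApproval } = tool;
  if (typeof needsApproval === 'function') {
    return needsApproval({ input: call.input, toolCallId: call.toolCallId });
  }
  // Absent or false: execute without pausing the loop.
  return needsApproval === true;
}
```

When this returns true, the loop pauses and the tool call is returned unresolved, exactly like a client-side tool without execute.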

This unblocked 2 GAP tests.

Phase 5: Telemetry integration listeners

  1. Wire through telemetry integration listeners from experimental_telemetry
  2. Call them alongside agent callbacks at each lifecycle point

This unblocks 3 GAP tests.

Phase 6: Clean up duplication

  1. Extract shared mergeCallbacks utility ✅ (done in feat(workflow): add onStart, onStepStart, onToolCallStart, onToolCallFinish callbacks #14036 — moved to ai/internal, used by both ToolLoopAgent and WorkflowAgent)
  2. Remove duplicate filterTools (use one location)
  3. Replace getErrorMessage with import from @ai-sdk/provider-utils
  4. Remove safeParseInput if unused
  5. Simplify prepareStep override application in streamTextIterator

Future work

Done

  • Separate UIMessageChunk conversion from model streaming (refactor: separate UIMessageChunk conversion from model streaming #13780)
    Extracts UIMessageChunk conversion from doStreamStep into a standalone utility, making the model streaming layer independent of UI concerns. doStreamStep returns raw LanguageModelV4StreamPart[] chunks; UIMessageChunk conversion is a separate, optional step. writable becomes optional in WorkflowAgentStreamOptions — when omitted, the agent streams ModelMessages only. Follows streamText's toUIMessageStream() pattern.
  • Use experimental_streamModelCall in doStreamStep (refactor: use experimental_streamModelCall in doStreamStep #13820)
    Replace doStreamStep internals with experimental_streamModelCall. Eliminates ~300 lines of duplicated stream transformation, gains tool call parsing/repair, retry logic, and Experimental_ModelCallStreamPart stream types.
  • Export mergeAbortSignals from ai/internal (refactor: use shared mergeAbortSignals from ai/internal in WorkflowAgent #13616)
    Exports the existing mergeAbortSignals utility from ai/internal and replaces the manual ~25-line abort signal + timeout merging code in WorkflowAgent with the shared utility. Uses AbortSignal.timeout() instead of manual setTimeout + AbortController, matching how generateText/streamText handle the same concern.
  • Wire missing callbacks (Phase 1) (feat(workflow): add onStart, onStepStart, onToolCallStart, onToolCallFinish callbacks #14036)
    Adds experimental_onStart, experimental_onStepStart, experimental_onToolCallStart, experimental_onToolCallFinish callbacks to WorkflowAgent. Extracts mergeCallbacks into ai/internal as shared utility used by both ToolLoopAgent and WorkflowAgent. Also fixes sideEffects: false breaking workflow step discovery and replaces resolveLanguageModel from ai/internal with gateway from ai to fix Next.js webpack resolution in step bundles.
  • Add prepareCall support (Phase 2) (feat(workflow): add prepareCall callback #14037)
    Adds a prepareCall callback to WorkflowAgentOptions, called once before the agent loop to transform the model, instructions, generation settings, and more. tools is excluded from the return type since tools are bound at construction time for type safety.
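
For illustration, the abort-signal merging mentioned in the mergeAbortSignals item above can be sketched like this (a portable sketch, not the actual ai/internal utility):

```ts
// Merge several optional abort signals (e.g. a caller's signal plus an
// AbortSignal.timeout()) into one signal that aborts when any source
// aborts. Illustrative only; the real utility lives in ai/internal.
function mergeAbortSignals(
  ...signals: Array<AbortSignal | undefined>
): AbortSignal | undefined {
  const defined = signals.filter((s): s is AbortSignal => s != null);
  if (defined.length === 0) return undefined;
  if (defined.length === 1) return defined[0];
  const controller = new AbortController();
  for (const signal of defined) {
    if (signal.aborted) {
      controller.abort(signal.reason);
      break;
    }
    // Propagate the first abort (and its reason) to the merged signal.
    signal.addEventListener('abort', () => controller.abort(signal.reason), {
      once: true,
    });
  }
  return controller.signal;
}

// Usage: combine a caller-provided signal with a platform timeout,
// replacing manual setTimeout + AbortController bookkeeping.
function callSignal(userSignal?: AbortSignal, timeoutMs?: number) {
  return mergeAbortSignals(
    userSignal,
    timeoutMs != null ? AbortSignal.timeout(timeoutMs) : undefined,
  );
}
```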

No longer pursued

Notes

  • Resumption: the client is expected to specify which chunk to resume from at the workflow stream level (the V4StreamPart). Whether that maps 1:1 to UIMessageChunk, or whether the correct resumption index can be derived from the number of UIMessageChunk parts the client has already received, still needs to be worked out.
  • repairToolCall not serializable across step boundaries. ToolCallRepairFunction is a function and can't cross the 'use step' serialization boundary. Left out of WorkflowAgent for now — experimental_streamModelCall handles repair internally when called outside a step boundary.
  • abortSignal serialization. AbortSignal objects can't be serialized across step boundaries. The Workflow team is working on adding serialization support for abort signals.
  • Tool validation: because step functions require serialization/deserialization, we rely on the validation capabilities and transformations of libraries like zod. Schema serialization is lossy.

Related issues

@vercel-ai-sdk vercel-ai-sdk bot added the maintenance (CI, internal documentation, automations, etc) label Jan 30, 2026
@gr2m gr2m changed the title from 🚧 DurableAgent to 🚧 DurableAgent - DO NOT MERGE Jan 30, 2026
@gr2m

This comment was marked as resolved.

@gr2m gr2m marked this pull request as ready for review January 30, 2026 19:33
Contributor

@vercel-ai-sdk vercel-ai-sdk bot left a comment


SHALL NOT PASS

@gr2m
Collaborator Author

gr2m commented Jan 30, 2026

The snapshot build (https://github.com/vercel/ai/actions/runs/21530986703/job/62046593364) published @ai-sdk/durable-agent@0.0.1 instead of the usual snapshot versions. That was not planned.

@KaiKloepfer

@gr2m I did some work on improving this from the workflow side, which might be relevant to you: vercel/workflow#928. It's still not full compatibility, but we got stuck on the same issues when trying to port a v6 app to use workflow.

@rovo89

rovo89 commented Feb 12, 2026

@gr2m I think this is the right approach, avoiding a lot of compatibility layers. However, it seems that the code is copied more or less 1:1 from the workflow code, which has simplified lots of things. For example, OpenAI's web_search doesn't work for me because toolsToModelTools() is extremely basic. I'm hoping this will be much closer to the original ToolLoopAgent.

I would offer to help, but I assume you already have your own ideas about how things should work (and I know little about the AI SDK internals). Anyway, if I can do anything, I'm happy to help.

@gr2m
Collaborator Author

gr2m commented Feb 12, 2026

@KaiKloepfer thanks will have a look!

@rovo89 I'm focused on #12381 right now, please feel free to send PRs for exploration of different approaches.

@rovo89

rovo89 commented Feb 12, 2026

My current attempt is to simply use ToolLoopAgent in a step function. 🙈

```ts
export async function chat(writable: WritableStream<UIMessageChunk>, messages: UIMessage[]) {
  'use step';
  const agent = new ToolLoopAgent({...});
  const stream = await createAgentUIStream({
    agent,
    uiMessages: messages,
  });
  await stream.pipeTo(writable);
}
```

Streaming works fine and I can use it exactly like I'm used to.

DurableAgent has quite a few limitations:

  • Many details are implemented as very simplified stubs, such as the tool preparation which prevents using OpenAI provider tools.
  • The default downloader doesn't work because fetch is blocked (need to use their fetch step function instead) and supportedUrls is empty.
  • Model has to be provided as step function, therefore requires extra layer / provider wrapper packages. Custom provider registries are harder to use.
  • Message metadata is a lot harder to add, requires disabling default start/finish chunks and sending them separately.
  • ...

But of course, it has benefits. As far as I understood:

  • Can resume multi-step calls instead of repeating from scratch. Not sure what happens if an error occurs mid-step (e.g. network failure) - will probably run that step again, but what about the already sent chunks?!?
  • Tool calls are steps, so they can also be retried.
  • Hooks can simplify human-in-the-loop / tool approvals.
  • More stable environment for the loop controller.

Some ideas how similar features could be achieved in (a subclass of) ToolLoopAgent:

  • Read back the stream to reconstruct the message chunks so far and continue from there. Or more abstract: Needs a way to "preload" the loop with the previously recorded intermediate step results and continue with the next step.
  • AFAIU, calling a step function from another step has no special meaning, so no retries etc. That only applies when they're called from a workflow function. Similar for hooks. Generally, ToolLoopAgent and the functions it calls do some orchestration (like workflow functions), but they're far from just stitching steps together. That's why I think it makes sense to run them in a step function, but they could benefit from sub-steps. That needs further thought.
  • Of course, this assumes that the setup is still the same on the next retry, like all tools being defined in the same way, which seems to be guaranteed in the workflow VM. Then again, how big is the risk that this happens accidentally, and how big could the damage be?

@gr2m
Collaborator Author

gr2m commented Mar 14, 2026

I want to try two different approaches in separate PRs against gr2m/durable-agent

  1. Try to refactor streamText() itself so that it works in a workflow context.
  2. Refactor streamText() to export lower-level orchestration code which then can be used by DurableAgent

I think 1. won't be possible for several reasons but I want to see how far I can get.

For the providers, as a start I want to re-export all first-party providers with the symbols needed for workflow step serialization/deserialization and the "use step" directive in doStream()


gr2m and others added 11 commits April 7, 2026 11:45
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.70

### Patch Changes

-   Updated dependencies [0694029]
    -   @ai-sdk/gateway@4.0.0-beta.39

## @ai-sdk/angular@3.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/gateway@4.0.0-beta.39

### Patch Changes

- 0694029: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@3.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/llamaindex@3.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/otel@1.0.0-beta.16

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/react@4.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/rsc@3.0.0-beta.71

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/svelte@5.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

## @ai-sdk/vue@4.0.0-beta.70

### Patch Changes

-   ai@7.0.0-beta.70

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

#13839

streamText with the default text output called JSON.stringify on the
full accumulated text on every streaming chunk, creating increasingly
large string copies per stream and causing memory issues

## Summary

- skip `JSON.stringify` when the partial output is already a string
- structured outputs still go through stringify as before since they
need serialization to compare.
- compare the text directly
- no extra full-string serialization per chunk
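
The comparison change can be sketched as follows (function name and shapes are illustrative, not the actual streamText internals):

```ts
// Illustrative sketch of the fix: only serialize for comparison when
// the partial output isn't already a string. Names are assumptions,
// not the real streamText internals.
function hasPartialOutputChanged(previous: unknown, next: unknown): boolean {
  // Text output: compare strings directly, no per-chunk serialization.
  if (typeof previous === 'string' && typeof next === 'string') {
    return previous !== next;
  }
  // Structured output still needs serialization to compare by value.
  return JSON.stringify(previous) !== JSON.stringify(next);
}
```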

## Manual Verification

tried reproducing via 
<details>
<summary>repro</summary>

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../../lib/run';

run(async () => {
  const result = streamText({
    model: openai.responses('gpt-4o-mini'),
    prompt:
      'Write an extremely detailed 5000-word essay about the history of computing. Include every detail you can.',
  });

  let chunks = 0;
  for await (const textPart of result.textStream) {
    chunks++;
    if (chunks % 100 === 0) {
      const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1);
      console.log(`chunk ${chunks} — heap: ${mb}MB`);
    }
  }

  console.log(`\nTotal chunks: ${chunks}`);
  console.log(
    `Final heap: ${(process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1)}MB`,
  );
});
```
</details>

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

fixes #13839
## Background

follow up to the pr #13989

the ai sdk only emitted otel spans with traces beginning with `ai*` and
not `gen_ai*` thereby not aligning with the OpenTelemetry GenAI semantic
conventions (https://opentelemetry.io/docs/specs/semconv/gen-ai/)

## Summary

- introduced a new `GenAIOpenTelemetryIntegration()` that users can use
to emit traces that conform to the semantic convention
- helper functions added in `gen-ai-format-messages.ts` that allow
converting AI SDK internal types to OTel GenAI semantic conventions

## Manual Verification

verified by running some of the telemetry examples

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
Update workflow, otel, and gateway packages to align with the
experimental_context → context rename in core and fix optional
headers handling in gateway reranking model.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When a language model crosses a workflow step boundary, serializeModel
strips function-valued config like headers. This caused "x-api-key
header is required" errors because auth credentials were lost.

Fix: serializeModel now calls headers() at serialization time to
resolve the function into a plain key-value object. On deserialization,
deserializeModelConfig wraps the plain object back into a function so
model code can continue calling config.headers() as expected.

This approach works for all providers without per-provider changes —
only serializeModel and each model's WORKFLOW_DESERIALIZE are updated.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Conflicts:
#	.changeset/pre.json
#	examples/ai-e2e-next/package.json
#	examples/angular/package.json
#	examples/express/package.json
#	examples/fastify/package.json
#	examples/hono/package.json
#	examples/mcp/package.json
#	examples/nest/package.json
#	examples/next-agent/package.json
#	examples/next-fastapi/package.json
#	examples/next-google-vertex/package.json
#	examples/next-langchain/package.json
#	examples/next-openai-kasada-bot-protection/package.json
#	examples/next-openai-pages/package.json
#	examples/next-openai-telemetry-sentry/package.json
#	examples/next-openai-telemetry/package.json
#	examples/next-openai-upstash-rate-limits/package.json
#	examples/next/package.json
#	examples/node-http-server/package.json
#	examples/nuxt-openai/package.json
#	examples/sveltekit-openai/package.json
#	packages/ai/CHANGELOG.md
#	packages/ai/package.json
#	packages/angular/CHANGELOG.md
#	packages/angular/package.json
#	packages/anthropic/src/anthropic-messages-language-model.ts
#	packages/gateway/CHANGELOG.md
#	packages/gateway/package.json
#	packages/gateway/src/gateway-reranking-model.ts
#	packages/langchain/CHANGELOG.md
#	packages/langchain/package.json
#	packages/llamaindex/CHANGELOG.md
#	packages/llamaindex/package.json
#	packages/open-responses/CHANGELOG.md
#	packages/open-responses/package.json
#	packages/otel/CHANGELOG.md
#	packages/otel/package.json
#	packages/otel/src/gen-ai-open-telemetry-integration.test.ts
#	packages/react/CHANGELOG.md
#	packages/react/package.json
#	packages/rsc/CHANGELOG.md
#	packages/rsc/package.json
#	packages/rsc/tests/e2e/next-server/CHANGELOG.md
#	packages/svelte/CHANGELOG.md
#	packages/svelte/package.json
#	packages/vue/CHANGELOG.md
#	packages/vue/package.json
#	pnpm-lock.yaml
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Update workflow from 4.2.0-beta.71 to 4.2.0 and @workflow/vitest from
4.0.1-beta.8 to 4.0.1. Update next-workflow example to use npm package
instead of tarball URL.

Also unmarks 2 compat tests that now pass (onStart/onStepStart event
information) after the experimental_context → context rename.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
gr2m and others added 5 commits April 8, 2026 11:47
…#14229)

## Background

Provider tools (like `anthropic.tools.webSearch`, `webFetch`,
`codeExecution`) lose their identity when crossing workflow step
boundaries. `serializeToolSet()` converts all tools into plain function
tools with only `description` and `inputSchema`, stripping the `type:
'provider'`, `id`, and `args` fields.

This causes the Gateway to not recognize them as provider-executed
tools, leading to `GatewayInternalServerError: Unexpected value(s) for
the anthropic-beta header` when used with provider-specific features
like `contextManagement` and `speed`.

## Summary

Update `serializeToolSet` and `resolveSerializableTools` in
`serializable-schema.ts` to handle provider tools:

- **`SerializableToolDef`**: add optional `type`, `id`, and `args`
fields
- **`serializeToolSet`**: check `tool.type === 'provider'` and preserve
`type`, `id`, `args`
- **`resolveSerializableTools`**: reconstruct provider tools with their
identity intact, without Ajv wrapping (provider tools are executed
server-side)

| Field | Before | After |
|---|---|---|
| `type` | `"function"` | `"provider"` |
| `id` | *(stripped)* | `"anthropic.web_search_20250305"` |
| `args` | *(stripped)* | `{ "maxUses": 5, "allowedDomains": [...] }` |
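
The preservation logic can be sketched roughly like this (simplified shapes; the real SerializableToolDef and serializeToolSet live in serializable-schema.ts and handle more fields, including Ajv wrapping for function tools):

```ts
// Simplified sketch: provider tools keep their identity fields when
// serialized, while function tools are reduced to schema + description.
interface SerializableToolDef {
  type: 'function' | 'provider';
  id?: string;
  args?: Record<string, unknown>;
  description?: string;
  inputSchema?: unknown;
}

function serializeTool(tool: {
  type?: 'function' | 'provider';
  id?: string;
  args?: Record<string, unknown>;
  description?: string;
  inputSchema?: unknown;
}): SerializableToolDef {
  const base = { description: tool.description, inputSchema: tool.inputSchema };
  // Preserving type/id/args lets the Gateway recognize provider-executed
  // tools after deserialization instead of treating them as plain
  // function tools.
  return tool.type === 'provider'
    ? { ...base, type: 'provider', id: tool.id, args: tool.args }
    : { ...base, type: 'function' };
}
```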

## Manual Verification

Updated `examples/next-workflow/workflow/agent-chat.ts` to

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { WorkflowAgent, type ModelCallStreamPart } from '@ai-sdk/workflow';
import { convertToModelMessages, type UIMessage } from 'ai';
import { getWritable } from 'workflow';

export async function chat(messages: UIMessage[]) {
  'use workflow';

  const providerOptions = {
    anthropic: {
      thinking: { type: 'adaptive' },
      speed: 'fast',
      contextManagement: {
        edits: [
          {
            type: 'clear_tool_uses_20250919',
            trigger: { type: 'input_tokens', value: 80_000 },
            keep: { type: 'tool_uses', value: 5 },
            clearAtLeast: { type: 'input_tokens', value: 5000 },
            clearToolInputs: true,
          },
          {
            type: 'compact_20260112',
            trigger: { type: 'input_tokens', value: 100_000 },
            instructions:
              'Summarize the conversation concisely, preserving key decisions, tool results, and context.',
            pauseAfterCompaction: false,
          },
        ],
      },
    },
    gateway: {
      models: ['claude-sonnet-4-6', 'claude-haiku-4-5'],
      // zeroDataRetention: true,
      disallowPromptTraining: true,
    },
  };

  const anthropicTools = {
    webFetch: anthropic.tools.webFetch_20250910({
      maxUses: 5,
      allowedDomains: [
        'vercel.com',
        'nextjs.org',
        'ai-sdk.dev',
        'chat-sdk.dev',
        'useworkflow.dev',
      ],
    }),
    webSearch: anthropic.tools.webSearch_20250305({
      maxUses: 5,
      allowedDomains: [
        'vercel.com',
        'nextjs.org',
        'ai-sdk.dev',
        'chat-sdk.dev',
        'useworkflow.dev',
      ],
    }),
    codeExecution: anthropic.tools.codeExecution_20260120(),
  };

  const agent = new WorkflowAgent({
    model: 'anthropic/claude-opus-4.6',
    instructions: 'you are a helpful assistant.',
    tools: anthropicTools,
    providerOptions,
  });

  const modelMessages = convertToModelMessages(messages);

  const result = await agent.stream({
    messages: modelMessages,
    writable: getWritable<ModelCallStreamPart>(),
  });

  return { messages: result.messages };
}
```

And send the following message to the `next-workflow` example

> What are the latest vercel news? Use webSearch to find out, and if you
find any links, use webFetch to get more details from those pages.

<img width="1624" height="1056" alt="image"
src="https://github.com/user-attachments/assets/b843c208-9899-4954-bc6f-fd732d6223bd"
/>

## Related Issues

Part of #12165

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…14084)

## Background

Phase 4 of the ToolLoopAgent parity plan (see #12165). WorkflowAgent
didn't support the `needsApproval` property on tools, which allows tools
to require approval before execution.

## Summary

- Check `needsApproval` (boolean or async function) before executing
each tool
- When approval is needed, pause the agent loop and return the
unresolved tool call in `result.toolCalls` without executing it (no
entry in `result.toolResults`)
- Uses the same pause mechanism as client-side tools (tools without
`execute`)
- The `needsApproval` function receives tool input, `toolCallId`,
`messages`, and `context` — matching the AI SDK's
`ToolNeedsApprovalFunction` signature
- Handle tool approval **resumption** in `WorkflowAgent.stream()`: when
input messages contain `tool-approval-response` parts, automatically
execute approved tools and create denial results, matching ToolLoopAgent
behavior
- Fix duplicate "approved — executing" UI message caused by
`createModelCallToUIChunkTransform` generating a new `messageId` on each
workflow run
- Convert 2 `it.fails()` tests to passing tests
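
The resumption step can be sketched as follows (the part shape is an assumption for illustration; the real handling mirrors ToolLoopAgent's tool-approval-response behavior):

```ts
// Illustrative sketch of resumption: split tool-approval-response parts
// into approved and denied tool call ids, so approved tools are executed
// and denied ones get denial results. Part shape is an assumption.
interface ToolApprovalResponsePart {
  type: 'tool-approval-response';
  toolCallId: string;
  approved: boolean;
}

function partitionApprovals(parts: ToolApprovalResponsePart[]): {
  approved: string[];
  denied: string[];
} {
  const approved: string[] = [];
  const denied: string[] = [];
  for (const part of parts) {
    (part.approved ? approved : denied).push(part.toolCallId);
  }
  return { approved, denied };
}
```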

## Manual Verification

- 76 tests pass, 5 expected failures
- Type check clean
- Build succeeds
- Tested approval flow end-to-end in `examples/next-workflow` — single
"approved — executing" message, no duplication

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Part of #12165 (ToolLoopAgent parity plan, Phase 4)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…elCall rename

Rename Experimental_ModelCallStreamPart to Experimental_LanguageModelStreamPart
and experimental_streamModelCall to experimental_streamLanguageModelCall across
all workflow package files to match upstream renames in main.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Collaborator Author


Will delete before merging

skills-lock.json Outdated
Collaborator Author


Delete before merging

Comment on lines +12 to +13
# TODO: remove before merging #12165
- gr2m/durable-agent
Collaborator Author


delete before merging

felixarntz

This comment was marked as resolved.

gr2m and others added 5 commits April 9, 2026 14:04
The globalThis.AI_SDK_DEFAULT_PROVIDER accesses are already properly
typed via declare global in src/global.ts. Restore @ts-expect-error
for the experimental videoModel access (preferred over @ts-ignore).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These files were added during development and should not be merged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
experimental_output is deprecated in the AI SDK. Use the non-experimental
output parameter and property name instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
gr2m and others added 4 commits April 9, 2026 15:26
The internal entry point re-exports resolveLanguageModel but didn't
have the globalThis type augmentation in scope, causing DTS build
failures for AI_SDK_DEFAULT_PROVIDER.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Labels

ai/workflow (related to WorkflowAgent or Vercel Workflow DevKit in general, useworkflow.dev) · feature (New feature or request)

Projects

None yet

Development


9 participants