fix(ai): fix providerExecuted tool approvals being passed to language model twice #14289

Open
felixarntz wants to merge 2 commits into main from fa/fix-double-provider-executed-tool-approvals
Conversation


@felixarntz felixarntz commented Apr 9, 2026

Background

When a UI message history contains an approved providerExecuted tool invocation and the server does the standard pattern:

const modelMessages = await convertToModelMessages(messages);
return streamText({ model, messages: modelMessages });

the provider receives the same tool-approval-response twice for the same approvalId within a single tool message. Downstream providers can reject such a prompt as invalid, since each approval request must produce exactly one response.
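To make the symptom concrete, here is a minimal sketch (with hypothetical part shapes and a made-up approvalId, not the SDK's internal types) of the invalid prompt state and a check that detects it:

```typescript
// Illustrative shape of a tool-approval-response part; the field names here
// are assumptions for the sketch, not the SDK's actual internal types.
type ToolApprovalResponse = {
  type: 'tool-approval-response';
  approvalId: string;
  approved: boolean;
};

// Before the fix, the merged tool message answered the same approvalId twice.
const mergedToolMessageContent: ToolApprovalResponse[] = [
  { type: 'tool-approval-response', approvalId: 'appr_123', approved: true },
  { type: 'tool-approval-response', approvalId: 'appr_123', approved: true }, // duplicate
];

// Count responses per approvalId; more than one is the invalid state
// that providers reject.
const responsesPerApproval = new Map<string, number>();
for (const part of mergedToolMessageContent) {
  responsesPerApproval.set(
    part.approvalId,
    (responsesPerApproval.get(part.approvalId) ?? 0) + 1,
  );
}
const hasDuplicate = [...responsesPerApproval.values()].some((n) => n > 1);
console.log(hasDuplicate); // true before the fix, false after
```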

Summary

collectToolApprovals finds approval responses by reading the last tool message of initialMessages. Those responses are already present in initialMessages because convertToModelMessages placed them there. Both generateText and streamText then redundantly pushed the same responses into responseMessages/initialResponseMessages. When the step computed stepInputMessages = [...initialMessages, ...responseMessages], the approval appeared twice, and convertToLanguageModelPrompt surfaced both copies inside a single merged tool message.

  • Removed the redundant providerExecutedToolApprovals push in generateText
  • Removed the same block in streamText
  • Updated the four existing snapshots that already documented the bug (approved + denied cases for each function)
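The concatenation mechanism described above can be sketched as follows (message shapes and the approvalId are simplified stand-ins, not the real generateText/streamText internals):

```typescript
// Simplified stand-in for a model message; only the fields needed for the
// sketch are modeled.
type Message = { role: string; approvalId?: string };

// convertToModelMessages already placed the approval response here.
const initialMessages: Message[] = [
  { role: 'user' },
  { role: 'tool', approvalId: 'appr_123' },
];

// Before the fix: collectToolApprovals re-read the last tool message and the
// same response was redundantly pushed into responseMessages.
const responseMessagesBefore: Message[] = [{ role: 'tool', approvalId: 'appr_123' }];
// After the fix: nothing is pushed.
const responseMessagesAfter: Message[] = [];

// Count how often the approval appears in the step input.
const countApprovals = (msgs: Message[]) =>
  msgs.filter((m) => m.approvalId === 'appr_123').length;

// stepInputMessages = [...initialMessages, ...responseMessages]
console.log(countApprovals([...initialMessages, ...responseMessagesBefore])); // 2 (bug)
console.log(countApprovals([...initialMessages, ...responseMessagesAfter])); // 1 (fixed)
```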

Manual Verification

In examples/ai-e2e-next, navigate to /chat/test-openai-responses-mcp-approval and send "Shorten the link https://ai-sdk.dev/". After approving the MCP tool call, the conversation should complete successfully.

To compare before/after, add a console.log of the prompt passed to the model in the mock or in the route handler. In generateText/streamText, you can log stepInputMessages (the variable at the top of the do loop / streamStep body) immediately before the language model call and confirm the tool message contains exactly one tool-approval-response entry after the fix.
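A hypothetical helper for that check (the prompt shape below is an assumption for illustration; adapt it to whatever the logged prompt actually contains):

```typescript
// Assumed prompt shapes for the sketch; adjust to the real logged structure.
type PromptPart = { type: string; approvalId?: string };
type PromptMessage = { role: string; content: PromptPart[] };

// Count tool-approval-response entries across all tool messages in a prompt.
function countApprovalResponses(prompt: PromptMessage[]): number {
  return prompt
    .filter((m) => m.role === 'tool')
    .flatMap((m) => m.content)
    .filter((p) => p.type === 'tool-approval-response').length;
}

// Example: a prompt whose tool message answers one approval exactly once,
// which is the expected state after the fix.
const samplePrompt: PromptMessage[] = [
  { role: 'tool', content: [{ type: 'tool-approval-response', approvalId: 'appr_123' }] },
];
console.log(countApprovalResponses(samplePrompt)); // 1
```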

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

@tigent tigent bot added ai/core core functions like generateText, streamText, etc. Provider utils, and provider spec. bug Something isn't working as documented labels Apr 9, 2026
