
fix(ai): skip passing invalid JSON inputs to response messages#14280

Merged
aayush-kapoor merged 2 commits into main from aayush/invalid-tool-err
Apr 9, 2026

Conversation

@aayush-kapoor
Collaborator

Background

If a model returns invalid input for a tool call, the AI SDK properly attaches it as a tool error and sends it back to the model.
However, when the message history is sent back, the tool-input field still references the broken JSON input, which breaks any future model response (observed with Anthropic as `invalid_request_error`).

#13892

Summary

When building the response messages, if a tool call's input is invalid (i.e. not parseable JSON), we pass an empty `{}` back as the tool input so that the model response doesn't break.
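The guard can be sketched like this (a minimal illustration of the idea; `safeToolInput` is a hypothetical helper name, not the SDK's actual internal code):

```typescript
// Sketch: when converting a tool call into a response message, fall back
// to an empty object if the model-produced input is not parseable JSON.
function safeToolInput(rawInput: string): unknown {
  try {
    return JSON.parse(rawInput);
  } catch {
    // Invalid JSON from the model: substitute {} so the provider
    // accepts the replayed message history instead of rejecting it.
    return {};
  }
}

console.log(safeToolInput('{ city: San Francisco, }')); // → {} (fallback)
console.log(safeToolInput('{"city":"San Francisco"}')); // parsed object
```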

Manual Verification

Verified by running the repro below.

repro:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText, isStepCount, tool } from 'ai';
import { MockLanguageModelV3 } from 'ai/test';
import { z } from 'zod';
import { run } from '../../lib/run';

run(async () => {
  // Turn 1: Mock LLM returns a tool call with invalid JSON args.
  const result1 = await generateText({
    model: new MockLanguageModelV3({
      doGenerate: async () => ({
        warnings: [],
        usage: {
          inputTokens: {
            total: 10,
            noCache: 10,
            cacheRead: undefined,
            cacheWrite: undefined,
          },
          outputTokens: {
            total: 20,
            text: 20,
            reasoning: undefined,
          },
        },
        finishReason: { raw: undefined, unified: 'tool-calls' },
        content: [
          {
            type: 'tool-call',
            toolCallType: 'function',
            toolCallId: 'call-1',
            toolName: 'weather',
            input: `{ city: San Francisco, }`,
          },
        ],
      }),
    }),
    tools: {
      weather: tool({
        inputSchema: z.object({ city: z.string() }),
        execute: async ({ city }) => `Sunny in ${city}`,
      }),
    },
    prompt: 'What is the weather in San Francisco?',
    stopWhen: isStepCount(1),
  });

  // Turn 2: Send the (potentially poisoned) messages to real Anthropic API.
  console.log('Turn 2: round-trip to Anthropic');

  try {
    const result2 = await generateText({
      model: anthropic('claude-sonnet-4-20250514'),
      messages: [
        { role: 'user', content: 'What is the weather in San Francisco?' },
        ...result1.response.messages,
      ],
      tools: {
        weather: tool({
          inputSchema: z.object({ city: z.string() }),
          execute: async ({ city }) => `Sunny in ${city}`,
        }),
      },
      stopWhen: isStepCount(4),
    });

    console.log(`Anthropic responded: "${result2.text.slice(0, 200)}"`);
    console.log(
      '\nRequest body:',
      JSON.stringify(result2.request.body, null, 2),
    );
    console.log(
      '\nResponse body:',
      JSON.stringify(result2.response, null, 2),
    );
  } catch (error: any) {
    console.error('Anthropic rejected the request!');
    console.error(`  Status: ${error.statusCode ?? 'unknown'}`);
    console.error(`  Message: ${JSON.stringify(error, null, 2)}`);
  }
});
```

Before: the request fails with the API error `Input should be a valid dictionary`.
After: the model responds with the weather.
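Illustratively, the fix changes the replayed tool-call input in the Turn 2 request body roughly as follows (a hand-written reconstruction based on the error message above, not captured output):

```json
{
  "before": { "type": "tool_use", "id": "call-1", "name": "weather",
              "input": "{ city: San Francisco, }" },
  "after":  { "type": "tool_use", "id": "call-1", "name": "weather",
              "input": {} }
}
```

Anthropic requires `input` to be a JSON object, so the unparseable string is what triggered `Input should be a valid dictionary`.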

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Related Issues

fixes #13892

@tigent tigent bot added ai/core core functions like generateText, streamText, etc. Provider utils, and provider spec. ai/provider related to a provider package. Must be assigned together with at least one `provider/*` label bug Something isn't working as documented provider/anthropic Issues related to the @ai-sdk/anthropic provider reproduction provided labels Apr 9, 2026
@aayush-kapoor aayush-kapoor added the backport Admins only: add this label to a pull request in order to backport it to the prior version label Apr 9, 2026
@aayush-kapoor aayush-kapoor merged commit 2add429 into main Apr 9, 2026
21 checks passed
@aayush-kapoor aayush-kapoor deleted the aayush/invalid-tool-err branch April 9, 2026 19:32
vercel-ai-sdk bot pushed a commit that referenced this pull request Apr 9, 2026
@vercel-ai-sdk vercel-ai-sdk bot removed the backport Admins only: add this label to a pull request in order to backport it to the prior version label Apr 9, 2026
@vercel-ai-sdk
Contributor

vercel-ai-sdk bot commented Apr 9, 2026

✅ Backport PR created: #14281

vercel-ai-sdk bot added a commit that referenced this pull request Apr 9, 2026
…ges (#14281)

This is an automated backport of #14280 to the release-v6.0 branch. FYI
@aayush-kapoor

Co-authored-by: Aayush Kapoor <83492835+aayush-kapoor@users.noreply.github.com>
@vercel-ai-sdk
Contributor

vercel-ai-sdk bot commented Apr 9, 2026

🚀 Published in:

| Package | Version |
| --- | --- |
| ai | 7.0.0-beta.77 |
| @ai-sdk/angular | 3.0.0-beta.77 |
| @ai-sdk/langchain | 3.0.0-beta.77 |
| @ai-sdk/llamaindex | 3.0.0-beta.77 |
| @ai-sdk/otel | 1.0.0-beta.23 |
| @ai-sdk/react | 4.0.0-beta.77 |
| @ai-sdk/rsc | 3.0.0-beta.78 |
| @ai-sdk/svelte | 5.0.0-beta.77 |
| @ai-sdk/vue | 4.0.0-beta.77 |

Development

Successfully merging this pull request may close these issues.

Handle invalid JSON tool inputs