@serialx serialx commented Sep 22, 2025

In the case of Anthropic Claude with extended thinking, this ensures that existing text content is properly converted to content array format before appending thinking blocks, preventing overwriting of original assistant message content.

This happens in the rare case where a single response contains output text, a thinking block, and a tool call together. The error message looks like this:

```
litellm.BadRequestError: BedrockException - {"message":"The model returned the following errors: messages.1.content.49: `thinking` or `redacted_thinking` blocks in the latest assistant message cannot be modified. These blocks must remain as they were in the original response."}
```

The error is misleading: the actual cause is that we didn't provide the text content alongside the thinking blocks.

This rarely happens (about 0.5% of runs), but in our agent evaluation suite it occurs when handing off to a reflection-style agent.
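A minimal sketch of the conversion described above. The function name and message shapes are illustrative, not the actual code from this PR: the point is that a plain-string `content` must first be wrapped as a `{"type": "text", ...}` block so that appending thinking blocks extends the array instead of replacing the original text.

```python
def append_thinking_blocks(message: dict, thinking_blocks: list[dict]) -> dict:
    """Append thinking blocks to an assistant message without clobbering its text.

    Hypothetical helper: if `content` is a plain string (the common
    chat-completions shape), convert it to the content-array format first;
    otherwise the thinking blocks would overwrite the original text.
    """
    content = message.get("content")
    if isinstance(content, str):
        # "Here is my answer." -> [{"type": "text", "text": "Here is my answer."}]
        content = [{"type": "text", "text": content}] if content else []
    elif content is None:
        content = []
    # Note: the required ordering of thinking vs. text blocks depends on the
    # provider; what matters here is that the text block is preserved.
    message["content"] = content + thinking_blocks
    return message
```

With this in place, a response that carries (output text, thinking block, tool call) keeps its text block when the thinking blocks are re-attached, so the provider sees the assistant turn as it was originally returned.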

@seratch seratch merged commit a425859 into openai:main Sep 22, 2025
5 checks passed