Replies: 1 comment
Great news! Here's an example of how it now works:

```python
from langchain_aws import ChatBedrock

llm = ChatBedrock(model_id='anthropic.claude-3-haiku-20240307-v1:0')
llm.invoke('Hi!').response_metadata
```

The output now includes the stop_reason.
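If you want to branch on the new field, here is a minimal sketch, assuming Bedrock surfaces Anthropic's standard stop reasons such as 'end_turn', 'stop_sequence', and 'max_tokens':

```python
from langchain_aws import ChatBedrock

llm = ChatBedrock(model_id='anthropic.claude-3-haiku-20240307-v1:0')
msg = llm.invoke('Hi!')

# 'max_tokens' signals the reply was cut off at the token limit
# (assumed value, mirroring the Anthropic API's stop reasons).
if msg.response_metadata.get('stop_reason') == 'max_tokens':
    print('Response was truncated; consider raising max_tokens.')
```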
Feature request
Currently, Bedrock (Anthropic) responses lack the stop_reason attribute that native Anthropic responses include. Without it, there is no way to tell whether a response was cut off because it hit the max_token limit, so truncated responses go undetected.
Motivation
When using models like Claude 3, responses often exceed the token limit. Adding the stop_reason attribute would allow for better detection and handling of these cases, improving error management and user experience.
Reference:
For more details, refer to the response metadata documentation.
Proposal (If applicable)
Include the stop_reason attribute in Bedrock (Anthropic) responses to indicate whether the response was stopped due to reaching the max_token limit, as sketched below.
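For illustration, a hedged sketch of the error handling this would enable; the `invoke_with_truncation_retry` helper, the retry policy, and the `max_tokens` values are hypothetical, not part of the proposal:

```python
from langchain_aws import ChatBedrock

def invoke_with_truncation_retry(prompt: str, max_tokens: int = 256):
    """Invoke Claude via Bedrock; retry once with a doubled token
    budget if the response was cut off at the max_token limit."""
    for budget in (max_tokens, max_tokens * 2):
        llm = ChatBedrock(
            model_id='anthropic.claude-3-haiku-20240307-v1:0',
            model_kwargs={'max_tokens': budget},
        )
        msg = llm.invoke(prompt)
        # A stop_reason other than 'max_tokens' means the model
        # finished on its own, so the message is complete.
        if msg.response_metadata.get('stop_reason') != 'max_tokens':
            return msg
    return msg  # still truncated after the retry; caller decides
```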