Description
Before creating a new issue, please confirm:
- I have searched for duplicate or closed issues and discussions.
- I have tried disabling all browser extensions or using a different browser.
- I have tried deleting the `node_modules` folder and reinstalling my dependencies.
- I have read the guide for submitting bug reports.
On which framework/platform are you having an issue?
React
Which UI component?
Other
How is your app built?
Vite
What browsers are you seeing the problem on?
Chrome
Which region are you seeing the problem in?
eu-west-2
Please describe your bug.
When using the `useAIConversation` hook with streaming responses, the assistant's message content suddenly shrinks and disappears from the UI mid-stream. The message is actively streaming and growing, then it unexpectedly shrinks, making the in-progress response vanish from the UI.
What's the expected behaviour?
The assistant's message should continue streaming and growing until the response is complete. The message content should never shrink or disappear from the UI during streaming.
Help us reproduce the bug!
Backend Setup
- Create an `a.conversation()` with:
  - AI Model: Claude 3.7 Sonnet (or similar)
  - Custom tools configured (e.g., data retrieval tools that return JSON)
  - Custom response components registered via the `responseComponents` prop
  - An `aiContext` callback that provides data from tool results to the AI

Example backend schema structure:

```ts
Chat: a.conversation({
  aiModel: a.ai.model("Claude 3.7 Sonnet"),
  systemPrompt: "...",
  tools: [
    a.ai.dataTool({
      name: "fetchData",
      description: "Fetches data from database",
      query: a.ref("fetchDataQuery"),
    }),
  ],
})
```

Frontend Setup
- Use the `useAIConversation` hook with custom response components and an `aiContext` callback:

```tsx
const [{ data: { messages } }, handleSendMessage] = useAIConversation("Chat", {
  id: conversationId,
});

// aiContext callback that provides tool result data
const aiContext = useCallback(() => {
  return {
    latestData: toolResultData, // Data from previous tool call
  };
}, [toolResultData]);

const responseComponents = {
  CustomCard: {
    component: SomeCustomComponent,
    description: "Displays custom data",
    props: { /* ... */ },
  },
};

<AIConversation
  messages={messages}
  aiContext={aiContext}
  responseComponents={responseComponents}
  // ... other props
/>
```

Reproduction Steps
- Start a conversation and send an initial greeting message
- Send a message that triggers a tool call to fetch data (e.g., "Show me all my records")
  - The AI will call the tool
  - The tool returns data
  - The AI responds with a custom response component showing the data
  - This data is now in `aiContext`
- Important: Send one or more subsequent messages asking the AI to analyze or compare the data from the tool call (e.g., "How does item A compare to item B?", "Which one is better?")
  - These messages should require long responses (200+ words)
  - These messages should use the `aiContext` data from the previous tool call
  - The bug may not occur on the immediate next message after the tool call - keep sending follow-up questions about the same data
- Typically on the 2nd, 3rd, or 4th message after the tool call, when the AI is streaming a long response that references the `aiContext` data, the response will suddenly shrink or disappear mid-stream (a scripted version of this sequence is sketched below)
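For concreteness, a driver along these lines reproduces the sequence. This is only a sketch: the prompt wording is illustrative, and `handleSendMessage` is the function returned by `useAIConversation` as in the full code snippet below.

```tsx
// Hypothetical reproduction driver (sketch only; exact prompt wording is illustrative).
// `handleSendMessage` comes from useAIConversation, as in the full snippet below.

// 1. Trigger the tool call that populates aiContext.
handleSendMessage({ content: [{ text: "Show me all my records" }] });

// 2. Once the custom response component has rendered (and aiContext holds the
//    tool result data), send follow-ups that force long answers over that data.
handleSendMessage({ content: [{ text: "How does item A compare to item B?" }] });
handleSendMessage({
  content: [{ text: "Which one is better? Give a detailed comparison." }],
});
// The shrink typically appears while one of these later responses is streaming.
```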
Key factors that seem to trigger the bug:
- A tool call that populates `aiContext` with data
- Subsequent messages that require the AI to use that `aiContext` data
- Long streaming responses (500+ tokens)
- Not necessarily the immediate next message - may happen on the 2nd-4th subsequent message
- More likely when the response includes reasoning/comparison using the context data
Expected Behavior
The assistant's message should continue streaming and growing until the response is complete. The message content should never shrink or be replaced with an earlier version during streaming.
Actual Behavior
- Message streams normally, content growing: e.g., 1188 → 1225 → 1238 → ... → 1513 characters
- Suddenly at some point during streaming, content shrinks: 1513 → 1229 characters (loss of 284 characters)
- The message count remains the same (e.g., 4 messages), but the assistant's response that is being streamed completely disappears from the UI.
- The timing varies (sometimes 2nd message, sometimes 3rd or 4th), but typically occurs on messages that involve tool calls or longer responses.
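A minimal sketch of how this kind of shrinkage can be observed from the hook's output follows. The `useShrinkDetector` hook is hypothetical (not part of the app code below); it assumes messages expose `content` parts with an optional `text` field, consistent with the snippet below.

```tsx
// Hypothetical diagnostic hook (sketch): warns whenever the total streamed
// text across all messages shrinks between renders.
import { useEffect, useRef } from "react";

type TextPart = { text?: string };

function useShrinkDetector(messages: { content: TextPart[] }[]) {
  const prevTotalRef = useRef(0);

  useEffect(() => {
    // Total characters across all text parts of all messages.
    const total = messages
      .flatMap((m) => m.content)
      .reduce((sum, part) => sum + (part.text?.length ?? 0), 0);

    if (total < prevTotalRef.current) {
      console.warn(
        `[shrink-detector] streamed content shrank: ${prevTotalRef.current} -> ${total} chars`
      );
    }
    prevTotalRef.current = total;
  }, [messages]);
}
```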
Code Snippet

```tsx
// Note: This is a simplified/anonymized version of the actual implementation
// Key patterns demonstrated:
// 1. useAIConversation with custom response components
// 2. aiContext callback providing tool result data (array of items for comparison)
// 3. Message stabilization to prevent unnecessary re-renders
// 4. Wrapper components that capture data when rendered
// 5. The bug occurs when subsequent messages use aiContext data in long streaming responses
import { generateClient } from "aws-amplify/api";
import {
  AIConversation,
  createAIHooks,
  type SendMesageParameters, // (sic) - the package's exported name for this type
} from "@aws-amplify/ui-react-ai";
import { Avatar, Card, Flex, Text } from "@aws-amplify/ui-react";
import { useCallback, useRef, useState, useMemo, useEffect, memo } from "react";
import { useParams } from "react-router-dom"; // assuming react-router for routing
import ReactMarkdown from "react-markdown";
import type { Schema } from "../amplify/data/resource";
// App-local components not shown here: Layout, MultipleDataCard,
// SingleDataCard, DataRecordCard.

const client = generateClient<Schema>({ authMode: "userPool" });
const { useAIConversation } = createAIHooks(client);
// Example type for data items returned by tool
interface DataItem {
itemId: string;
name: string;
value: number;
currency: string;
category: string;
details: any[];
}
export default function Chat() {
const { conversationId } = useParams<{ conversationId: string }>();
return (
<ChatConversation key={conversationId} conversationId={conversationId} />
);
}
// Wrapper component that captures multiple data items from tool results
const MultipleDataCardWithCapture = memo(
function MultipleDataCardWithCapture({
dataCollectionId,
onDataItemsRendered,
...props
}: {
dataCollectionId: string;
onDataItemsRendered: (items: DataItem[]) => void;
}) {
return (
<MultipleDataCard
dataCollectionId={dataCollectionId}
onDataLoaded={onDataItemsRendered}
{...props}
/>
);
},
(prevProps, nextProps) => {
return (
prevProps.dataCollectionId === nextProps.dataCollectionId &&
prevProps.onDataItemsRendered === nextProps.onDataItemsRendered
);
}
);
// Wrapper component that captures single data item
const SingleDataCardWithCapture = memo(
function SingleDataCardWithCapture({
itemId,
name,
value,
category,
details,
currency,
onDataItemRendered,
...props
}: {
itemId: string;
name?: string;
value?: number;
category?: string;
details?: any[];
currency?: string;
onDataItemRendered: (item: DataItem) => void;
}) {
// When the component mounts or updates, capture the single data item
useEffect(() => {
const item = {
itemId,
name: name!,
value: value!,
category: category!,
details: details!,
currency: currency || "USD",
} as DataItem;
// Update context to contain only this single item
onDataItemRendered(item);
}, [itemId, name, value, category, details, currency, onDataItemRendered]);
return (
<SingleDataCard
itemId={itemId}
name={name}
value={value}
category={category}
details={details}
currency={currency || "USD"}
{...props}
/>
);
},
(prevProps, nextProps) => {
return (
prevProps.itemId === nextProps.itemId &&
prevProps.name === nextProps.name &&
prevProps.value === nextProps.value &&
prevProps.category === nextProps.category &&
prevProps.currency === nextProps.currency &&
prevProps.onDataItemRendered === nextProps.onDataItemRendered
);
}
);
const ChatConversation = memo(function ChatConversation({
conversationId,
}: {
conversationId?: string;
}) {
const [chatName, setChatName] = useState<string | null>(null);
const [latestDataItems, setLatestDataItems] = useState<DataItem[]>([]);
// Use a ref to store latest data items for aiContext to avoid recreating callback
const latestDataItemsRef = useRef<DataItem[]>([]);
const [
{
isLoading,
data: { messages, conversation },
},
handleSendMessage,
] = useAIConversation("Chat", { id: conversationId });
// Stabilize messages array to prevent unnecessary re-renders
// The messages reference changes even when content is the same, causing render loops
// We compare stringified content and only update when it actually changes
const stableMessagesRef = useRef(messages);
const previousStringifiedRef = useRef(JSON.stringify(messages));
const currentStringified = JSON.stringify(messages);
const contentChanged = currentStringified !== previousStringifiedRef.current;
if (contentChanged) {
const prevLen = previousStringifiedRef.current.length;
const currLen = currentStringified.length;
console.log(
"[stableMessages] Messages content changed, creating new stable reference",
JSON.stringify({
prevLength: prevLen,
currLength: currLen,
lengthDiff: currLen - prevLen,
messageCount: messages.length,
})
);
stableMessagesRef.current = messages;
previousStringifiedRef.current = currentStringified;
}
const stableMessages = stableMessagesRef.current;
useEffect(() => {
// Update ref whenever latestDataItems changes
latestDataItemsRef.current = latestDataItems;
}, [latestDataItems]);
const handleNewMessage = useCallback(
(message: SendMesageParameters) => {
// Generate name for first user message if not already named
const shouldGenerateName =
!conversation?.name && messages.length === 0;
handleSendMessage(message);
if (shouldGenerateName) {
client.generations
.ChatNamer({
content: message.content
.map((content) => ("text" in content ? (content.text ?? "") : ""))
.join(""),
})
.then((res) => {
if (res.data?.name && conversation?.id) {
client.conversations.Chat.update({
id: conversation?.id,
name: res.data.name,
});
setChatName(res.data.name);
}
});
}
},
[conversation, handleSendMessage, messages.length]
);
// Create a stable callback that never changes
const onDataItemsRenderedCallback = useCallback((items: DataItem[]) => {
setLatestDataItems(items);
}, []);
// Create a stable callback for single data item rendering
const onDataItemRenderedCallback = useCallback((item: DataItem) => {
// When a single item is shown, update context to contain only that item
setLatestDataItems([item]);
}, []);
// Create a stable wrapper component that uses the stable callback
const MultipleDataCardWrapper = useMemo(() => {
return function Wrapper(props: { dataCollectionId: string }) {
return (
<MultipleDataCardWithCapture
{...props}
onDataItemsRendered={onDataItemsRenderedCallback}
/>
);
};
}, [onDataItemsRenderedCallback]);
// Create a stable wrapper component for single data card
const SingleDataCardWrapper = useMemo(() => {
return function Wrapper(props: {
itemId: string;
name?: string;
value?: number;
category?: string;
details?: any[];
currency?: string;
}) {
return (
<SingleDataCardWithCapture
{...props}
onDataItemRendered={onDataItemRenderedCallback}
/>
);
};
}, [onDataItemRenderedCallback]);
// Create a stable aiContext callback that uses the ref
// This callback never changes, preventing AIConversation from re-rendering
const aiContextCallback = useCallback(() => {
const items = latestDataItemsRef.current;
const value = {
latestDataItems:
items.length > 0
? items.map((item) => ({
itemId: item.itemId,
name: item.name,
value: item.value,
currency: item.currency,
category: item.category,
details: item.details,
}))
: undefined,
};
console.log(
"[aiContextCallback] Called, returning:",
JSON.stringify(value)
);
return value;
}, []); // Empty dependencies - callback never changes
// Memoize messageRenderer to prevent recreation on every render
const messageRenderer = useMemo(() => {
return {
text: ({ text }: { text: string }) => (
<ReactMarkdown>{text}</ReactMarkdown>
),
};
}, []);
// Memoize responseComponents to prevent recreation on every render
const responseComponents = useMemo(() => {
return {
DataRecord: {
component: DataRecordCard,
description: "Used to display a data record to the user",
props: {
recordId: {
type: "string" as const,
required: true,
description: "The unique ID for the record",
},
reference: {
type: "string" as const,
required: true,
description: "The reference for the record",
},
recordType: {
type: "string" as const,
required: true,
description: "The type of the record",
},
status: {
type: "string" as const,
required: true,
description: "The status of the record",
},
metadata: {
type: "object" as const,
required: true,
description: "Metadata about the record",
},
numberOfItems: {
type: "number" as const,
description: "The number of items in the record",
},
},
},
SingleDataCard: {
component: SingleDataCardWrapper,
description: "Used to display a single data item to the user",
props: {
itemId: {
type: "string" as const,
required: true,
description: "The unique ID for the item",
},
name: {
type: "string" as const,
required: true,
description: "The name of the item",
},
value: {
type: "number" as const,
required: true,
description: "The value amount of the item",
},
category: {
type: "string" as const,
required: true,
description: "The category of the item",
},
details: {
type: "array" as const,
required: true,
description: "The details included in the item",
},
currency: {
type: "string" as const,
required: true,
description: "The currency of the item value",
},
},
},
MultipleDataCard: {
component: MultipleDataCardWrapper,
description:
"Used to display multiple data items for a collection by fetching them client-side",
props: {
dataCollectionId: {
type: "string" as const,
required: true,
description: "The collection ID to fetch and display items for",
},
},
},
};
}, [MultipleDataCardWrapper, SingleDataCardWrapper]);
const avatars = useMemo(() => {
return {
ai: {
avatar: <Avatar src="/img/png/avatar.png" />,
username: undefined,
},
user: {
avatar: <></>,
username: undefined,
},
};
}, []);
const welcomeMessage = useMemo(() => {
return !conversationId ? (
<Flex padding="1.5rem 1rem 0rem" direction="column" alignItems="center">
<Card className="welcome-text">
<Text>Welcome! I'm here to help you with your data.</Text>
</Card>
</Flex>
) : undefined;
}, [conversationId]);
return (
<Layout pageName="Chat">
<Flex alignItems="center" justifyContent="center" padding={"0 2rem"}>
{!conversationId && chatName && (
<Text fontSize="large" textAlign="center">
{chatName}
</Text>
)}
{conversationId && (conversation?.name || !isLoading) && (
<Text fontSize="large" textAlign="center">
{conversation?.name ?? "Untitled new chat"}
</Text>
)}
</Flex>
<div
className={
stableMessages.length !== 0 || isLoading ? "has-messages" : ""
}
>
<AIConversation
aiContext={aiContextCallback}
welcomeMessage={welcomeMessage}
messages={stableMessages}
variant="bubble"
isLoading={isLoading}
handleSendMessage={handleNewMessage}
messageRenderer={messageRenderer}
responseComponents={responseComponents}
avatars={avatars}
/>
</div>
</Layout>
);
});
```
Console log output
Additional information and screenshots
Environment
Package Versions:

```json
{
  "@aws-amplify/backend": "1.16.1",
  "@aws-amplify/backend-cli": "1.8.0",
  "aws-amplify": "6.15.8",
  "@aws-amplify/ui-react": "6.13.1",
  "@aws-amplify/ui-react-ai": "1.5.0"
}
```

Runtime:
- Node.js: 20.19.5
- React: 19.2.0
- Browser: Chrome/Edge (latest)
AWS Configuration:
- Region: eu-west-2
- AI Model: Claude 3.7 Sonnet
- Lambda Runtime: nodejs:20.v88
Analysis
Based on our investigation:
- Not a Lambda timeout issue - CloudWatch logs confirm all Lambda invocations complete successfully in ~6-7 seconds
- The bug is in the frontend - the `useAIConversation` hook is receiving shrunk/stale message data from somewhere
- Data source issue - the raw `messages` array returned by `useAIConversation` contains the shrunk data, suggesting an issue with:
  - AppSync real-time subscriptions
  - DynamoDB data fetching/caching
  - The hook's internal state management during streaming
Impact
- Users see responses disappear mid-stream, creating a broken UX
- Makes the AI conversation feature unreliable for production use
- Issue is reproducible but timing varies based on response complexity
Workarounds Attempted
- ✅ Updated to latest package versions (6.15.8, 6.13.1) - Issue persists
- ✅ Implemented message stabilization logic to prevent unnecessary re-renders - Issue persists (the shrunk content appears in the raw hook output, so this is not a React render issue)
- ✅ Verified backend Lambda functions work correctly - Confirmed working
Requested Action
Please investigate the `useAIConversation` hook's internal state management during streaming, particularly:
- How messages are updated from AppSync/DynamoDB during active streaming
- Whether there's a race condition where older message versions overwrite newer ones
- The real-time subscription mechanism for conversation message updates
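To illustrate the suspected race, a client-side guard along these lines hides the symptom by refusing to render a message version shorter than one already seen for the same message id. This is a hypothetical stopgap sketch, not a fix for the underlying issue:

```tsx
// Hypothetical guard (sketch only): during an active stream, treat an incoming
// message whose text is shorter than a previously seen version of the same
// message as a stale/out-of-order event, and keep the longer version.
import { useRef } from "react";

type Part = { text?: string };
type Message = { id: string; content: Part[] };

const textLength = (m: Message) =>
  m.content.reduce((sum, p) => sum + (p.text?.length ?? 0), 0);

function useMonotonicMessages(messages: Message[]): Message[] {
  const longestSeenRef = useRef(new Map<string, Message>());

  return messages.map((m) => {
    const seen = longestSeenRef.current.get(m.id);
    if (seen && textLength(seen) > textLength(m)) {
      return seen; // incoming update is shorter: keep the longer version
    }
    longestSeenRef.current.set(m.id, m);
    return m;
  });
}
```

If such a guard makes the symptom disappear, that would be consistent with stale subscription events overwriting newer streaming state.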
Additional Notes
- This appears to be related to, but distinct from, issue #2356 (Lambda timeout), as our Lambdas complete successfully
- The issue may be in the `@aws-amplify/ui-react-ai` package's conversation hook implementation
- The bug seems specifically related to the interaction between:
  - Tool calls that populate data
  - `aiContext` providing that data to subsequent messages
  - Long streaming responses using that context data
- I can create a minimal reproduction repository if needed - please let me know if that would be helpful
- Happy to provide additional logs, test cases, or answer any questions about the setup