
Commit ddc66cc

Add documentation for chat processing modes and typing indicators
This change documents the new chat processing features added in PR #685:

- Chat processing modes (sequential vs batch)
- Typing-aware batching with configurable timeouts
- New onInputChange handler for typing indicators
- Configuration properties: chatProcessingMode, chatIdleTimeout, chatTypingTimeout

Includes:

- API reference updates for AIChatAgent class
- Detailed examples for each processing mode
- Client-side React integration examples
- Guidance on when to use each mode

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 3f88a70 commit ddc66cc

File tree

1 file changed: +145 / -0 lines changed


src/content/docs/agents/api-reference/agents-api.mdx

Lines changed: 145 additions & 0 deletions
@@ -801,6 +801,17 @@ class AIChatAgent<Env = unknown, State = unknown> extends Agent<Env, State> {
  // Array of chat messages for the current conversation
  messages: Message[];

  // Chat processing mode: "sequential" (default) or "batch"
  chatProcessingMode: "sequential" | "batch";

  // Idle timeout in milliseconds for batch mode (default: 5000ms).
  // How long to wait after a message is sent before processing.
  chatIdleTimeout: number;

  // Typing timeout in milliseconds for batch mode (default: 1500ms).
  // How long to wait after the user stops typing before processing.
  chatTypingTimeout: number;

  // Handle incoming chat messages and generate a response.
  // onFinish is called when the response is complete.
  async onChatMessage(
@@ -865,6 +876,138 @@ class CustomerSupportAgent extends AIChatAgent<Env> {

</TypeScriptExample>

#### Chat Processing Modes

`AIChatAgent` supports two modes for handling multiple incoming messages:

- **`"sequential"`** (default): Process each message one by one. Messages are queued and handled in order, each message receives its own response, and the next message is not processed until the previous response has fully completed.
- **`"batch"`**: Combine multiple rapid messages into one response. Debounce timing and optional typing indicators batch messages together, which suits conversational UX where users send several short messages in quick succession.
##### Sequential Mode

Sequential mode is the default behavior. Each message is processed independently and receives its own response:

<TypeScriptExample>

```ts
import { AIChatAgent } from "agents/ai-chat-agent";

class MyAgent extends AIChatAgent<Env> {
  // Sequential mode is the default, so setting this explicitly is optional
  chatProcessingMode = "sequential";

  async onChatMessage(onFinish) {
    // Each message is processed one at a time:
    // User sends: "Hello"
    // Agent responds: "Hi there!"
    // User sends: "How are you?"
    // Agent responds: "I'm doing well, thanks for asking!"
  }
}
```

</TypeScriptExample>
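
The queuing behavior can be sketched as a promise chain that awaits each handler before starting the next. This is a minimal illustration only, not the library's internal implementation; `SequentialQueue` and `demo` are hypothetical names:

<TypeScriptExample>

```ts
// Sketch of sequential processing: each message's handler must finish
// before the next queued message starts, even if messages arrive rapidly.
class SequentialQueue {
  public responses: string[] = [];
  private tail: Promise<void> = Promise.resolve();

  constructor(private handle: (msg: string) => Promise<string>) {}

  // Chain each message onto the completion of the previous one
  enqueue(message: string): Promise<void> {
    this.tail = this.tail.then(async () => {
      this.responses.push(await this.handle(message));
    });
    return this.tail;
  }
}

async function demo(): Promise<string[]> {
  const q = new SequentialQueue(async (msg) => `reply to: ${msg}`);
  // Fire several messages without waiting; they still process in order
  q.enqueue("Hello");
  q.enqueue("How are you?");
  await q.enqueue("What can you help me with?");
  return q.responses; // three replies, in send order
}
```

</TypeScriptExample>

Awaiting the last `enqueue` awaits the whole chain, which is why `demo` sees all three responses.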

##### Batch Mode

Batch mode combines multiple rapid messages into a single response. This is useful when users type multiple short messages in quick succession:

<TypeScriptExample>

```ts
import { AIChatAgent } from "agents/ai-chat-agent";

class MyAgent extends AIChatAgent<Env> {
  // Enable batch mode
  chatProcessingMode = "batch";

  // Optional: configure the batching timeout (default: 5000ms)
  chatIdleTimeout = 3000; // 3 seconds

  async onChatMessage(onFinish) {
    // Multiple messages are batched together:
    // User sends: "Hello"
    // User sends: "How are you?"
    // User sends: "What can you help me with?"
    // Agent responds once to all three messages combined
  }
}
```

</TypeScriptExample>
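
The idle-timeout debounce can be sketched independently of the library. This is a minimal illustration with explicit timestamps so the logic is easy to follow; `MessageBatcher` and its callbacks are hypothetical names, not part of the `agents` API:

<TypeScriptExample>

```ts
// Sketch of idle-timeout batching: messages are collected, and the batch
// flushes once no new message has arrived for `idleTimeout` milliseconds.
class MessageBatcher {
  private pending: string[] = [];
  private lastMessageAt = -Infinity;

  constructor(
    private idleTimeout: number,
    private onFlush: (batch: string[]) => void
  ) {}

  add(message: string, now: number): void {
    this.pending.push(message);
    this.lastMessageAt = now; // each message resets the idle clock
  }

  // Called from a timer in a real system; flushes once the queue is idle
  tick(now: number): void {
    if (this.pending.length === 0) return;
    if (now - this.lastMessageAt >= this.idleTimeout) {
      const batch = this.pending;
      this.pending = [];
      this.onFlush(batch);
    }
  }
}

// Usage: three rapid messages flush as one batch after 5s of silence
const flushed: string[][] = [];
const batcher = new MessageBatcher(5000, (b) => flushed.push(b));
batcher.add("Hello", 0);
batcher.add("How are you?", 800);
batcher.add("What can you help me with?", 1600);
batcher.tick(3000); // only 1400ms idle: no flush yet
batcher.tick(6600); // 5000ms idle reached: one combined flush
```

</TypeScriptExample>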

##### Typing-Aware Batching

For even better UX, you can use typing indicators to intelligently delay responses until the user stops typing. This provides two-phase timing:

- **`chatIdleTimeout`**: How long to wait for the user to start typing after sending a message (default: 5000ms)
- **`chatTypingTimeout`**: How long to wait after the user stops typing before processing (default: 1500ms)

<TypeScriptExample>

```ts
import { AIChatAgent } from "agents/ai-chat-agent";

class MyAgent extends AIChatAgent<Env> {
  chatProcessingMode = "batch";
  chatIdleTimeout = 5000; // 5 seconds to start typing
  chatTypingTimeout = 1500; // 1.5 seconds after typing stops

  async onChatMessage(onFinish) {
    // The agent waits for the user to finish typing before responding,
    // which creates a natural conversational flow
  }
}
```

</TypeScriptExample>
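
The two-phase timing can be expressed as a single deadline computation. This is a sketch under the assumption that typing indicators carry timestamps; `batchDeadline` is a hypothetical helper, not part of the API:

<TypeScriptExample>

```ts
// Computes when a pending batch should be processed.
// Phase 1: wait `idleTimeout` ms after the last message for typing to start.
// Phase 2: if typing was seen after the last message, wait until
//          `typingTimeout` ms after the most recent typing event instead.
function batchDeadline(
  lastMessageAt: number,
  lastTypingAt: number | null, // null if no typing since the last message
  idleTimeout: number,
  typingTimeout: number
): number {
  if (lastTypingAt === null || lastTypingAt <= lastMessageAt) {
    return lastMessageAt + idleTimeout;
  }
  return lastTypingAt + typingTimeout;
}

// No typing after the message at t=1000: process at 1000 + 5000 = 6000
const noTyping = batchDeadline(1000, null, 5000, 1500);

// Typing seen at t=3000: process at 3000 + 1500 = 4500 instead
const withTyping = batchDeadline(1000, 3000, 5000, 1500);
```

</TypeScriptExample>

Each new typing event pushes the deadline forward, so the agent keeps waiting as long as the user keeps typing.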

To enable typing indicators on the client side, use the `onInputChange` handler from `useAgentChat`:

<TypeScriptExample>

```tsx
import { useAgentChat } from "agents/ai-react";
import { useAgent } from "agents/react";
import { useState, type ChangeEvent } from "react";

function ChatInterface() {
  const agent = useAgent({ agent: "my-agent", name: "user-123" });

  const {
    messages,
    sendMessage,
    onInputChange // New: typing indicator handler
  } = useAgentChat({ agent });

  const [input, setInput] = useState("");

  const handleInputChange = (e: ChangeEvent<HTMLInputElement>) => {
    setInput(e.target.value);
    // Send a typing indicator to the agent
    onInputChange(e);
  };

  return (
    <div>
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Type your message..."
      />
    </div>
  );
}
```

</TypeScriptExample>
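
Calling the handler on every keystroke can generate many indicator messages over the connection; a common mitigation is to throttle it. A minimal sketch with an injected clock for testability (the `throttle` helper here is hand-rolled, not provided by the `agents` package):

<TypeScriptExample>

```ts
// Wraps a handler so it fires at most once per `intervalMs`,
// using a caller-supplied clock instead of Date.now() for easy testing.
function throttle<T>(
  fn: (arg: T) => void,
  intervalMs: number,
  now: () => number
): (arg: T) => void {
  let lastFired = -Infinity;
  return (arg: T) => {
    const t = now();
    if (t - lastFired >= intervalMs) {
      lastFired = t;
      fn(arg);
    }
  };
}

// Usage: only keystrokes at least 500ms apart reach the handler
let clock = 0;
const sent: string[] = [];
const throttledTyping = throttle<string>((v) => sent.push(v), 500, () => clock);
const events: Array<[number, string]> = [[0, "h"], [100, "he"], [600, "hel"], [700, "hell"]];
for (const [t, key] of events) {
  clock = t;
  throttledTyping(key);
}
// sent is ["h", "hel"]: the 100ms and 700ms keystrokes were suppressed
```

</TypeScriptExample>

In a real component you would wrap `onInputChange` the same way, using `Date.now` as the clock.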

**When to use each mode:**

- Use **sequential mode** when each message should be treated independently, such as commands, queries, or when you want to provide feedback for every message.
- Use **batch mode** when users tend to send multiple related messages quickly, such as in conversational chat, brainstorming sessions, or when users are providing multi-part information.
- Use **typing-aware batching** for the most natural conversational experience, where the agent waits for the user to finish their complete thought before responding.

### Chat Agent React API

#### useAgentChat
@@ -905,6 +1048,8 @@ function useAgentChat(options: UseAgentChatOptions): {
  setInput: React.Dispatch<React.SetStateAction<string>>;
  // Handle input changes
  handleInputChange: (e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>) => void;
  // Send typing indicators to the agent (for batch mode with typing-aware batching)
  onInputChange: (e: React.ChangeEvent<HTMLInputElement | HTMLTextAreaElement>) => void;
  // Submit the current input
  handleSubmit: (event?: { preventDefault?: () => void }, chatRequestOptions?: any) => void;
  // Additional metadata
