diff --git a/fern/chat/chat.mdx b/fern/chat/chat.mdx
deleted file mode 100644
index ac3394a95..000000000
--- a/fern/chat/chat.mdx
+++ /dev/null
@@ -1,501 +0,0 @@
----
-title: Chat API
-subtitle: Build text-based conversations with Vapi assistants using streaming and non-streaming chat
----
-
-## Overview
-
-The Chat API enables text-based conversations with Vapi assistants. Unlike voice calls, chats provide a synchronous or streaming text interface perfect for integrations with messaging platforms, web interfaces, or any text-based communication channel.
-
-Key features:
-- **Non-streaming responses** (default) for simple request-response patterns
-- **Streaming responses** for real-time, token-by-token output
-- **Context preservation** through sessions or previous chats
-- **OpenAI compatibility** through the `/chat/responses` endpoint
-
-## Key concepts
-
-- **`messages`** - Conversation history that provides context for the chat
-- **`input`** - The user's input to the chat (required)
-- **`output`** - The response generated by the assistant
-
-## Quick start
-
-
-
- Send a POST request to create a new chat conversation:
-
- ```bash
- curl -X POST https://api.vapi.ai/chat \
- -H "Authorization: Bearer YOUR_API_KEY" \
- -H "Content-Type: application/json" \
- -d '{
- "assistantId": "your-assistant-id",
- "input": "Hello, how can you help me today?"
- }'
- ```
-
-
-
- The API returns the assistant's response in the `output` field:
-
- ```json
- {
- "id": "chat_abc123",
- "assistantId": "your-assistant-id",
- "messages": [
- {
- "role": "user",
- "content": "Hello, how can you help me today?"
- }
- ],
- "output": [
- {
- "role": "assistant",
- "content": "Hello! I'm here to help. What would you like to know?"
- }
- ],
- "createdAt": "2024-01-15T09:30:00Z",
- "updatedAt": "2024-01-15T09:30:00Z"
- }
- ```
-
-
-
- Use the `previousChatId` to maintain context:
-
- ```bash
- curl -X POST https://api.vapi.ai/chat \
- -H "Authorization: Bearer YOUR_API_KEY" \
- -H "Content-Type: application/json" \
- -d '{
- "previousChatId": "chat_abc123",
- "input": "Tell me about your features"
- }'
- ```
-
-
-
-## Non-streaming chat
-
-Non-streaming chat returns the complete response after the assistant finishes processing. This is ideal for simple integrations where you don't need real-time output.
-
-### Basic example
-
-```javascript
-const response = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- assistantId: 'your-assistant-id',
- input: 'What is the weather like today?'
- })
-});
-
-const chat = await response.json();
-console.log(chat.output[0].content); // Assistant's response
-```
-
-### Response structure
-
-```javascript
-{
- "id": "chat_123456",
- "orgId": "org_789012",
- "assistantId": "assistant_345678",
- "name": "Weather Chat",
- "sessionId": "session_901234",
- "messages": [
- {
- "role": "user",
- "content": "What is the weather like today?"
- }
- ],
- "output": [
- {
- "role": "assistant",
- "content": "I'd be happy to help with weather information, but I'll need to know your location first. What city are you in?"
- }
- ],
- "createdAt": "2024-01-15T09:30:00Z",
- "updatedAt": "2024-01-15T09:30:01Z"
-}
-```
-
-### With custom assistant configuration
-
-```javascript
-const response = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- input: 'Help me plan a trip to Paris',
- assistant: {
- model: {
- provider: 'openai',
- model: 'gpt-4',
- temperature: 0.7,
- messages: [
- {
- role: 'system',
- content: 'You are a helpful travel assistant specializing in European destinations.'
- }
- ]
- },
- voice: {
- provider: 'azure',
- voiceId: 'andrew'
- }
- }
- })
-});
-```
-
-## Streaming chat
-
-Streaming chat provides real-time, token-by-token responses. Enable streaming by setting `stream: true` in your request (default is `false`).
-
-### Basic streaming example
-
-```javascript
-const response = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- assistantId: 'your-assistant-id',
- input: 'Write me a short story',
- stream: true
- })
-});
-
-const reader = response.body.getReader();
-const decoder = new TextDecoder();
-
-while (true) {
- const { done, value } = await reader.read();
- if (done) break;
-
- const chunk = decoder.decode(value);
- const lines = chunk.split('\n').filter(line => line.trim());
-
- for (const line of lines) {
- if (line.startsWith('data: ')) {
- const data = JSON.parse(line.slice(6));
- // Stream events have format: { id, path, delta }
- if (data.path && data.delta) {
- process.stdout.write(data.delta); // Print each token as it arrives
- }
- }
- }
-}
-```
-
-### Streaming response format
-
-When streaming is enabled, the response is sent as Server-Sent Events (SSE). Each event contains:
-
-```javascript
-{
- "id": "stream_123456",
- "path": "chat.output[0].content",
- "delta": "Hello"
-}
-```
-
-The `path` indicates where in the response structure the content is being appended, following the format `chat.output[{index}].content`.
-
-### Handling streaming events
-
-```javascript
-async function streamChat(input) {
- const response = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- assistantId: 'your-assistant-id',
- input: input,
- stream: true
- })
- });
-
- const reader = response.body.getReader();
- const decoder = new TextDecoder();
- let buffer = '';
- let fullContent = '';
-
- while (true) {
- const { done, value } = await reader.read();
- if (done) break;
-
- buffer += decoder.decode(value, { stream: true });
- const lines = buffer.split('\n');
- buffer = lines.pop(); // Keep incomplete line in buffer
-
- for (const line of lines) {
- if (line.startsWith('data: ')) {
- try {
- const event = JSON.parse(line.slice(6));
- if (event.path && event.delta) {
- // Extract content index from path like "chat.output[0].content"
- const match = event.path.match(/output\[(\d+)\]\.content/);
- if (match) {
- process.stdout.write(event.delta);
- fullContent += event.delta;
- }
- }
- } catch (e) {
- console.error('Failed to parse event:', e);
- }
- }
- }
- }
-
- console.log('\n\nFull response:', fullContent);
-}
-```
-
-## Context management
-
-Maintain conversation context across multiple chat interactions using sessions or previous chat references.
-
-### Using sessions
-
-Sessions allow multiple chats to share the same context:
-
-```javascript
-// Create a session
-const sessionResponse = await fetch('https://api.vapi.ai/session', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- assistantId: 'your-assistant-id'
- })
-});
-
-const session = await sessionResponse.json();
-
-// Use the session for multiple chats
-const chat1 = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- sessionId: session.id,
- input: 'My name is Alice'
- })
-});
-
-const chat1Data = await chat1.json();
-console.log(chat1Data.output[0].content); // "Nice to meet you, Alice!"
-
-const chat2 = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- sessionId: session.id,
- input: 'What is my name?' // Assistant will remember "Alice"
- })
-});
-
-const chat2Data = await chat2.json();
-console.log(chat2Data.output[0].content); // "Your name is Alice."
-```
-
-### Using previous chat
-
-Link chats together without creating a session:
-
-```javascript
-// First chat
-const firstChat = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- assistantId: 'your-assistant-id',
- input: 'I need help with Python programming'
- })
-});
-
-const firstChatData = await firstChat.json();
-
-// Continue conversation
-const secondChat = await fetch('https://api.vapi.ai/chat', {
- method: 'POST',
- headers: {
- 'Authorization': 'Bearer YOUR_API_KEY',
- 'Content-Type': 'application/json'
- },
- body: JSON.stringify({
- previousChatId: firstChatData.id,
- input: 'Show me how to create a list'
- })
-});
-```
-
-## OpenAI compatibility
-
-Vapi provides an OpenAI-compatible endpoint at `/chat/responses` that works with the OpenAI SDK. This allows you to use existing OpenAI client libraries with Vapi.
-
-### Using the OpenAI SDK
-
-```javascript
-import OpenAI from 'openai';
-
-const openai = new OpenAI({
- apiKey: 'YOUR_VAPI_API_KEY',
- baseURL: 'https://api.vapi.ai/chat'
-});
-
-// Create a streaming response
-const stream = await openai.responses.create({
- model: 'gpt-4o',
- input: 'Tell me a joke',
- stream: true,
- assistantId: 'your-assistant-id'
-});
-
-// Handle the stream
-for await (const event of stream) {
- if (event.type === 'response.output_text.delta') {
- process.stdout.write(event.delta);
- }
-}
-```
-
-### Non-streaming with OpenAI SDK
-
-```javascript
-const response = await openai.responses.create({
- model: 'gpt-4o',
- input: 'What is the capital of France?',
- stream: false,
- assistantId: 'your-assistant-id'
-});
-
-console.log(response.output[0].content[0].text);
-```
-
-### Maintaining context with OpenAI SDK
-
-```javascript
-// First request
-const response1 = await openai.responses.create({
- model: 'gpt-4o',
- input: 'My name is Sarah',
- stream: false,
- assistantId: 'your-assistant-id'
-});
-
-// Continue conversation using previous_response_id
-const response2 = await openai.responses.create({
- model: 'gpt-4o',
- input: 'What is my name?',
- previous_response_id: response1.id,
- stream: false,
- assistantId: 'your-assistant-id'
-});
-
-console.log(response2.output[0].content[0].text); // "Your name is Sarah"
-```
-
-### OpenAI compatibility notes
-
-- Use your Vapi API key as the OpenAI API key
-- Set the base URL to `https://api.vapi.ai/chat`
-- The `assistantId` parameter is required to specify which Vapi assistant to use
-- The `model` parameter is included for compatibility but the actual model is determined by your assistant configuration
-- Use `previous_response_id` instead of `previousChatId` for context continuation
-
-## Message types
-
-The Chat API supports various message types for both input and output:
-
-### User message
-```javascript
-{
- "role": "user",
- "content": "Hello, how are you?",
- "name": "Alice" // Optional
-}
-```
-
-### Assistant message
-```javascript
-{
- "role": "assistant",
- "content": "I'm doing well, thank you!",
- "tool_calls": [...], // Optional tool calls
- "refusal": null // Optional refusal message
-}
-```
-
-### System message
-```javascript
-{
- "role": "system",
- "content": "You are a helpful assistant specializing in technical support."
-}
-```
-
-### Tool message
-```javascript
-{
- "role": "tool",
- "content": "Weather data retrieved successfully",
- "tool_call_id": "call_123456"
-}
-```
-
-### Developer message
-```javascript
-{
- "role": "developer",
- "content": "Always be concise in your responses."
-}
-```
-
-## Best practices
-
-
-**Optimize for performance**
-- Reuse `sessionId` or `previousChatId` to maintain context efficiently
-- Use streaming for long responses to improve perceived performance
-- Set appropriate `temperature` values (0.0-1.0) based on your use case
-
-
-
-**Security considerations**
-- Never expose your API key in client-side code
-- Implement proper authentication in your backend
-- Validate and sanitize user inputs before sending to the API
-
-
-## Next steps
-
-- Explore the [API reference](/api-reference/chats/chat-controller-create-chat) for all available parameters
-- Learn about [assistants](/docs/assistants) to customize chat behavior
-- Implement [tools](/docs/tools) for advanced functionality
-- Set up [webhooks](/docs/webhooks) for real-time events
diff --git a/fern/chat/non-streaming.mdx b/fern/chat/non-streaming.mdx
new file mode 100644
index 000000000..492e3602f
--- /dev/null
+++ b/fern/chat/non-streaming.mdx
@@ -0,0 +1,326 @@
+---
+title: Non-streaming chat
+subtitle: Build reliable chat integrations with complete response patterns for batch processing and simple UIs
+slug: chat/non-streaming
+---
+
+## Overview
+
+Build a chat integration that receives the complete response once processing finishes. This pattern suits batch processing, simple UIs, and any integration where you need the full response before proceeding and real-time display isn't essential.
+
+**What You'll Build:**
+* Simple request-response chat patterns with immediate complete responses
+* Session-based conversations that maintain context across multiple chats
+* Basic integration with predictable response timing
+
+## Prerequisites
+
+* Completed [Chat quickstart](/chat/quickstart) tutorial
+* Understanding of basic HTTP requests and JSON handling
+* Familiarity with JavaScript/TypeScript promises or async/await
+
+## Scenario
+
+We'll build a help desk system for "TechFlow" that processes support messages through text chat and maintains conversation history using sessions.
+
+---
+
+## 1. Basic Non-Streaming Implementation
+
+
+
+ Start with a basic non-streaming chat implementation:
+
+ ```bash title="Basic Non-Streaming Request"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "input": "I need help resetting my password"
+ }'
+ ```
+
+
+ Non-streaming responses come back as complete JSON objects:
+
+ ```json title="Complete Chat Response"
+ {
+ "id": "chat_123456",
+ "orgId": "org_789012",
+ "assistantId": "assistant_345678",
+ "name": "Password Reset Help",
+ "sessionId": "session_901234",
+ "messages": [
+ {
+ "role": "user",
+ "content": "I need help resetting my password"
+ }
+ ],
+ "output": [
+ {
+ "role": "assistant",
+ "content": "I can help you reset your password. First, let me verify your account information..."
+ }
+ ],
+ "createdAt": "2024-01-15T09:30:00Z",
+ "updatedAt": "2024-01-15T09:30:01Z"
+ }
+ ```
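+
+    When consuming this shape in code, the reply text lives at `output[0].content`. A small helper keeps that lookup in one place (a sketch; the interface is illustrative and mirrors the sample response above):
+
+    ```typescript title="extract-reply.ts"
+    // Sketch: pull the assistant's reply text out of a parsed chat response.
+    // The interface below is illustrative and mirrors the sample response.
+    interface ChatResponse {
+      id: string;
+      output: { role: string; content: string }[];
+    }
+
+    function extractReply(chat: ChatResponse): string {
+      const assistant = chat.output.find(msg => msg.role === 'assistant');
+      return assistant ? assistant.content : '';
+    }
+    ```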
+
+
+ Create a reusable function for non-streaming chat:
+
+ ```typescript title="non-streaming-chat.ts"
+ async function sendChatMessage(
+ message: string,
+ previousChatId?: string
+ ): Promise<{ chatId: string; response: string }> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: 'your-assistant-id',
+ input: message,
+ ...(previousChatId && { previousChatId })
+ })
+ });
+
+ const chat = await response.json();
+ return {
+ chatId: chat.id,
+ response: chat.output[0].content
+ };
+ }
+ ```
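+
+    The conditional spread above only adds `previousChatId` to the body when one exists. Isolated as a pure helper (a sketch; the function name is illustrative), the payload construction looks like this:
+
+    ```typescript title="build-chat-body.ts"
+    // Sketch: build the request body, including previousChatId only when set.
+    function buildChatBody(
+      assistantId: string,
+      input: string,
+      previousChatId?: string
+    ): Record<string, string> {
+      return {
+        assistantId,
+        input,
+        // Spreading `false`/`undefined` is a no-op, so the key is simply omitted
+        ...(previousChatId && { previousChatId })
+      };
+    }
+    ```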
+
+
+
+---
+
+## 2. Context Management with Sessions
+
+
+
+ Sessions allow multiple chats to share the same conversation context:
+
+ ```bash title="Create Session"
+ curl -X POST https://api.vapi.ai/session \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id"
+ }'
+ ```
+
+
+ Once you have a session ID, use it for related conversations:
+
+ ```bash title="First Message with Session"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "sessionId": "session_abc123",
+ "input": "My account is locked and I can't access the dashboard"
+ }'
+ ```
+
+ ```bash title="Follow-up in Same Session"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "sessionId": "session_abc123",
+ "input": "I tried the suggestions but still can't get in"
+ }'
+ ```
+
+
+ Build a session-aware chat manager:
+
+ ```typescript title="session-manager.ts"
+    async function createChatSession(assistantId: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/session', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({ assistantId })
+ });
+
+ const session = await response.json();
+ return session.id;
+ }
+
+ async function sendSessionMessage(
+ sessionId: string,
+ message: string
+    ): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ sessionId: sessionId,
+ input: message
+ })
+ });
+
+ const chat = await response.json();
+ return chat.output[0].content;
+ }
+
+ // Usage example
+ const sessionId = await createChatSession('your-assistant-id');
+
+ const response1 = await sendSessionMessage(sessionId, "I need help with billing");
+ console.log('Response 1:', response1);
+
+ const response2 = await sendSessionMessage(sessionId, "Can you explain the charges?");
+ console.log('Response 2:', response2); // Will remember the billing context
+ ```
+
+
+
+---
+
+## 3. Using previousChatId for Context
+
+
+
+ Alternative to sessions - link chats directly:
+
+ ```typescript title="previous-chat-context.ts"
+ async function createConversation() {
+ let lastChatId: string | undefined;
+
+      async function sendMessage(input: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: 'your-assistant-id',
+ input: input,
+ ...(lastChatId && { previousChatId: lastChatId })
+ })
+ });
+
+ const chat = await response.json();
+ lastChatId = chat.id;
+ return chat.output[0].content;
+ }
+
+ return { sendMessage };
+ }
+
+ // Usage
+ const conversation = await createConversation();
+
+ const response1 = await conversation.sendMessage("Hello, I'm Alice");
+ console.log(response1);
+
+ const response2 = await conversation.sendMessage("What's my name?");
+ console.log(response2); // Should remember "Alice"
+ ```
+
+
+
+---
+
+## 4. Custom Assistant Configuration
+
+
+
+ Instead of pre-created assistants, define configuration per request:
+
+ ```bash title="Custom Assistant Request"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "input": "I need help with enterprise features",
+ "assistant": {
+ "model": {
+ "provider": "openai",
+ "model": "gpt-4o",
+ "temperature": 0.7,
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are a helpful technical support agent specializing in enterprise features."
+ }
+ ]
+ }
+ }
+ }'
+ ```
+
+
+ Build different chat handlers for different types of requests:
+
+ ```typescript title="specialized-handlers.ts"
+ async function createSpecializedChat(systemPrompt: string) {
+      return async function(userInput: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ input: userInput,
+ assistant: {
+ model: {
+ provider: 'openai',
+ model: 'gpt-4o',
+ temperature: 0.3,
+ messages: [{ role: 'system', content: systemPrompt }]
+ }
+ }
+ })
+ });
+
+ const chat = await response.json();
+ return chat.output[0].content;
+ };
+ }
+
+ const technicalSupport = await createSpecializedChat(
+ "You are a technical support specialist. Ask clarifying questions and provide step-by-step troubleshooting."
+ );
+
+ const billingSupport = await createSpecializedChat(
+ "You are a billing support specialist. Be precise about billing terms and always verify account information."
+ );
+
+ // Usage
+ const techResponse = await technicalSupport("My API requests are returning 500 errors");
+ const billingResponse = await billingSupport("I was charged twice this month");
+ ```
+
+
+
+---
+
+## Next Steps
+
+Enhance your non-streaming chat system further:
+
+* **[Add streaming capabilities](/chat/streaming)** - Upgrade to real-time responses for better UX
+* **[OpenAI compatibility](/chat/openai-compatibility)** - Use familiar OpenAI SDK patterns
+* **[Integrate tools](/tools)** - Enable your assistant to call external APIs and databases
+* **[Add voice capabilities](/calls/outbound-calling)** - Extend your text chat to voice interactions
+
+
+Need help? Chat with the team on our [Discord](https://discord.com/invite/pUFNcf2WmH) or mention us on [X/Twitter](https://x.com/Vapi_AI).
+
diff --git a/fern/chat/openai-compatibility.mdx b/fern/chat/openai-compatibility.mdx
new file mode 100644
index 000000000..50de50ca8
--- /dev/null
+++ b/fern/chat/openai-compatibility.mdx
@@ -0,0 +1,539 @@
+---
+title: OpenAI compatibility
+subtitle: Migrate existing OpenAI integrations to Vapi with minimal code changes
+slug: chat/openai-compatibility
+---
+
+## Overview
+
+Migrate your existing OpenAI chat applications to Vapi with only a few small changes. Perfect for teams already using OpenAI SDKs, third-party tools expecting OpenAI API format, or developers who want to leverage existing OpenAI workflows.
+
+**What You'll Build:**
+* Drop-in replacement for OpenAI chat endpoints using Vapi assistants
+* Migration path from OpenAI to Vapi with existing codebases
+* Integration with popular frameworks like LangChain and Vercel AI SDK
+* Production-ready server implementations with both streaming and non-streaming
+
+## Prerequisites
+
+* Completed [Chat quickstart](/chat/quickstart) tutorial
+* Existing OpenAI integration or familiarity with OpenAI SDK
+
+## Scenario
+
+We'll migrate "TechFlow's" existing OpenAI-powered customer support chat to use Vapi assistants, maintaining all existing functionality while gaining access to Vapi's advanced features like custom voices and tools.
+
+---
+
+## 1. Quick Migration Test
+
+
+
+ If you don't already have it, install the OpenAI SDK:
+
+
+ ```bash title="npm"
+ npm install openai
+ ```
+
+ ```bash title="yarn"
+ yarn add openai
+ ```
+
+ ```bash title="pnpm"
+ pnpm add openai
+ ```
+
+ ```bash title="bun"
+ bun add openai
+ ```
+
+
+
+    Test the compatibility endpoint directly before changing any code:
+
+ ```bash title="Test OpenAI Compatibility"
+ curl -X POST https://api.vapi.ai/chat/responses \
+ -H "Authorization: Bearer YOUR_VAPI_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "gpt-4o",
+ "input": "Hello, I need help with my account",
+ "stream": false,
+ "assistantId": "your-assistant-id"
+ }'
+ ```
+
+
+ The response follows OpenAI's structure with Vapi enhancements:
+
+ ```json title="OpenAI-Compatible Response"
+ {
+ "id": "response_abc123",
+ "object": "chat.response",
+ "created": 1642678392,
+ "model": "gpt-4o",
+ "output": [
+ {
+ "role": "assistant",
+ "content": [
+ {
+ "type": "text",
+ "text": "Hello! I'd be happy to help with your account. What specific issue are you experiencing?"
+ }
+ ]
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 12,
+ "completion_tokens": 23,
+ "total_tokens": 35
+ }
+ }
+ ```
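+
+    Note that, unlike the plain `/chat` endpoint, `content` here is an array of typed parts rather than a string. A small helper flattens it back to plain text (a sketch; the interface is illustrative and mirrors the sample above):
+
+    ```typescript title="extract-output-text.ts"
+    // Sketch: flatten the nested content parts of an OpenAI-compatible
+    // response into plain text. The interface mirrors the sample response.
+    interface CompatResponse {
+      output: { role: string; content: { type: string; text: string }[] }[];
+    }
+
+    function extractOutputText(response: CompatResponse): string {
+      return response.output
+        .flatMap(item => item.content)
+        .filter(part => part.type === 'text')
+        .map(part => part.text)
+        .join('');
+    }
+    ```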
+
+
+
+---
+
+## 2. Migrate Existing OpenAI Code
+
+
+
+ Change only the base URL and API key in your existing code:
+
+ ```typescript title="Before (OpenAI)"
+ import OpenAI from 'openai';
+
+ const openai = new OpenAI({
+ apiKey: 'your-openai-api-key'
+ });
+
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4o',
+ messages: [{ role: 'user', content: 'Hello!' }],
+ stream: true
+ });
+ ```
+
+ ```typescript title="After (Vapi)"
+ import OpenAI from 'openai';
+
+ const openai = new OpenAI({
+ apiKey: 'YOUR_VAPI_API_KEY',
+ baseURL: 'https://api.vapi.ai/chat',
+ });
+
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4o',
+ messages: [{ role: 'user', content: 'Hello!' }],
+ stream: true
+ });
+ ```
+
+
+ Change `chat.completions.create` to `responses.create` and add `assistantId`:
+
+ ```typescript title="Before (OpenAI Chat Completions)"
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4o',
+ messages: [
+ { role: 'user', content: 'What is the capital of France?' }
+ ],
+ stream: false
+ });
+
+ console.log(response.choices[0].message.content);
+ ```
+
+ ```typescript title="After (Vapi Compatibility)"
+ const response = await openai.responses.create({
+ model: 'gpt-4o',
+ input: 'What is the capital of France?',
+ stream: false,
+ assistantId: 'your-assistant-id'
+ });
+
+ console.log(response.output[0].content[0].text);
+ ```
+
+
+ Run your updated code to verify the migration works:
+
+ ```typescript title="migration-test.ts"
+ import OpenAI from 'openai';
+
+ const openai = new OpenAI({
+ apiKey: 'YOUR_VAPI_API_KEY',
+ baseURL: 'https://api.vapi.ai/chat'
+ });
+
+ async function testMigration() {
+ try {
+ const response = await openai.responses.create({
+ model: 'gpt-4o',
+ input: 'Hello, can you help me troubleshoot an API issue?',
+ stream: false,
+ assistantId: 'your-assistant-id'
+ });
+
+ console.log('Migration successful!');
+ console.log('Response:', response.output[0].content[0].text);
+ } catch (error) {
+ console.error('Migration test failed:', error);
+ }
+ }
+
+ testMigration();
+ ```
+
+
+
+---
+
+## 3. Implement Streaming with OpenAI SDK
+
+
+
+ Update your streaming code to use Vapi's streaming format:
+
+ ```bash title="Streaming via curl"
+ curl -X POST https://api.vapi.ai/chat/responses \
+ -H "Authorization: Bearer YOUR_VAPI_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "gpt-4o",
+ "input": "Explain how machine learning works in detail",
+ "stream": true,
+ "assistantId": "your-assistant-id"
+ }'
+ ```
+
+
+ Adapt your existing streaming implementation:
+
+ ```typescript title="streaming-migration.ts"
+    async function streamWithVapi(userInput: string): Promise<string> {
+      const stream = await openai.responses.create({
+        model: 'gpt-4o',
+        input: userInput,
+        stream: true,
+        assistantId: 'your-assistant-id'
+      });
+
+      let fullResponse = '';
+
+      // With stream: true, the SDK returns an async iterable of typed events
+      for await (const event of stream) {
+        if (event.type === 'response.output_text.delta') {
+          process.stdout.write(event.delta);
+          fullResponse += event.delta;
+        }
+      }
+
+      console.log('\n\nComplete response received.');
+      return fullResponse;
+    }
+
+ streamWithVapi('Write a detailed explanation of REST APIs');
+ ```
+
+
+ Implement context management using Vapi's approach:
+
+ ```typescript title="context-management.ts"
+ function createContextualChatSession(apiKey: string, assistantId: string) {
+ const openai = new OpenAI({
+ apiKey: apiKey,
+ baseURL: 'https://api.vapi.ai/chat'
+ });
+ let lastChatId: string | null = null;
+
+ async function sendMessage(input: string, stream: boolean = false) {
+ const requestParams = {
+ model: 'gpt-4o',
+ input: input,
+ stream: stream,
+ assistantId: assistantId,
+        ...(lastChatId && { previous_response_id: lastChatId })
+ };
+
+ const response = await openai.responses.create(requestParams);
+
+ if (!stream) {
+ lastChatId = response.id;
+ return response.output[0].content[0].text;
+ }
+
+ return response;
+ }
+
+ return { sendMessage };
+ }
+
+ // Usage example
+ const session = createContextualChatSession('YOUR_VAPI_API_KEY', 'your-assistant-id');
+
+ const response1 = await session.sendMessage("My name is Sarah and I'm having login issues");
+ console.log('Response 1:', response1);
+
+ const response2 = await session.sendMessage("What was my name again?");
+ console.log('Response 2:', response2); // Should remember "Sarah"
+ ```
+
+
+
+---
+
+## 4. Framework Integrations
+
+
+
+ Use Vapi with LangChain's OpenAI integration:
+
+ ```typescript title="langchain-integration.ts"
+    import { ChatOpenAI } from "@langchain/openai";
+
+    // Point LangChain's OpenAI chat model at Vapi's compatibility endpoint
+    const chat = new ChatOpenAI({
+      openAIApiKey: "YOUR_VAPI_API_KEY",
+      configuration: {
+        baseURL: "https://api.vapi.ai/chat"
+      },
+      modelName: "gpt-4o",
+      streaming: false
+    });
+
+    // `assistantId` is not a standard OpenAI parameter, so this helper calls
+    // the compatibility endpoint directly:
+    async function chatWithVapi(message: string, assistantId: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat/responses', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer YOUR_VAPI_API_KEY`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ model: 'gpt-4o',
+ input: message,
+ assistantId: assistantId,
+ stream: false
+ })
+ });
+
+ const data = await response.json();
+ return data.output[0].content[0].text;
+ }
+
+ // Usage
+ const response = await chatWithVapi(
+ "What are the best practices for API design?",
+ "your-assistant-id"
+ );
+ console.log(response);
+ ```
+
+
+ Use Vapi with Vercel's AI SDK:
+
+ ```typescript title="vercel-ai-integration.ts"
+    import { createOpenAI } from '@ai-sdk/openai';
+
+    // Point the Vercel AI SDK's OpenAI provider at Vapi's compatibility endpoint
+    const vapiOpenAI = createOpenAI({
+      apiKey: 'YOUR_VAPI_API_KEY',
+      baseURL: 'https://api.vapi.ai/chat'
+    });
+
+    // `assistantId` is not a standard OpenAI parameter, so these helpers call
+    // the endpoint directly. Non-streaming text generation:
+    async function generateWithVapi(prompt: string, assistantId: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat/responses', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer YOUR_VAPI_API_KEY`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ model: 'gpt-4o',
+ input: prompt,
+ assistantId: assistantId,
+ stream: false
+ })
+ });
+
+ const data = await response.json();
+ return data.output[0].content[0].text;
+ }
+
+ // Streaming implementation
+    async function streamWithVapi(prompt: string, assistantId: string): Promise<void> {
+ const response = await fetch('https://api.vapi.ai/chat/responses', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer YOUR_VAPI_API_KEY`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ model: 'gpt-4o',
+ input: prompt,
+ assistantId: assistantId,
+ stream: true
+ })
+ });
+
+ const reader = response.body?.getReader();
+ if (!reader) return;
+
+ const decoder = new TextDecoder();
+
+ while (true) {
+ const { done, value } = await reader.read();
+ if (done) break;
+
+ const chunk = decoder.decode(value);
+
+ // Parse and process SSE events
+ const lines = chunk.split('\n').filter(line => line.trim());
+ for (const line of lines) {
+ if (line.startsWith('data: ')) {
+ try {
+ const event = JSON.parse(line.slice(6));
+ if (event.path && event.delta) {
+ process.stdout.write(event.delta);
+ }
+ } catch (e) {
+ console.error('Invalid JSON line:', line);
+ continue;
+ }
+ }
+ }
+ }
+ }
+
+ // Usage examples
+ const text = await generateWithVapi(
+ "Explain the benefits of microservices architecture",
+ "your-assistant-id"
+ );
+ console.log(text);
+ ```
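+
+    One caveat with the raw-fetch loop above: SSE events can split across chunk boundaries, so production code should buffer partial lines between reads. A minimal sketch of that buffering (the function name is illustrative):
+
+    ```typescript title="sse-buffer.ts"
+    // Sketch: parse complete `data:` lines out of an accumulated SSE buffer,
+    // returning the trailing partial line so the caller can carry it over.
+    function drainSseBuffer(buffer: string): { events: unknown[]; rest: string } {
+      const lines = buffer.split('\n');
+      const rest = lines.pop() ?? ''; // last element may be an incomplete line
+      const events: unknown[] = [];
+      for (const line of lines) {
+        if (!line.startsWith('data: ')) continue;
+        try {
+          events.push(JSON.parse(line.slice(6)));
+        } catch {
+          // Ignore malformed lines rather than aborting the stream
+        }
+      }
+      return { events, rest };
+    }
+    ```
+
+    Inside the read loop, call `drainSseBuffer(buf + decoder.decode(value, { stream: true }))`, process `.events`, and keep `.rest` as the new `buf`.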
+
+
+ Build a simple server that exposes Vapi through OpenAI-compatible endpoints:
+
+ ```typescript title="simple-server.ts"
+ import express from 'express';
+
+ const app = express();
+ app.use(express.json());
+
+ app.post('/v1/chat/completions', async (req, res) => {
+ const { messages, model, stream = false, assistant_id } = req.body;
+
+ if (!assistant_id) {
+ return res.status(400).json({
+ error: 'assistant_id is required for Vapi compatibility'
+ });
+ }
+
+ const lastMessage = messages[messages.length - 1];
+ const input = lastMessage.content;
+
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer ${process.env.VAPI_API_KEY}`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: assistant_id,
+ input: input,
+ stream: stream
+ })
+ });
+
+ if (stream) {
+ res.setHeader('Content-Type', 'text/event-stream');
+ res.setHeader('Cache-Control', 'no-cache');
+ res.setHeader('Connection', 'keep-alive');
+
+ const reader = response.body?.getReader();
+ if (!reader) {
+ return res.status(500).json({ error: 'Failed to get stream reader' });
+ }
+
+ const decoder = new TextDecoder();
+
+ while (true) {
+ const { done, value } = await reader.read();
+ if (done) {
+ res.write('data: [DONE]\n\n');
+ res.end();
+ break;
+ }
+
+ const chunk = decoder.decode(value);
+ res.write(chunk);
+ }
+ } else {
+ const chat = await response.json();
+ const openaiResponse = {
+ id: chat.id,
+ object: 'chat.completion',
+ created: Math.floor(Date.now() / 1000),
+ model: model || 'gpt-4o',
+ choices: [{
+ index: 0,
+ message: {
+ role: 'assistant',
+ content: chat.output[0].content
+ },
+ finish_reason: 'stop'
+ }]
+ };
+ res.json(openaiResponse);
+ }
+ });
+
+ app.listen(3000, () => {
+ console.log('Vapi-OpenAI compatibility server running on port 3000');
+ });
+ ```
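
The request mapping the server performs is easy to get subtly wrong, so it can help to factor it into a pure function you can unit test. Here's a minimal sketch (the `toVapiChatBody` helper and its types are illustrative, not part of either API):

```typescript
interface OpenAIMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface OpenAIChatRequest {
  messages: OpenAIMessage[];
  assistant_id: string;
  stream?: boolean;
}

// Map an OpenAI-style request onto a Vapi /chat body: only the last
// message becomes `input`; earlier turns are carried by Vapi itself
// via previousChatId or sessions.
function toVapiChatBody(req: OpenAIChatRequest) {
  const lastMessage = req.messages[req.messages.length - 1];
  return {
    assistantId: req.assistant_id,
    input: lastMessage.content,
    stream: req.stream ?? false
  };
}
```

With the mapping isolated, the route handler reduces to validation plus a `fetch` call, and the translation logic can be tested without a running server.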
+
+
+
+---
+
+## Next Steps
+
+Enhance your migrated system:
+
+* **[Explore Vapi-specific features](/chat/quickstart)** - Leverage advanced assistant capabilities
+* **[Add voice capabilities](/calls/outbound-calling)** - Extend beyond text to voice interactions
+* **[Integrate tools](/tools/custom-tools)** - Give your assistant access to external APIs
+* **[Optimize for streaming](/chat/streaming)** - Improve real-time user experience
+
+
+Need help? Chat with the team on our [Discord](https://discord.com/invite/pUFNcf2WmH) or mention us on [X/Twitter](https://x.com/Vapi_AI).
+
diff --git a/fern/chat/quickstart.mdx b/fern/chat/quickstart.mdx
new file mode 100644
index 000000000..8eaaa016b
--- /dev/null
+++ b/fern/chat/quickstart.mdx
@@ -0,0 +1,280 @@
+---
+title: Chat quickstart
+subtitle: Build your first text-based conversation with a Vapi assistant in 5 minutes
+slug: chat/quickstart
+---
+
+## Overview
+
+Build a customer service chat bot that can handle text-based conversations through your application. Perfect for adding AI chat to websites, mobile apps, or messaging platforms.
+
+**What You'll Build:**
+* A working chat integration that responds to user messages
+* Context-aware conversations that remember previous messages
+* Both one-shot and multi-turn conversation patterns
+
+**Agent Capabilities:**
+* Instant text responses without voice processing
+* Maintains conversation context across multiple messages
+* Compatible with existing OpenAI workflows
+
+## Prerequisites
+
+* A [Vapi account](https://dashboard.vapi.ai/)
+* An existing assistant or willingness to create one
+* Basic knowledge of making API requests
+
+## Scenario
+
+We'll create a customer support chat for "TechFlow", a software company that wants to handle common questions via text chat before escalating to human agents.
+
+---
+
+## 1. Get Your API Credentials
+
+
+
+ Go to [dashboard.vapi.ai](https://dashboard.vapi.ai) and log in to your account.
+
+
+ Click on your profile in the top right, then select `Vapi API Keys`.
+
+
+ Copy your Private API Key. You'll need this for all chat requests.
+
+
+ Keep this key secure - never expose it in client-side code.
+
+
+
+
+---
+
+## 2. Create or Select an Assistant
+
+
+
+ In your Vapi dashboard, click `Assistants` in the left sidebar.
+
+
+ - Click `Create Assistant` if you need a new one
+ - Select `Blank Template` as your starting point
+ - Name it `TechFlow Support`
+ - Set the first message to: `Hello! I'm here to help with TechFlow questions. What can I assist you with today?`
+
+
+ Update the system prompt to:
+
+ ```txt title="System Prompt" maxLines=8
+ You are a helpful customer support agent for TechFlow, a software company.
+
+ Your role:
+ - Answer common questions about our products
+ - Help troubleshoot basic issues
+ - Escalate complex problems to human agents
+
+ Keep responses concise and helpful. Always maintain a friendly, professional tone.
+ ```
+
+
+ After publishing, copy the Assistant ID from the URL or assistant details. You'll need this for API calls.
+
+
+
+---
+
+## 3. Send Your First Chat Message
+
+
+
+ Replace `YOUR_API_KEY` and `your-assistant-id` with your actual values:
+
+ ```bash title="First Chat Request"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "input": "Hi, I need help with my TechFlow account"
+ }'
+ ```
+
+
+ You should receive a JSON response like:
+
+ ```json title="Chat Response"
+ {
+ "id": "chat_abc123",
+ "assistantId": "your-assistant-id",
+ "messages": [
+ {
+ "role": "user",
+ "content": "Hi, I need help with my TechFlow account"
+ }
+ ],
+ "output": [
+ {
+ "role": "assistant",
+ "content": "I'd be happy to help with your TechFlow account! What specific issue are you experiencing?"
+ }
+ ],
+ "createdAt": "2024-01-15T09:30:00Z",
+ "updatedAt": "2024-01-15T09:30:00Z"
+ }
+ ```
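
If your application only needs the reply text, a small helper can pull it out of the response shape shown above. A sketch (the `getAssistantReply` name is ours, not part of the API):

```typescript
interface ChatOutputMessage {
  role: string;
  content: string;
}

// Extract the assistant's reply from a /chat response body.
// Returns undefined if the output array is empty.
function getAssistantReply(chat: { output: ChatOutputMessage[] }): string | undefined {
  return chat.output[0]?.content;
}
```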
+
+
+
+---
+
+## 4. Build a Multi-Turn Conversation
+
+
+
+    Pass the `id` from the first response as `previousChatId` to maintain context:
+
+ ```bash title="Follow-up Message"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "previousChatId": "chat_abc123",
+        "input": "I forgot my password and cannot log in"
+ }'
+ ```
+
+
+ Send another message to verify the assistant remembers the conversation:
+
+ ```bash title="Context Test"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "previousChatId": "chat_abc123",
+ "input": "What was my original question?"
+ }'
+ ```
+
+
+
+---
+
+## 5. Integrate with TypeScript
+
+
+
+ Here's a TypeScript function you can use in your application:
+
+ ```typescript title="chat.ts"
+ interface ChatMessage {
+ role: 'user' | 'assistant';
+ content: string;
+ }
+
+ interface ChatApiResponse {
+ id: string;
+ assistantId: string;
+ messages: ChatMessage[];
+ output: ChatMessage[];
+ createdAt: string;
+ updatedAt: string;
+ orgId?: string;
+ sessionId?: string;
+ name?: string;
+ }
+
+ interface ChatResponse {
+ chatId: string;
+ response: string;
+ fullData: ChatApiResponse;
+ }
+
+ async function sendChatMessage(
+ message: string,
+ previousChatId?: string
+  ): Promise<ChatResponse> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: 'your-assistant-id',
+ input: message,
+ ...(previousChatId && { previousChatId })
+ })
+ });
+
+ if (!response.ok) {
+ throw new Error(`HTTP error! status: ${response.status}`);
+ }
+
+ const chat: ChatApiResponse = await response.json();
+ return {
+ chatId: chat.id,
+ response: chat.output[0].content,
+ fullData: chat
+ };
+ }
+
+ // Usage example
+ const firstMessage = await sendChatMessage("Hello, I need help");
+ console.log(firstMessage.response);
+
+ const followUp = await sendChatMessage("Tell me more", firstMessage.chatId);
+ console.log(followUp.response);
+ ```
+
+
+ Run your TypeScript code to verify the chat integration works correctly.
+
+
+
+---
+
+## 6. Test Your Chat Bot
+
+
+
+ Try these test cases to ensure your chat bot works correctly:
+
+ ```bash title="Test Case 1: General Question"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "input": "What are your business hours?"
+ }'
+ ```
+
+ ```bash title="Test Case 2: Technical Issue"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "input": "My app keeps crashing when I try to export data"
+ }'
+ ```
+
+
+ Send follow-up messages using `previousChatId` to ensure context is maintained.
+
+
+
+## Next Steps
+
+Take your chat bot to the next level:
+
+* **[Streaming responses](/chat/streaming)** - Add real-time typing indicators and progressive responses
+* **[Non-streaming responses](/chat/non-streaming)** - Learn about sessions and complex conversation flows
+* **[OpenAI compatibility](/chat/openai-compatibility)** - Integrate with existing OpenAI workflows
+
+
+Need help? Chat with the team on our [Discord](https://discord.com/invite/pUFNcf2WmH) or mention us on [X/Twitter](https://x.com/Vapi_AI).
+
diff --git a/fern/chat/streaming.mdx b/fern/chat/streaming.mdx
new file mode 100644
index 000000000..a92505426
--- /dev/null
+++ b/fern/chat/streaming.mdx
@@ -0,0 +1,220 @@
+---
+title: Streaming chat
+subtitle: Build real-time chat experiences with token-by-token responses like ChatGPT
+slug: chat/streaming
+---
+
+## Overview
+
+Build a real-time chat interface that displays responses as they're generated, creating an engaging user experience similar to ChatGPT. Perfect for interactive applications where users expect immediate visual feedback.
+
+**What You'll Build:**
+* Real-time streaming chat interface with progressive text display
+* Context management across multiple messages
+* A basic TypeScript implementation you can adapt for production use
+
+## Prerequisites
+
+* Completed [Chat quickstart](/chat/quickstart) tutorial
+* Basic knowledge of TypeScript/JavaScript and async/await
+
+## Scenario
+
+We'll enhance the TechFlow support chat from the quickstart to provide real-time streaming responses. Users will see text appear progressively as the AI generates it.
+
+---
+
+## 1. Enable Streaming in Your Requests
+
+
+
+ Modify your chat request to enable streaming by adding `"stream": true`:
+
+ ```bash title="Streaming Chat Request"
+ curl -X POST https://api.vapi.ai/chat \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "assistantId": "your-assistant-id",
+ "input": "Explain how to set up API authentication in detail",
+ "stream": true
+ }'
+ ```
+
+
+ Instead of a single JSON response, you'll receive Server-Sent Events (SSE):
+
+ ```typescript title="SSE Event Format"
+ // Example SSE events received:
+ data: {"id":"stream_123","path":"chat.output[0].content","delta":"Hello"}
+ data: {"id":"stream_123","path":"chat.output[0].content","delta":" there!"}
+ data: {"id":"stream_123","path":"chat.output[0].content","delta":" How can"}
+ data: {"id":"stream_123","path":"chat.output[0].content","delta":" I help?"}
+
+ // TypeScript interface for SSE events:
+ interface SSEEvent {
+ id: string;
+ path: string;
+ delta: string;
+ }
+ ```
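
Given that event shape, the delta-handling logic can be kept as a pure function, separate from the network code, which makes it straightforward to unit test. A sketch (the function names are illustrative):

```typescript
interface SSEEvent {
  id: string;
  path: string;
  delta: string;
}

// Parse the `data: ` lines of a raw SSE chunk, skipping anything
// that is not valid JSON (e.g. comments or partial lines).
function parseSSEChunk(chunk: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    try {
      events.push(JSON.parse(line.slice(6)));
    } catch {
      // ignore malformed lines rather than aborting the stream
    }
  }
  return events;
}

// Concatenate content deltas in arrival order.
function joinDeltas(events: SSEEvent[]): string {
  return events
    .filter(e => e.path && e.delta)
    .map(e => e.delta)
    .join('');
}
```

The streaming loop in the next section only needs to feed decoded chunks through these helpers and append the result to the UI.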
+
+
+
+---
+
+## 2. Basic TypeScript Streaming Implementation
+
+
+
+ Here's a basic streaming implementation:
+
+ ```typescript title="streaming-chat.ts"
+ async function streamChatMessage(
+ message: string,
+ previousChatId?: string
+  ): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: 'your-assistant-id',
+ input: message,
+ stream: true,
+ ...(previousChatId && { previousChatId })
+ })
+ });
+
+ const reader = response.body?.getReader();
+ if (!reader) throw new Error('No reader available');
+
+    const decoder = new TextDecoder();
+    let fullResponse = '';
+    let buffer = '';
+
+    while (true) {
+      const { done, value } = await reader.read();
+      if (done) break;
+
+      // SSE events can be split across chunks, so buffer partial lines
+      buffer += decoder.decode(value, { stream: true });
+      const lines = buffer.split('\n');
+      buffer = lines.pop() ?? '';
+
+      for (const line of lines) {
+        if (line.startsWith('data: ')) {
+          const data = JSON.parse(line.slice(6));
+          if (data.path && data.delta) {
+            fullResponse += data.delta;
+            process.stdout.write(data.delta);
+          }
+        }
+      }
+    }
+
+ return fullResponse;
+ }
+ ```
+
+
+ Try it out:
+
+ ```typescript title="Test Streaming"
+ const response = await streamChatMessage("Explain API rate limiting in detail");
+ console.log('\nComplete response:', response);
+ ```
+
+
+
+---
+
+## 3. Streaming with Context Management
+
+
+
+ Maintain context across multiple streaming messages:
+
+ ```typescript title="context-streaming.ts"
+ async function createStreamingConversation() {
+ let lastChatId: string | undefined;
+
+    async function sendMessage(input: string): Promise<string> {
+ const response = await fetch('https://api.vapi.ai/chat', {
+ method: 'POST',
+ headers: {
+ 'Authorization': 'Bearer YOUR_API_KEY',
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ assistantId: 'your-assistant-id',
+ input: input,
+ stream: true,
+ ...(lastChatId && { previousChatId: lastChatId })
+ })
+ });
+
+ const reader = response.body?.getReader();
+ if (!reader) throw new Error('No reader available');
+
+      const decoder = new TextDecoder();
+      let fullContent = '';
+      let currentChatId: string | undefined;
+      let buffer = '';
+
+      while (true) {
+        const { done, value } = await reader.read();
+        if (done) break;
+
+        // SSE events can be split across chunks, so buffer partial lines
+        buffer += decoder.decode(value, { stream: true });
+        const lines = buffer.split('\n');
+        buffer = lines.pop() ?? '';
+
+        for (const line of lines) {
+          if (line.startsWith('data: ')) {
+            const event = JSON.parse(line.slice(6));
+
+            if (event.id && !currentChatId) {
+              currentChatId = event.id;
+            }
+
+            if (event.path && event.delta) {
+              fullContent += event.delta;
+              process.stdout.write(event.delta);
+            }
+          }
+        }
+      }
+
+ if (currentChatId) {
+ lastChatId = currentChatId;
+ }
+
+ return fullContent;
+ }
+
+ return { sendMessage };
+ }
+ ```
+
+
+ ```typescript title="Test Context"
+ const conversation = await createStreamingConversation();
+
+ await conversation.sendMessage("My name is Alice");
+ console.log('\n---');
+ await conversation.sendMessage("What's my name?"); // Should remember Alice
+ ```
+
+
+
+---
+
+## Next Steps
+
+Enhance your streaming chat further:
+
+* **[OpenAI compatibility](/chat/openai-compatibility)** - Use OpenAI SDK for streaming with familiar syntax
+* **[Non-streaming patterns](/chat/non-streaming)** - Learn about sessions and complex conversation management
+* **[Add tools](/tools)** - Enable your assistant to call external APIs while streaming
+
+
+Need help? Chat with the team on our [Discord](https://discord.com/invite/pUFNcf2WmH) or mention us on [X/Twitter](https://x.com/Vapi_AI).
+
diff --git a/fern/docs.yml b/fern/docs.yml
index 76b3d3bf4..a5a98472b 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -322,9 +322,18 @@ navigation:
- section: Chat
contents:
- - page: Chat
- path: chat/chat.mdx
- icon: fa-light fa-comments
+ - page: Quickstart
+ path: chat/quickstart.mdx
+ icon: fa-light fa-bolt-lightning
+ - page: Streaming
+ path: chat/streaming.mdx
+ icon: fa-light fa-stream
+ - page: Non-streaming
+ path: chat/non-streaming.mdx
+ icon: fa-light fa-message
+ - page: OpenAI compatibility
+ path: chat/openai-compatibility.mdx
+ icon: fa-light fa-puzzle-piece
- section: Webhooks
collapsed: true