diff --git a/api-reference/assistant/create-assistant-message.mdx b/api-reference/assistant/create-assistant-message.mdx
index b2dd38479..83e5784af 100644
--- a/api-reference/assistant/create-assistant-message.mdx
+++ b/api-reference/assistant/create-assistant-message.mdx
@@ -2,6 +2,69 @@
 openapi: POST /assistant/{domain}/message
 ---
 
+The AI Discovery Assistant API provides intelligent, context-aware responses based on your documentation. The API returns answers as a **streaming text response**, so you can display them progressively as they are generated.
+
+## Streaming response
+
+This endpoint returns a **streaming response** using Server-Sent Events (SSE). The response body contains a readable stream that you must process chunk by chunk to receive the complete answer.
+
+### Complete example
+
+Here's a complete example showing how to send a message and handle the streaming response:
+
+```javascript
+const domain = process.env.MINTLIFY_DOMAIN;
+const apiKey = process.env.MINTLIFY_API_KEY;
+const userMessage = process.argv.slice(2).join(' ') || 'How do I get started?';
+
+const response = await fetch(`https://api-dsc.mintlify.com/v1/assistant/${domain}/message`, {
+  method: 'POST',
+  headers: {
+    'Authorization': `Bearer ${apiKey}`,
+    'Content-Type': 'application/json',
+  },
+  body: JSON.stringify({
+    fp: 'user-fingerprint-' + Date.now(),
+    threadId: 'my-thread-id',
+    messages: [{
+      id: 'msg-' + Date.now(),
+      role: 'user',
+      content: userMessage,
+      parts: [{ type: 'text', text: userMessage }]
+    }],
+    retrievalPageSize: 5,
+    filter: null, // { version: "string", language: "string" }
+  }),
+});
+
+// Handle the streaming response
+const reader = response.body.getReader();
+const decoder = new TextDecoder();
+
+while (true) {
+  const { done, value } = await reader.read();
+  if (done) break;
+  process.stdout.write(decoder.decode(value, { stream: true }));
+}
+```
+
+### Key parameters
+
+- **`fp`** - A unique fingerprint that identifies the user session
+- **`threadId`** - Identifier for the conversation thread, used to maintain context
+- **`messages`** - Array of message objects with `id`, `role`, `content`, and `parts`
+- **`retrievalPageSize`** - Number of documentation chunks to retrieve for context (default: 5)
+- **`filter`** - Optional filter that limits the search to specific versions or languages
+
+### Handling the stream
+
+The response is a readable stream that must be processed with the Streams API:
+
+1. Get a reader from `response.body.getReader()`
+2. Create a `TextDecoder` to convert bytes to text
+3. Read chunks in a loop until `done` is `true`
+4. Decode each chunk and process the text
+
 ## Rate limits
 
 The assistant API has the following limits:
@@ -10,8 +73,19 @@ The assistant API has the following limits:
 - 10,000 requests per Mintlify organization per hour
 - 10,000 requests per IP per day
 
-## Suggested usage
+## Frontend integration
+
+For React applications, use the [useChat hook from ai-sdk](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat#usechat) to send requests and handle streaming responses automatically.
 
-For best results, use the [useChat hook from ai-sdk](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat#usechat) to send requests and handle responses.
+You can set `fp`, `threadId`, and `filter` in the `body` field of the options parameter passed to the hook:
 
-You can set `fp`, `threadId`, and `filter` in the `body` field of the options parameter passed to the hook.
\ No newline at end of file
+```javascript
+const { messages, input, handleInputChange, handleSubmit } = useChat({
+  api: `/api/chat`,
+  body: {
+    fp: 'user-fingerprint-123',
+    threadId: 'conversation-thread-1',
+    filter: { version: 'v2', language: 'javascript' }
+  }
+});
+```
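+
+The `api: '/api/chat'` route above is your own backend endpoint, which keeps the API key off the client. A minimal proxy sketch, assuming a Next.js App Router route handler (the route path and environment variable names are illustrative, not part of the API):
+
+```javascript
+// app/api/chat/route.js - hypothetical proxy that forwards useChat
+// requests to the assistant endpoint and streams the reply back.
+export async function POST(request) {
+  const { messages, fp, threadId, filter } = await request.json();
+
+  const upstream = await fetch(
+    `https://api-dsc.mintlify.com/v1/assistant/${process.env.MINTLIFY_DOMAIN}/message`,
+    {
+      method: 'POST',
+      headers: {
+        'Authorization': `Bearer ${process.env.MINTLIFY_API_KEY}`,
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({ messages, fp, threadId, filter }),
+    }
+  );
+
+  // Pass the streamed body through unchanged so the hook can consume it.
+  return new Response(upstream.body, {
+    status: upstream.status,
+    headers: { 'Content-Type': 'text/event-stream' },
+  });
+}
+```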