80 changes: 77 additions & 3 deletions api-reference/assistant/create-assistant-message.mdx
openapi: POST /assistant/{domain}/message
---

The AI Discovery Assistant API provides intelligent, context-aware responses based on your documentation. The API returns responses as a **streaming text response**, allowing you to display answers progressively as they're generated.

## Streaming response

This endpoint returns a **streaming response** using Server-Sent Events (SSE). The response body contains a readable stream that you must process chunk by chunk to receive the complete answer.

### Complete example

Here's a complete example showing how to send a message and handle the streaming response:

```javascript
const domain = process.env.MINTLIFY_DOMAIN;
const apiKey = process.env.MINTLIFY_API_KEY;
const userMessage = process.argv.slice(2).join(' ') || 'How do I get started?';

const response = await fetch(`https://api-dsc.mintlify.com/v1/assistant/${domain}/message`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
fp: 'user-fingerprint-' + Date.now(),
threadId: 'my-thread-id',
messages: [{
id: 'msg-' + Date.now(),
role: 'user',
content: userMessage,
parts: [{ type: 'text', text: userMessage }]
}],
retrievalPageSize: 5,
filter: null, // { version: "string", language: "string" }
}),
});

// Handle the streaming response
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
const { done, value } = await reader.read();
if (done) break;
process.stdout.write(decoder.decode(value, { stream: true }));
}
```

### Key parameters

- **`fp`** - A unique fingerprint to identify the user session
- **`threadId`** - Identifier for the conversation thread to maintain context
- **`messages`** - Array of message objects with `id`, `role`, `content`, and `parts`
- **`retrievalPageSize`** - Number of documentation chunks to retrieve for context (default: 5)
- **`filter`** - Optional filter to limit search to specific versions or languages
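Putting these parameters together, a minimal request body might look like the following (all values are illustrative, not required literals):

```json
{
  "fp": "user-fingerprint-123",
  "threadId": "my-thread-id",
  "messages": [
    {
      "id": "msg-1",
      "role": "user",
      "content": "How do I get started?",
      "parts": [{ "type": "text", "text": "How do I get started?" }]
    }
  ],
  "retrievalPageSize": 5,
  "filter": { "version": "v2", "language": "javascript" }
}
```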

### Handling the stream

The response is a readable stream that must be processed using the Streams API:

1. Get a reader from `response.body.getReader()`
2. Create a `TextDecoder` to convert bytes to text
3. Read chunks in a loop until `done` is true
4. Decode each chunk and process the text
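The read loop from the example above can be wrapped in a small helper that collects the streamed answer into a single string. This is a sketch: `readFullAnswer` is a hypothetical helper, not part of the API, and it accepts any fetch-style `Response` with a readable body.

```javascript
// Sketch: collect the streamed answer into one string.
// Works with any fetch-style Response whose body is a ReadableStream.
async function readFullAnswer(response) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let answer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    answer += decoder.decode(value, { stream: true });
  }
  answer += decoder.decode(); // flush any bytes buffered mid-character
  return answer;
}
```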

## Rate limits

The assistant API has the following limits:
- 10,000 requests per Mintlify organization per hour
- 10,000 requests per IP per day
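A client that exceeds these limits can expect HTTP 429 responses; one way to handle them is exponential backoff. The sketch below assumes a `sendMessage` wrapper around the fetch call shown earlier, and the retry schedule is an assumption, not an API guarantee.

```javascript
// Sketch: retry a request on HTTP 429 with exponential backoff.
// `sendMessage` is any zero-argument function returning a fetch-style
// Response promise (hypothetical wrapper, not part of the API).
async function withBackoff(sendMessage, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await sendMessage();
    if (response.status !== 429) return response;
    // Wait 1x, 2x, 4x ... the base delay before retrying.
    const delayMs = baseDelayMs * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limited: retries exhausted');
}
```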

## Frontend integration

For React applications, use the [useChat hook from ai-sdk](https://ai-sdk.dev/docs/reference/ai-sdk-ui/use-chat#usechat) to send requests and handle streaming responses automatically.

You can set `fp`, `threadId`, and `filter` in the `body` field of the options parameter passed to the hook:
```javascript
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: `/api/chat`,
body: {
fp: 'user-fingerprint-123',
threadId: 'conversation-thread-1',
filter: { version: 'v2', language: 'javascript' }
}
});
```