Commit e79fe2a

tpayet and claude committed
Improve chat API documentation clarity
Based on customer support feedback, this commit enhances the chat completions documentation with:

- Added link to index conversation settings for better content optimization
- Clarified that PATCH /chats/{workspace}/settings automatically creates workspaces
- Added explicit privacy section stating no conversation data is stored
- Improved workspace explanation in getting started guide
- Added conversation management example showing stateless nature
- Made documentation more welcoming and easier to understand

These changes address common customer questions and make the complex chat feature more approachable.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent be63fbc commit e79fe2a

File tree

2 files changed (+104 -1 lines changed)

guides/ai/getting_started_with_chat.mdx

Lines changed: 76 additions & 0 deletions
@@ -13,13 +13,24 @@ The chat completions feature is experimental and must be enabled before use. See
## Prerequisites

Before starting, ensure you have:

- A Meilisearch instance running (v1.15.1 or later)
- An API key from an LLM provider (OpenAI, Azure OpenAI, Mistral, Gemini, or access to a vLLM server)
- At least one index with searchable content
- The chat completions experimental feature enabled

## Quick start

### Understanding workspaces

Think of workspaces as different "assistants" you can create for various purposes. Each workspace can have its own personality (system prompt) and capabilities. Best of all, **workspaces are created automatically** when you configure them – no separate creation step is needed (see the sketch below).

For example:

- `customer-support` - A helpful assistant for customer queries
- `product-search` - An expert at finding the perfect product
- `docs-helper` - A technical assistant for documentation
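Configuring any of these names creates its workspace on the fly. The sketch below is illustrative only: the placeholder keys, the prompt text, and the `prompts.system` nesting are assumptions, and the full settings shape is documented in the [chat API reference](/reference/api/chats).

```bash
# Hypothetical example: PATCHing settings for a workspace that doesn't exist yet creates it.
# MEILISEARCH_URL, MEILISEARCH_API_KEY, and OPENAI_API_KEY are placeholders.
curl \
  -X PATCH 'MEILISEARCH_URL/chats/customer-support/settings' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "OPENAI_API_KEY",
    "prompts": {
      "system": "You are a helpful assistant for customer queries."
    }
  }'
```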
### Enable the chat completions feature

First, enable the chat completions experimental feature:
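A minimal sketch of that request, assuming a self-hosted instance and the standard `/experimental-features` route (replace `MEILISEARCH_URL` and `MEILISEARCH_API_KEY` with your own values):

```bash
# Turn on the experimental chat completions feature.
# MEILISEARCH_URL and MEILISEARCH_API_KEY are placeholders for your instance URL and admin key.
curl \
  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
```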
@@ -143,6 +154,7 @@ Workspaces allow you to create isolated chat configurations for different use ca
- **Documentation**: Tune for technical Q&A

Each workspace maintains its own:

- LLM provider configuration
- System prompt

@@ -267,6 +279,70 @@ except Exception as error:
</CodeGroup>

## Managing conversations

Since Meilisearch keeps your data private and doesn't store conversations, you'll need to manage conversation history in your application. Here's a simple approach:

<CodeGroup>

```javascript JavaScript
// Store conversation history in your app
const conversation = [];

// Add user message
conversation.push({ role: 'user', content: 'What is Meilisearch?' });

// Get response and add to history
const response = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: conversation,
  stream: true,
});

// Add assistant response to history
let assistantMessage = '';
for await (const chunk of response) {
  assistantMessage += chunk.choices[0]?.delta?.content || '';
}
conversation.push({ role: 'assistant', content: assistantMessage });

// Use the full conversation for follow-up questions
conversation.push({ role: 'user', content: 'Can it handle typos?' });
// ... continue the conversation
```

```python Python
# Store conversation history in your app
conversation = []

# Add user message
conversation.append({"role": "user", "content": "What is Meilisearch?"})

# Get response and add to history
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=conversation,
    stream=True,
)

# Add assistant response to history
assistant_message = ""
for chunk in response:
    if chunk.choices[0].delta.content is not None:
        assistant_message += chunk.choices[0].delta.content
conversation.append({"role": "assistant", "content": assistant_message})

# Use the full conversation for follow-up questions
conversation.append({"role": "user", "content": "Can it handle typos?"})
# ... continue the conversation
```

</CodeGroup>

<Tip>
Remember: Each request is independent, so always send the full conversation history if you want the AI to remember previous exchanges.
</Tip>

## Next steps

- Explore [advanced chat API features](/reference/api/chats)

reference/api/chats.mdx

Lines changed: 28 additions & 1 deletion
@@ -8,6 +8,10 @@ import { RouteHighlighter } from '/snippets/route_highlighter.mdx';
The `/chats` route enables AI-powered conversational search by integrating Large Language Models (LLMs) with your Meilisearch data. This feature allows users to ask questions in natural language and receive contextual answers based on your indexed content.

<Tip>
To optimize how your content is presented to the LLM, configure the [conversation settings for each index](/reference/api/settings#conversation). This allows you to customize descriptions, document templates, and search parameters for better AI responses.
</Tip>
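As an illustrative sketch only: the `chat` setting name, its fields, and the `movies` index below are assumptions drawn from the tip above, so check the linked settings reference for the exact shape.

```bash
# Hypothetical example: describe an index and shape its documents for the LLM.
# The "chat" setting name and its fields are assumptions; see the settings reference.
curl \
  -X PATCH 'MEILISEARCH_URL/indexes/movies/settings' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chat": {
      "description": "A catalog of movies with titles, overviews, and genres",
      "documentTemplate": "{{ doc.title }}: {{ doc.overview }}",
      "searchParameters": { "limit": 5 }
    }
  }'
```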
<Note>
This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:
@@ -19,6 +23,7 @@ curl \
  "chatCompletions": true
}'
```

</Note>

## Chat completions workspace object
@@ -39,6 +44,10 @@ curl \
Configure the LLM provider and settings for a chat workspace.

<Note>
If the specified workspace doesn't exist, this endpoint will automatically create it for you. No need to explicitly create workspaces beforehand!
</Note>

```json
{
  "source": "openAi",
@@ -82,7 +91,6 @@ Configure the LLM provider and settings for a chat workspace.
| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it |
| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it |

### Request body

```json
@@ -391,6 +399,19 @@ curl \
}
```

## Privacy and data storage

<Capsule intent="note">
🔒 **Your conversations are private**: Meilisearch does not store any conversation history or context between requests. Each chat completion request is stateless and independent. Any conversation continuity must be managed by your application.
</Capsule>

This design ensures:

- Complete privacy of user conversations
- No data retention of questions or answers
- Full control over conversation history in your application
- Compliance with data protection regulations

## Authentication

The chat feature integrates with Meilisearch's authentication system:
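For example, a request to the chat completions endpoint passes a Meilisearch API key as a bearer token. A minimal sketch, assuming the OpenAI-compatible request body shown in the getting started guide and an existing `customer-support` workspace:

```bash
# Illustrative request: authenticate with a bearer token and chat against a workspace.
# MEILISEARCH_URL and MEILISEARCH_API_KEY are placeholders; the model and messages are examples.
curl \
  -X POST 'MEILISEARCH_URL/chats/customer-support/chat/completions' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "user", "content": "What is Meilisearch?" }
    ],
    "stream": true
  }'
```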
@@ -549,11 +570,13 @@ This tool reports real-time progress of internal search operations. When declare
**Purpose**: Provides transparency about search operations and reduces perceived latency by showing users what's happening behind the scenes.

**Arguments**:

- `call_id`: Unique identifier to track the search operation
- `function_name`: Name of the internal function being executed (e.g., "_meiliSearchInIndex")
- `function_parameters`: JSON-encoded string containing search parameters like `q` (query) and `index_uid`

**Example Response**:

```json
{
  "function": {
@@ -570,12 +593,14 @@ Since the `/chats/{workspace}/chat/completions` endpoint is stateless, this tool
**Purpose**: Maintains conversation context for better response quality in subsequent requests by preserving tool calls and results.

**Arguments**:

- `role`: Message author role ("user" or "assistant")
- `content`: Message content (for tool results)
- `tool_calls`: Array of tool calls made by the assistant
- `tool_call_id`: ID of the tool call this message responds to

**Example Response**:

```json
{
  "function": {
@@ -592,10 +617,12 @@ This tool provides the source documents that were used by the LLM to generate re
**Purpose**: Shows users which documents were used to generate responses, improving trust and enabling source verification.

**Arguments**:

- `call_id`: Matches the `call_id` from `_meiliSearchProgress` to associate queries with results
- `documents`: JSON object containing the source documents with only displayed attributes

**Example Response**:

```json
{
  "function": {

0 commit comments