
Commit f707cfa

Add documentation for Meilisearch chatCompletions feature
- Add conceptual overview of conversational search in learn/ai_powered_search/
- Add comprehensive API reference for /chats endpoints in reference/api/
- Add practical implementation guide in guides/ai/
- Update experimental features page to include chatCompletions
- Support all 5 LLM sources: openAi, azureOpenAi, mistral, gemini, vLlm
- Document OpenAI SDK compatibility with code examples
- Add navigation entries in docs.json under Artificial intelligence section
1 parent b9ba8b3 commit f707cfa

5 files changed, +727 -3 lines changed


docs.json

Lines changed: 3 additions & 0 deletions
```diff
@@ -165,6 +165,7 @@
       "group": "AI-powered search",
       "pages": [
         "learn/ai_powered_search/getting_started_with_ai_search",
+        "learn/ai_powered_search/conversational_search_with_chat",
         "learn/ai_powered_search/configure_rest_embedder",
         "learn/ai_powered_search/document_template_best_practices",
         "learn/ai_powered_search/image_search_with_user_provided_embeddings",
@@ -329,6 +330,7 @@
         "reference/api/network",
         "reference/api/similar",
         "reference/api/facet_search",
+        "reference/api/chats",
         "reference/api/tasks",
         "reference/api/batches",
         "reference/api/keys",
@@ -377,6 +379,7 @@
       {
         "group": "Artificial intelligence",
         "pages": [
+          "guides/ai/getting_started_with_chat",
           "guides/ai/mcp",
           "guides/embedders/openai",
           "guides/langchain",
```
guides/ai/getting_started_with_chat.mdx

Lines changed: 222 additions & 0 deletions
---
title: Getting started with conversational search
sidebarTitle: Getting started with chat
description: Learn how to implement AI-powered conversational search in your application
---

import { Warning, Note } from '/snippets/notice_tag.mdx'

This guide walks you through implementing Meilisearch's chatCompletions feature to create conversational search experiences in your application.

<Warning>
The chatCompletions feature is experimental and must be enabled before use. See [experimental features](/reference/api/experimental_features) for activation instructions.
</Warning>
## Prerequisites

Before starting, ensure you have:
- A Meilisearch instance running (v1.15.1 or later)
- An API key for an LLM provider (OpenAI, Azure OpenAI, Mistral, or Gemini), or access to a vLLM server
- At least one index with searchable content
- The chatCompletions experimental feature enabled
## Quick start

### 1. Enable the chatCompletions feature

First, enable the chatCompletions experimental feature:

```bash
curl \
  -X PATCH 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
```
### 2. Configure a chatCompletions workspace

Create a workspace with your LLM provider settings. Here are examples for different providers:

<CodeGroup>

```bash openAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-...",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash azureOpenAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "azureOpenAi",
    "apiKey": "your-azure-key",
    "baseUrl": "https://your-resource.openai.azure.com",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash mistral
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "mistral",
    "apiKey": "your-mistral-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash gemini
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "gemini",
    "apiKey": "your-gemini-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```
```bash vLlm
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "vLlm",
    "baseUrl": "http://localhost:8000",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

</CodeGroup>
### 3. Send your first chatCompletions request

Now you can start a conversation:

```bash
curl \
  -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \
  -H 'Authorization: Bearer DEFAULT_CHAT_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What is Meilisearch?"
      }
    ],
    "stream": true
  }'
```
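With `stream: true`, the response arrives as server-sent events in the same chunk format as OpenAI's chat completions API. An abbreviated, illustrative excerpt of what the stream looks like (all field values are placeholders):

```
data: {"object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"Meilisearch is"}}]}

data: {"object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" an open-source search engine..."}}]}

data: [DONE]
```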
## Understanding workspaces

Workspaces allow you to create isolated chat configurations for different use cases:

- **Customer support**: Configure with support-focused prompts
- **Product search**: Optimize for e-commerce queries
- **Documentation**: Tune for technical Q&A

Each workspace maintains its own:
- LLM provider configuration
- System prompt
- Access permissions
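For example, a support-focused workspace can live alongside `my-assistant` with its own prompt. The workspace name and prompt below are illustrative; the settings payload follows the same shape shown in step 2:

```bash
# Create or update an illustrative second workspace named "support-bot"
curl \
  -X PATCH 'http://localhost:7700/chats/support-bot/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-...",
    "prompts": {
      "system": "You are a customer support agent. Answer only from the indexed help-center articles."
    }
  }'
```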
## Building a chat interface with OpenAI SDK

Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:

<CodeGroup>

```javascript JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const completion = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of completion) {
  console.log(chunk.choices[0]?.delta?.content || '');
}
```
```python Python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7700/chats/my-assistant",
    api_key="YOUR_MEILISEARCH_API_KEY"
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Meilisearch?"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
```typescript TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const stream = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```

</CodeGroup>
## Next steps

- Explore [advanced chat API features](/reference/api/chats)
- Learn about [conversational search concepts](/learn/ai_powered_search/conversational_search_with_chat)
- Review [security best practices](/learn/security/basic_security)
learn/ai_powered_search/conversational_search_with_chat.mdx

Lines changed: 107 additions & 0 deletions
---
title: Conversational search with chat
sidebarTitle: Conversational search
description: Learn how to implement AI-powered conversational search using Meilisearch's chat feature
---

import { Warning } from '/snippets/notice_tag.mdx'

Meilisearch's chatCompletions feature enables AI-powered conversational search, allowing users to ask questions in natural language and receive direct answers based on your indexed content. This feature transforms the traditional search experience into an interactive dialogue.

<Warning>
The chatCompletions feature is experimental and must be enabled through [experimental features](/reference/api/experimental_features). API specifications may change in future releases.
</Warning>

## What is conversational search?

Conversational search allows users to:
- Ask questions in natural language instead of using keywords
- Receive direct answers rather than just document links
- Maintain context across multiple questions
- Get responses grounded in your actual content

This approach bridges the gap between traditional search and modern AI experiences, making information more accessible and intuitive to find.
## How chatCompletions differs from traditional search

### Traditional search workflow
1. User enters keywords
2. Meilisearch returns matching documents
3. User reviews results to find answers

### Conversational search workflow
1. User asks a question in natural language
2. Meilisearch retrieves relevant documents
3. AI generates a direct answer based on those documents
4. User can ask follow-up questions
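To make the contrast concrete, here is a sketch of the two request styles. The index name `movies`, the workspace name `my-assistant`, and the API keys are placeholders:

```bash
# Traditional search: keywords in, matching documents out
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Authorization: Bearer SEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "time travel" }'

# Conversational search: a question in, a generated answer out
curl \
  -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \
  -H 'Authorization: Bearer CHAT_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "user", "content": "Which movies involve time travel?" }
    ],
    "stream": true
  }'
```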
## RAG implementation simplified

The chatCompletions feature implements a complete Retrieval Augmented Generation (RAG) pipeline in a single API endpoint. Traditional RAG implementations require:

- Multiple LLM calls for query optimization
- Separate vector database for semantic search
- Custom reranking solutions
- Complex pipeline management

Meilisearch's chatCompletions consolidates these into one streamlined process:

1. **Query understanding**: Automatically transforms questions into optimal search parameters
2. **Hybrid retrieval**: Combines keyword and semantic search for superior relevance
3. **Answer generation**: Uses your chosen LLM to generate responses
4. **Context management**: Maintains conversation history automatically
## When to use chatCompletions vs traditional search

### Use conversational search when:
- Users need direct answers to specific questions
- Content is informational (documentation, knowledge bases, FAQs)
- Users benefit from follow-up questions
- Natural language interaction improves user experience

### Use traditional search when:
- Users need to browse multiple options
- Results require comparison (e-commerce products, listings)
- Exact matching is critical
- Response time is paramount
## Architecture overview

The chatCompletions feature operates through workspaces, which are isolated configurations for different use cases or tenants. Each workspace can:

- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm)
- Apply custom prompts
- Access specific indexes based on API keys
- Maintain separate conversation contexts

### Key components

1. **Chat endpoint**: `/chats/{workspace}/chat/completions`
   - OpenAI-compatible interface
   - Supports streaming responses
   - Handles tool calling for index searches

2. **Workspace settings**: `/chats/{workspace}/settings`
   - Configure LLM provider and model
   - Set system prompts
   - Manage API credentials

3. **Index integration**:
   - Automatically searches relevant indexes
   - Uses existing Meilisearch search capabilities
   - Respects API key permissions
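As a quick illustration of how these pieces fit together, the sketch below inspects a workspace's configuration. It assumes the settings route accepts GET as well as PATCH; see the chats API reference for the authoritative list of routes:

```bash
# Inspect the current configuration of the "my-assistant" workspace
# (assumes GET is supported on the settings route)
curl \
  -X GET 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY'
```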
## Security considerations

The chatCompletions feature integrates with Meilisearch's existing security model:

- **API key permissions**: Chat only accesses indexes visible to the provided API key
- **Tenant tokens**: Support for multi-tenant applications
- **LLM credentials**: Stored securely in workspace settings
- **Content isolation**: Responses based only on indexed content
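Because chat access follows API key permissions, you can scope a conversation to specific content with Meilisearch's standard key-management API. In this sketch the index name is a placeholder and the `actions` list is illustrative; check the chats API reference for the exact permissions the chat route requires:

```bash
# Create a key limited to the hypothetical "help-center" index
# The "actions" list is illustrative; the chat route may need additional permissions
curl \
  -X POST 'http://localhost:7700/keys' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "description": "Chat key scoped to the help-center index",
    "actions": ["search"],
    "indexes": ["help-center"],
    "expiresAt": null
  }'
```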
## Next steps

- [Get started with chatCompletions implementation](/guides/ai/getting_started_with_chat)
- [Explore the chatCompletions API reference](/reference/api/chats)
- [Learn about experimental features](/reference/api/experimental_features)
