Commit dd301d6: improve explanation and tutorial
1 parent fc1f58b

2 files changed: +25, -141 lines


learn/chat/conversational_search.mdx

Lines changed: 3 additions & 7 deletions
@@ -3,11 +3,7 @@ title: What is conversational search?
 description: Conversational search is an AI-powered feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes
 ---
 
-## What is conversational search?
-
-In conversational search interfaces, users ask questions in everyday language instead of using keywords, and receive complete answers rather than links to articles.
-
-## When to use chat completions vs traditional search
+## When to use conversational vs traditional search
 
 Use conversational search when:
 
@@ -21,7 +17,7 @@ Use traditional search when:
 - Approximative answers are not acceptable
 - Your users need very quick responses
 
-## How chat completions differs from traditional search
+## How conversational search usage differs from traditional search
 
 ### Traditional search workflow
 
@@ -41,7 +37,7 @@ Use traditional search when:
 
 In the majority of cases, you should use the [`/chats` route](/reference/api/chats) to build a Retrieval Augmented Generation (RAG) pipeline. RAGs excel when working with unstructured data and emphasise high-quality responses.
 
-Meilisearch's chat completions consolidates RAG creation into a single process:
+Meilisearch's chat completions API consolidates RAG creation into a single process:
 
 1. **Query understanding**: automatically transforms questions into search parameters
 2. **Hybrid retrieval**: combines keyword and semantic search for better relevancy
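To make the consolidation concrete, here is a minimal sketch (in JavaScript, with hypothetical names) of the context-stuffing step a hand-rolled RAG pipeline performs between retrieval and generation, which the `/chats` route handles for you:

```javascript
// Hypothetical sketch of a hand-rolled RAG step. `hits` stands in for
// documents returned by a regular Meilisearch search; the /chats route
// performs this retrieval and prompt assembly automatically.
function buildRagPrompt(question, hits) {
  // Number each retrieved document so the LLM can cite its sources
  const context = hits
    .map((hit, i) => `[${i + 1}] ${hit.text}`)
    .join('\n');
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}

const prompt = buildRagPrompt('What is Meilisearch?', [
  { text: 'Meilisearch is an open-source search engine.' },
]);
console.log(prompt);
```

In a hand-rolled pipeline this prompt would then be sent separately to the LLM; the chat completions API collapses retrieval, prompt assembly, and generation into one call.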

learn/chat/getting_started_with_chat.mdx

Lines changed: 22 additions & 134 deletions
@@ -1,33 +1,9 @@
 ---
 title: Getting started with conversational search
-description: This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application
+description: This article walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.
 ---
 
-Chat completions have three key components: index integration, workspace configuration, and the chat interface.
-
-operate through workspaces, which are isolated configurations for different use cases. Each workspace can:
-
-- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm)
-- Apply custom prompts
-- Access specific indexes based on API keys
-- Maintain separate conversation contexts
-
-### Key components
-
-1. **Chat endpoint**: `/chats/{workspace}/chat/completions`
-   - OpenAI-compatible interface
-   - Supports streaming responses
-   - Handles tool calling for index searches
-
-2. **Workspace settings**: `/chats/{workspace}/settings`
-   - Configure LLM provider and model
-   - Set system prompts
-   - Manage API credentials
-
-3. **Index integration**:
-   - Automatically searches relevant indexes
-   - Uses existing Meilisearch search capabilities
-   - Respects API key permissions
+To successfully implement a conversational search interface, you must follow three steps: configure indexes for chat usage, create chat workspaces targeting different use cases, and build a chat interface.
 
 ## Prerequisites
 
@@ -82,11 +58,9 @@ curl \
 }'
 ```
 
-## Configure your indexes for chat
+## Configure your indexes
 
-After activating the `/chats` route and obtaining an API key with chat access, you must configure the indexes your conversational interface has access to.
-
-Configure the `chat` settings for each index you want to be searchable via chat UI:
+After activating the `/chats` route and obtaining an API key with chat permissions, configure the `chat` settings for each index you want to be searchable via the chat UI:
 
 ```bash
 curl \
@@ -103,17 +77,23 @@ curl \
 ```
 
 - `description` gives the initial context of the conversation to the LLM. A good description improves relevance of the chat's answers
-- `documentTemplate` defines which document fields Meilisearch will send the AI provider. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance
+- `documentTemplate` defines the document data Meilisearch sends to the AI provider. Consult the [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance
 
 ## Configure a chat completions workspace
 
-The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. For example, you may have one workspace for publicly visible data, and another for data only available for logged in users.
+The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. Each workspace can:
+
+- Use different LLM providers (openAi, azureOpenAi, mistral, gemini, vLlm)
+- Establish separate conversation contexts via baseline prompts
+- Access a specific set of indexes
+
+For example, you may have one workspace for publicly visible data, and another for data only available to logged-in users.
 
 Create a workspace setting your LLM provider as its `source`:
 
 <CodeGroup>
 
-```bash openAi
+```bash OpenAI
 curl \
   -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
@@ -128,7 +108,7 @@ curl \
 }'
 ```
 
-```bash azureOpenAi
+```bash Azure OpenAI
 curl \
   -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
@@ -143,7 +123,7 @@ curl \
 }'
 ```
 
-```bash mistral
+```bash Mistral
 curl \
   -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
@@ -157,7 +137,7 @@ curl \
 }'
 ```
 
-```bash gemini
+```bash Gemini
 curl \
   -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
@@ -171,7 +151,7 @@ curl \
 }'
 ```
 
-```bash vLlm
+```bash vLLM
 curl \
   -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
@@ -191,11 +171,11 @@ Which fields are mandatory will depend on your chosen provider `source`. In most
 
 `baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface.
 
-`prompts.system` gives the conversational search bot the baseline context of your users and their questions.
+`prompts.system` gives the conversational search bot the baseline context of your users and their questions. The `prompts` object accepts a few other fields that give the agent further guidance on using the information it finds via Meilisearch. In real-life scenarios, filling in these fields improves the quality of conversational search results.
 
 ## Send your first chat completions request
 
-You have finished configuring your conversational search agent. Use `curl` in your terminal to confirm everything is working. Sending a streaming query to the chat completions API route:
+You have finished configuring your conversational search agent. To test that everything is working as expected, send a streaming `curl` query to the chat completions API route:
 
 ```bash
 curl -N \
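As a reference for those extra `prompts` fields, here is a hedged sketch of a fuller workspace payload. The field names (`searchDescription`, `searchQParam`, `searchIndexUidParam`) follow Meilisearch's chat settings reference, but verify them against the version you run; the values are illustrative placeholders:

```shell
# Hedged sketch only: extra prompts fields per Meilisearch's chat
# settings reference; confirm field names against your version.
curl \
  -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "prompts": {
      "system": "You are a helpful assistant answering questions about our documentation.",
      "searchDescription": "Tell the agent when it should search the indexes",
      "searchQParam": "Tell the agent how to phrase the q search parameter",
      "searchIndexUidParam": "Tell the agent how to choose which index to search"
    }
  }'
```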
@@ -251,11 +231,9 @@ In most cases, that is only the beginning of adding conversational search to you
 
 ### Building a chat interface using the OpenAI SDK
 
-Creating a full chat interface is out of scope for this tutorial, but here is one important tip.
+Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI.
 
-Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI:
-
-<CodeGroup>
+Integrating Meilisearch and the OpenAI SDK with JavaScript would look like this:
 
 ```javascript JavaScript
 import OpenAI from 'openai';
@@ -275,97 +253,7 @@ for await (const chunk of completion) {
 }
 ```
 
-```python Python
-from openai import OpenAI
-
-client = OpenAI(
-    base_url="http://localhost:7700/chats/WORKSPACE_NAME",
-    api_key="YOUR_CHAT_API_KEY"
-)
-
-stream = client.chat.completions.create(
-    model="gpt-3.5-turbo",
-    messages=[{"role": "user", "content": "USER_PROMPT"}]
-)
-
-for chunk in stream:
-    if chunk.choices[0].delta.content is not None:
-        print(chunk.choices[0].delta.content, end="")
-```
-
-```typescript TypeScript
-import OpenAI from 'openai';
-
-const client = new OpenAI({
-  baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME',
-  apiKey: 'YOUR_CHAT_API_KEY',
-});
-
-const stream = await client.chat.completions.create({
-  model: 'gpt-3.5-turbo',
-  messages: [{ role: 'user', content: 'USER_PROMPT' }]
-});
-
-for await (const chunk of stream) {
-  const content = chunk.choices[0]?.delta?.content || '';
-  process.stdout.write(content);
-}
-```
-
-</CodeGroup>
-
-### Error handling
-
-Use the OpenAI SDK's built-in functionality to handle errors without additional configuration:
-
-<CodeGroup>
-
-```javascript JavaScript
-import OpenAI from 'openai';
-
-const client = new OpenAI({
-  baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME',
-  apiKey: 'MEILISEARCH_KEY',
-});
-
-try {
-  const stream = await client.chat.completions.create({
-    model: 'gpt-3.5-turbo',
-    messages: [{ role: 'user', content: 'What is Meilisearch?' }],
-    stream: true,
-  });
-
-  for await (const chunk of stream) {
-    console.log(chunk.choices[0]?.delta?.content || '');
-  }
-} catch (error) {
-  // OpenAI SDK automatically handles streaming errors
-  console.error('Chat completion error:', error);
-}
-```
-
-```python Python
-from openai import OpenAI
-
-client = OpenAI(
-    base_url="http://localhost:7700/chats/WORKSPACE_NAME",
-    api_key="MEILISEARCH_KEY"
-)
-
-try:
-    stream = client.chat.completions.create(
-        model="gpt-3.5-turbo",
-        messages=[{"role": "user", "content": "What is Meilisearch?"}],
-        stream=True,
-    )
-
-    for chunk in stream:
-        if chunk.choices[0].delta.content is not None:
-            print(chunk.choices[0].delta.content, end="")
-except Exception as error:
-    # OpenAI SDK automatically handles streaming errors
-    print(f"Chat completion error: {error}")
-```
+Take particular note of the last lines, which output the streamed responses to the browser console. In a real-life application, you would instead print the response chunks to the user interface.
 
 </CodeGroup>
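On that last point, a minimal hypothetical helper shows how streamed deltas are typically collected before rendering them in a UI. The chunk shape mirrors the OpenAI SDK's streaming responses; the function name is illustrative, not part of either API:

```javascript
// Hypothetical helper: collect streamed delta chunks into the full
// answer, as you would before rendering it in a UI component.
// The chunk shape mirrors the OpenAI SDK's streaming responses.
function accumulateDeltas(chunks) {
  let answer = '';
  for (const chunk of chunks) {
    answer += chunk.choices[0]?.delta?.content || '';
  }
  return answer;
}

const answer = accumulateDeltas([
  { choices: [{ delta: { content: 'Meilisearch is ' } }] },
  { choices: [{ delta: { content: 'a search engine.' } }] },
  { choices: [{ delta: {} }] }, // final chunk may carry no content
]);
console.log(answer); // → "Meilisearch is a search engine."
```

In a real interface you would append each delta to the rendered message as it arrives rather than waiting for the full answer.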