---
title: Getting started with conversational search
sidebarTitle: Getting started with chat
description: Learn how to implement AI-powered conversational search in your application
---

import { Warning, Note } from '/snippets/notice_tag.mdx'

This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.

<Warning>
The chat completions feature is experimental and must be enabled before use. See [experimental features](/reference/api/experimental_features) for activation instructions.
</Warning>

## Prerequisites

Before starting, ensure you have:
- A running Meilisearch instance (v1.15.1 or later; see the version check below)
- An API key from an LLM provider (OpenAI, Azure OpenAI, Mistral, Gemini, or access to a vLLM server)
- At least one index with searchable content
- The chat completions experimental feature enabled
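
Before moving on, you can confirm the instance is reachable and running a recent enough version with Meilisearch's standard `/version` endpoint (assuming the instance address and master key used throughout this guide):

```bash
curl \
  -X GET 'http://localhost:7700/version' \
  -H 'Authorization: Bearer MASTER_KEY'
```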

## Quick start

### Enable the chat completions feature

First, enable the chat completions experimental feature:

```bash
curl \
  -X PATCH 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
```
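
The response should echo the instance's feature flags with `"chatCompletions": true`. You can read them back at any time to confirm:

```bash
curl \
  -X GET 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY'
```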

### Configure a chat completions workspace

Create a workspace with your LLM provider settings. Here are examples for different providers:

<CodeGroup>

```bash openAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-abc...",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash azureOpenAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "azureOpenAi",
    "apiKey": "your-azure-key",
    "baseUrl": "https://your-resource.openai.azure.com",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash mistral
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "mistral",
    "apiKey": "your-mistral-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash gemini
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "gemini",
    "apiKey": "your-gemini-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash vLlm
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "vLlm",
    "baseUrl": "http://localhost:8000",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

</CodeGroup>
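
To double-check a workspace's configuration, you can read its settings back from the same route:

```bash
curl \
  -X GET 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY'
```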

### Send your first chat completions request

Now you can start a conversation:

```bash
curl \
  -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \
  -H 'Authorization: Bearer DEFAULT_CHAT_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What is Meilisearch?"
      }
    ],
    "stream": true
  }'
```
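
`DEFAULT_CHAT_KEY` above is a placeholder for an API key authorized to access the chat completions route. If you are unsure which keys exist on your instance, you can list them with the standard `/keys` endpoint:

```bash
curl \
  -X GET 'http://localhost:7700/keys' \
  -H 'Authorization: Bearer MASTER_KEY'
```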

## Understanding workspaces

Workspaces allow you to create isolated chat configurations for different use cases:

- **Customer support**: Configure with support-focused prompts
- **Product search**: Optimize for e-commerce queries
- **Documentation**: Tune for technical Q&A

Each workspace maintains its own:
- LLM provider configuration
- System prompt
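
For example, you can list the workspaces configured on your instance to see this separation in practice (assuming your Meilisearch version exposes the workspace-listing route of the chats API):

```bash
curl \
  -X GET 'http://localhost:7700/chats' \
  -H 'Authorization: Bearer MASTER_KEY'
```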

## Building a chat interface with OpenAI SDK

Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:

<CodeGroup>

```javascript JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const completion = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of completion) {
  console.log(chunk.choices[0]?.delta?.content || '');
}
```

```python Python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7700/chats/my-assistant",
    api_key="YOUR_MEILISEARCH_API_KEY"
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Meilisearch?"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

```typescript TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const stream = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```

</CodeGroup>

## Error handling

When using the official OpenAI SDK with Meilisearch's chat completions endpoint, errors in streamed responses are handled natively by the SDK, so you can rely on its built-in error handling without additional configuration:

<CodeGroup>

```javascript JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

try {
  const stream = await client.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'What is Meilisearch?' }],
    stream: true,
  });

  for await (const chunk of stream) {
    console.log(chunk.choices[0]?.delta?.content || '');
  }
} catch (error) {
  // The OpenAI SDK surfaces streaming errors as exceptions
  console.error('Chat completion error:', error);
}
```

```python Python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7700/chats/my-assistant",
    api_key="YOUR_MEILISEARCH_API_KEY"
)

try:
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What is Meilisearch?"}],
        stream=True,
    )

    for chunk in stream:
        if chunk.choices[0].delta.content is not None:
            print(chunk.choices[0].delta.content, end="")
except Exception as error:
    # The OpenAI SDK surfaces streaming errors as exceptions
    print(f"Chat completion error: {error}")
```

</CodeGroup>

## Next steps

- Explore [advanced chat API features](/reference/api/chats)
- Learn about [conversational search concepts](/learn/ai_powered_search/conversational_search_with_chat)
- Review [security best practices](/learn/security/basic_security)