---
title: Getting started with conversational search
sidebarTitle: Getting started with chat
description: Learn how to implement AI-powered conversational search in your application
---

import { Warning, Note } from '/snippets/notice_tag.mdx'

This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.

<Warning>
The chat completions feature is experimental and must be enabled before use. See [experimental features](/reference/api/experimental_features) for activation instructions.
</Warning>

## Prerequisites

Before starting, ensure you have:
- A Meilisearch instance running v1.15.1 or later
- An API key from an LLM provider (OpenAI, Azure OpenAI, Mistral, Gemini, or access to a vLLM server)
- At least one index with searchable content
- The chat completions experimental feature enabled
## Quick start

### 1. Enable the chat completions feature

First, enable the `chatCompletions` experimental feature:

```bash
curl \
  -X PATCH 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
```

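To confirm the feature is active, you can query the same endpoint with a GET request (this assumes the same instance and master key as above):

```bash
curl \
  -X GET 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY'
```

The response should include `"chatCompletions": true`.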
### 2. Configure a chat completions workspace

Create a workspace with your LLM provider settings. Here are examples for different providers:

<CodeGroup>

```bash openAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-...",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash azureOpenAi
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "azureOpenAi",
    "apiKey": "your-azure-key",
    "baseUrl": "https://your-resource.openai.azure.com",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash mistral
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "mistral",
    "apiKey": "your-mistral-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash gemini
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "gemini",
    "apiKey": "your-gemini-key",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

```bash vLlm
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "vLlm",
    "baseUrl": "http://localhost:8000",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```

</CodeGroup>

### 3. Send your first chat completions request

Now you can start a conversation:

```bash
curl \
  -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \
  -H 'Authorization: Bearer DEFAULT_CHAT_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What is Meilisearch?"
      }
    ],
    "stream": true
  }'
```
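With `"stream": true`, the endpoint replies with OpenAI-style server-sent events: each event line is `data: {json}`, and the stream ends with `data: [DONE]`. A minimal sketch of extracting the text deltas from such a body (the sample payload below is illustrative, not real server output):

```python
import json

def extract_deltas(sse_body: str) -> str:
    """Concatenate the content deltas from an OpenAI-style SSE body."""
    answer = []
    for line in sse_body.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        answer.append(delta.get("content") or "")
    return "".join(answer)

# Illustrative stream, shaped like the chunks the endpoint sends
sample = (
    'data: {"choices": [{"delta": {"role": "assistant"}}]}\n'
    'data: {"choices": [{"delta": {"content": "Meilisearch is "}}]}\n'
    'data: {"choices": [{"delta": {"content": "a search engine."}}]}\n'
    "data: [DONE]\n"
)
print(extract_deltas(sample))  # Meilisearch is a search engine.
```

In practice the OpenAI SDKs shown below handle this parsing for you; the sketch is only to show what is on the wire.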

## Understanding workspaces

Workspaces allow you to create isolated chat configurations for different use cases:

- **Customer support**: Configure with support-focused prompts
- **Product search**: Optimize for e-commerce queries
- **Documentation**: Tune for technical Q&A

Each workspace maintains its own:
- LLM provider configuration
- System prompt
- Access permissions
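For example, a separate support workspace could reuse the same provider with a different system prompt (the workspace name and prompt here are illustrative):

```bash
curl \
  -X PATCH 'http://localhost:7700/chats/support-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-...",
    "prompts": {
      "system": "You are a customer support agent. Answer politely and only from the provided context."
    }
  }'
```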

## Building a chat interface with OpenAI SDK

Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:

<CodeGroup>

```javascript JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const completion = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of completion) {
  console.log(chunk.choices[0]?.delta?.content || '');
}
```

```python Python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7700/chats/my-assistant",
    api_key="YOUR_MEILISEARCH_API_KEY"
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Meilisearch?"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

```typescript TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:7700/chats/my-assistant',
  apiKey: 'YOUR_MEILISEARCH_API_KEY',
});

const stream = await client.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
```

</CodeGroup>
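The endpoint is stateless: each request carries the full conversation so far, so a multi-turn chat means appending every exchange to the `messages` array before the next request. A minimal sketch of that bookkeeping, as pure list handling with no network calls (the helper name is illustrative):

```python
def build_messages(history, user_input, system_prompt=None):
    """Return the messages array for the next request, given prior turns."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history)  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": user_input})
    return messages

# First turn: no history yet
history = []
first = build_messages(history, "What is Meilisearch?")

# After receiving a reply, record both sides of the exchange
history.append({"role": "user", "content": "What is Meilisearch?"})
history.append({"role": "assistant", "content": "Meilisearch is a search engine."})

# The second turn now includes the full context
second = build_messages(history, "Does it support typo tolerance?")
print([m["role"] for m in second])  # ['user', 'assistant', 'user']
```

The resulting list is what you would pass as `messages` to `client.chat.completions.create` in the SDK examples above.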

## Next steps

- Explore [advanced chat API features](/reference/api/chats)
- Learn about [conversational search concepts](/learn/ai_powered_search/conversational_search_with_chat)
- Review [security best practices](/learn/security/basic_security)