Commit 5f26595

Merge pull request #7754 from olguzzar/update-chat-tutorial

Update Chat tutorial to use request.model

2 parents a374d92 + 06fc763

File tree

1 file changed (+81, -127 lines)

api/extension-guides/chat-tutorial.md

Lines changed: 81 additions & 127 deletions
@@ -78,9 +78,9 @@ This code registers a chat participant with the following attributes:
 
 Finally, setting `isSticky: true` will automatically prepend the participant name in the chat input field after the user has started interacting with the participant.
 
-## Step 3: Craft the prompt and select the model
+## Step 3: Craft the prompt
 
-Now that the participant is registered, you can start implementing the logic for the code tutor. In the `extension.ts` file, you will define a prompt and select the model for the requests.
+Now that the participant is registered, you can start implementing the logic for the code tutor. In the `extension.ts` file, you will define a prompt for the requests.
 
 Crafting a good prompt is the key to getting the best response from your participant. Check out [this article](https://platform.openai.com/docs/guides/prompt-engineering) for tips on prompt engineering.
 
@@ -97,15 +97,9 @@ The second prompt is more specific and gives the participant a clear direction o
 const BASE_PROMPT = 'You are a helpful code tutor. Your job is to teach the user with simple descriptions and sample code of the concept. Respond with a guided overview of the concept in a series of messages. Do not give the user the answer directly, but guide them to find the answer themselves. If the user asks a non-programming question, politely decline to respond.';
 ```
 
-You also need to select the model for the requests. gpt-4o is recommended since it is fast and high quality.
-
-```ts
-const MODEL_SELECTOR: vscode.LanguageModelChatSelector = { vendor: 'copilot', family: 'gpt-4o' }
-```
-
 ## Step 4: Implement the request handler
 
-Now that the prompt and model are selected, you need to implement the request handler. This is what will process the user's chat request. You will define the request handler, perform logic for processing the request, and return a response to the user.
+Now that the prompt is selected, you need to implement the request handler. This is what will process the user's chat request. You will define the request handler, perform logic for processing the request, and return a response to the user.
 
 First, define the handler:
 
@@ -117,56 +111,31 @@ const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, c
 }
 ```
 
-Within the body of this handler, initialize the prompt and model. Check that the model returned successfully.
-
-```ts
-// define a chat handler
-const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
-
-  // initialize the prompt and model
-  let prompt = BASE_PROMPT;
-
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
+Within the body of this handler, initialize the prompt and a `messages` array with the prompt. Then, send in what the user typed in the chat box. You can access this through `request.prompt`.
 
-  // make sure the model is available
-  if (model) {
-
-  }
-
-  return;
-};
-```
-
-Initialize a `messages` array with the prompt we crafted in the previous step. Then, send in what the user typed in the chat box. You can access this through `request.prompt`.
-
-Finally, send the request and stream the response to the user.
+Send the request using `request.model.sendRequest`, which will send the request using the currently selected model. Finally, stream the response to the user.
 
 ```ts
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
 
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
-
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
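The key change in this hunk is that the handler no longer calls `vscode.lm.selectChatModels`; it sends via `request.model`, the model the user currently has selected, and streams the fragments back. Below is a minimal sketch of that flow. The `*Mock` interfaces and `fakeModel` are illustrative stand-ins for the VS Code API (assumptions for this sketch, not the real `vscode` typings):

```typescript
// Illustrative mock types standing in for the slice of the VS Code chat API
// used here -- assumptions for this sketch, not the real vscode typings.
interface ChatResponseMock { text: AsyncIterable<string>; }
interface LanguageModelChatMock {
  sendRequest(messages: string[], options: object, token?: unknown): Promise<ChatResponseMock>;
}
interface ChatRequestMock { prompt: string; model: LanguageModelChatMock; }

// A fake "currently selected model" that echoes the last message in fragments.
const fakeModel: LanguageModelChatMock = {
  async sendRequest(messages: string[]) {
    const last = messages[messages.length - 1];
    async function* fragments() { yield 'echo: '; yield last; }
    return { text: fragments() };
  },
};

// Mirrors the handler's shape: send via request.model, then consume the
// streamed fragments (the real handler calls stream.markdown(fragment)).
async function handle(request: ChatRequestMock): Promise<string> {
  const chatResponse = await request.model.sendRequest([request.prompt], {});
  let out = '';
  for await (const fragment of chatResponse.text) {
    out += fragment;
  }
  return out;
}

export const result = handle({ prompt: 'hello', model: fakeModel });
```

Because the model travels with the request, the `if (model)` availability check that guarded the old code path has nothing left to guard, which is why the diff flattens the handler body by one level.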
@@ -183,28 +152,23 @@ You should further customize your participant by adding an icon for it. This wil
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
@@ -244,43 +208,38 @@ You'll need to retrieve that history and add it to the `messages` array. You wil
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
-
-    // get all the previous participant messages
-    const previousMessages = context.history.filter(
-      (h) => h instanceof vscode.ChatResponseTurn
-    );
-
-    // add the previous messages to the messages array
-    previousMessages.forEach((m) => {
-      let fullMessage = '';
-      m.response.forEach((r) => {
-        const mdPart = r as vscode.ChatResponseMarkdownPart;
-        fullMessage += mdPart.value.value;
-      });
-      messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
+
+  // get all the previous participant messages
+  const previousMessages = context.history.filter(
+    (h) => h instanceof vscode.ChatResponseTurn
+  );
+
+  // add the previous messages to the messages array
+  previousMessages.forEach((m) => {
+    let fullMessage = '';
+    m.response.forEach((r) => {
+      const mdPart = r as vscode.ChatResponseMarkdownPart;
+      fullMessage += mdPart.value.value;
     });
+    messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  });
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
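The history handling in this hunk filters `context.history` down to the participant's previous response turns, then flattens each turn's markdown parts into a single assistant message. That filtering-and-flattening logic can be sketched in isolation; the `*Mock` classes below are hypothetical stand-ins for `vscode.ChatResponseTurn`, `vscode.ChatRequestTurn`, and `ChatResponseMarkdownPart` (simplified for the sketch, not the real typings):

```typescript
// Hypothetical stand-ins for vscode.ChatResponseTurn / vscode.ChatRequestTurn
// and ChatResponseMarkdownPart -- simplified mocks, not the real typings.
class ChatResponseTurnMock {
  constructor(public response: { value: { value: string } }[]) {}
}
class ChatRequestTurnMock {
  constructor(public prompt: string) {}
}
type TurnMock = ChatResponseTurnMock | ChatRequestTurnMock;

// Mirrors the tutorial's history handling: keep only the participant's
// previous response turns, then flatten each turn's markdown parts into a
// single string (pushed as an Assistant message in the real handler).
function flattenHistory(history: TurnMock[]): string[] {
  const previousMessages = history.filter(
    (h): h is ChatResponseTurnMock => h instanceof ChatResponseTurnMock
  );
  return previousMessages.map((m) => {
    let fullMessage = '';
    m.response.forEach((r) => {
      fullMessage += r.value.value; // mdPart.value.value in the real code
    });
    return fullMessage;
  });
}

export const flattened = flattenHistory([
  new ChatRequestTurnMock('what is a stack?'),
  new ChatResponseTurnMock([{ value: { value: 'A stack is ' } }, { value: { value: 'LIFO.' } }]),
]);
```

Request turns are dropped by the filter, so only the tutor's own prior answers are replayed to the model as assistant messages; the user's new message is appended separately via `request.prompt`.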
@@ -333,47 +292,42 @@ If the command is referenced, update the prompt to the newly created `EXERCISES_
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
   if (request.command === 'exercise') {
     prompt = EXERCISES_PROMPT;
   }
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
-
-    // get all the previous participant messages
-    const previousMessages = context.history.filter(
-      (h) => h instanceof vscode.ChatResponseTurn
-    );
-
-    // add the previous messages to the messages array
-    previousMessages.forEach((m) => {
-      let fullMessage = '';
-      m.response.forEach((r) => {
-        const mdPart = r as vscode.ChatResponseMarkdownPart;
-        fullMessage += mdPart.value.value;
-      });
-      messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
+
+  // get all the previous participant messages
+  const previousMessages = context.history.filter(
+    (h) => h instanceof vscode.ChatResponseTurn
+  );
+
+  // add the previous messages to the messages array
+  previousMessages.forEach((m) => {
+    let fullMessage = '';
+    m.response.forEach((r) => {
+      const mdPart = r as vscode.ChatResponseMarkdownPart;
+      fullMessage += mdPart.value.value;
    });
+    messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  });
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
