Commit c9e37e9

Merge branch 'main' into copilot-docs-1124

2 parents c2a6f92 + b754e6d

18 files changed: +195 −161 lines

api/extension-guides/chat-tutorial.md

Lines changed: 81 additions & 127 deletions
````diff
@@ -78,9 +78,9 @@ This code registers a chat participant with the following attributes:
 
 Finally, setting `isSticky: true` will automatically prepend the participant name in the chat input field after the user has started interacting with the participant.
 
-## Step 3: Craft the prompt and select the model
+## Step 3: Craft the prompt
 
-Now that the participant is registered, you can start implementing the logic for the code tutor. In the `extension.ts` file, you will define a prompt and select the model for the requests.
+Now that the participant is registered, you can start implementing the logic for the code tutor. In the `extension.ts` file, you will define a prompt for the requests.
 
 Crafting a good prompt is the key to getting the best response from your participant. Check out [this article](https://platform.openai.com/docs/guides/prompt-engineering) for tips on prompt engineering.
 
````
````diff
@@ -97,15 +97,9 @@ The second prompt is more specific and gives the participant a clear direction o
 const BASE_PROMPT = 'You are a helpful code tutor. Your job is to teach the user with simple descriptions and sample code of the concept. Respond with a guided overview of the concept in a series of messages. Do not give the user the answer directly, but guide them to find the answer themselves. If the user asks a non-programming question, politely decline to respond.';
 ```
 
-You also need to select the model for the requests. gpt-4o is recommended since it is fast and high quality.
-
-```ts
-const MODEL_SELECTOR: vscode.LanguageModelChatSelector = { vendor: 'copilot', family: 'gpt-4o' }
-```
-
 ## Step 4: Implement the request handler
 
-Now that the prompt and model are selected, you need to implement the request handler. This is what will process the user's chat request. You will define the request handler, perform logic for processing the request, and return a response to the user.
+Now that the prompt is selected, you need to implement the request handler. This is what will process the user's chat request. You will define the request handler, perform logic for processing the request, and return a response to the user.
 
 First, define the handler:
````

````diff
@@ -117,56 +111,31 @@ const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, c
 }
 ```
 
-Within the body of this handler, initialize the prompt and model. Check that the model returned successfully.
-
-```ts
-// define a chat handler
-const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
-
-  // initialize the prompt and model
-  let prompt = BASE_PROMPT;
-
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
+Within the body of this handler, initialize the prompt and a `messages` array with the prompt. Then, send in what the user typed in the chat box. You can access this through `request.prompt`.
 
-  // make sure the model is available
-  if (model) {
-
-  }
-
-  return;
-};
-```
-
-Initialize a `messages` array with the prompt we crafted in the previous step. Then, send in what the user typed in the chat box. You can access this through `request.prompt`.
-
-Finally, send the request and stream the response to the user.
+Send the request using `request.model.sendRequest`, which will send the request using the currently selected model. Finally, stream the response to the user.
 
 ```ts
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
 
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
-
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
````
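The `for await` streaming loop in the updated handler is the standard async-iterable pattern. As a self-contained sketch, with an async generator mocking `chatResponse.text` and a plain callback standing in for `stream.markdown` (the real `vscode` APIs only run inside an extension host):

```typescript
// Mock of chatResponse.text: an async iterable that yields markdown fragments
// in the order the model streams them.
async function* mockTextStream(): AsyncGenerator<string> {
  yield 'A stack is ';
  yield 'a LIFO structure.';
}

// Mirrors the handler's loop: forward each fragment to the sink as it arrives
// (the sink plays the role of stream.markdown).
async function streamToSink(sink: (fragment: string) => void): Promise<void> {
  for await (const fragment of mockTextStream()) {
    sink(fragment);
  }
}

// Collect the fragments to show they arrive incrementally, in order.
const received: string[] = [];
streamToSink((f) => received.push(f)).then(() => {
  console.log(received.join('')); // "A stack is a LIFO structure."
});
```

Because the loop awaits each fragment, the UI can render partial output while the model is still generating, which is the point of streaming rather than awaiting the whole response.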
````diff
@@ -183,28 +152,23 @@ You should further customize your participant by adding an icon for it. This wil
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
````
````diff
@@ -244,43 +208,38 @@ You'll need to retrieve that history and add it to the `messages` array. You wil
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
-
-    // get all the previous participant messages
-    const previousMessages = context.history.filter(
-      (h) => h instanceof vscode.ChatResponseTurn
-    );
-
-    // add the previous messages to the messages array
-    previousMessages.forEach((m) => {
-      let fullMessage = '';
-      m.response.forEach((r) => {
-        const mdPart = r as vscode.ChatResponseMarkdownPart;
-        fullMessage += mdPart.value.value;
-      });
-      messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
+
+  // get all the previous participant messages
+  const previousMessages = context.history.filter(
+    (h) => h instanceof vscode.ChatResponseTurn
+  );
+
+  // add the previous messages to the messages array
+  previousMessages.forEach((m) => {
+    let fullMessage = '';
+    m.response.forEach((r) => {
+      const mdPart = r as vscode.ChatResponseMarkdownPart;
+      fullMessage += mdPart.value.value;
    });
+    messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  });
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
````
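The history-flattening logic added in this step can be exercised in isolation. Below is a plain-TypeScript sketch in which simple object literals stand in for `vscode.ChatResponseTurn` and its markdown parts (the mock shapes are assumptions modeled on the tutorial code, not the real `vscode` types):

```typescript
// Minimal stand-ins for the vscode shapes used by the tutorial: each response
// turn holds markdown parts whose text lives at part.value.value.
interface MockMarkdownPart { value: { value: string } }
interface MockResponseTurn { response: MockMarkdownPart[] }

// Concatenate every part of each previous turn into one assistant message,
// the same way the handler rebuilds fullMessage per turn.
function flattenHistory(history: MockResponseTurn[]): string[] {
  return history.map((turn) => {
    let fullMessage = '';
    turn.response.forEach((part) => {
      fullMessage += part.value.value;
    });
    return fullMessage;
  });
}

// One earlier answer streamed whole, one streamed in two fragments.
const history: MockResponseTurn[] = [
  { response: [{ value: { value: 'A closure captures its scope.' } }] },
  { response: [{ value: { value: 'Recursion needs ' } }, { value: { value: 'a base case.' } }] },
];

console.log(flattenHistory(history));
// [ 'A closure captures its scope.', 'Recursion needs a base case.' ]
```

The key design point is that a streamed response arrives as many fragments, so each prior turn must be re-joined into a single `Assistant` message before being replayed to the model.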
````diff
@@ -333,47 +292,42 @@ If the command is referenced, update the prompt to the newly created `EXERCISES_
 // define a chat handler
 const handler: vscode.ChatRequestHandler = async (request: vscode.ChatRequest, context: vscode.ChatContext, stream: vscode.ChatResponseStream, token: vscode.CancellationToken) => {
 
-  // initialize the prompt and model
+  // initialize the prompt
   let prompt = BASE_PROMPT;
 
   if (request.command === 'exercise') {
     prompt = EXERCISES_PROMPT;
   }
 
-  const [model] = await vscode.lm.selectChatModels(MODEL_SELECTOR);
-
-  // make sure the model is available
-  if (model) {
-    // initialize the messages array with the prompt
-    const messages = [
-      vscode.LanguageModelChatMessage.User(prompt),
-    ];
-
-    // get all the previous participant messages
-    const previousMessages = context.history.filter(
-      (h) => h instanceof vscode.ChatResponseTurn
-    );
-
-    // add the previous messages to the messages array
-    previousMessages.forEach((m) => {
-      let fullMessage = '';
-      m.response.forEach((r) => {
-        const mdPart = r as vscode.ChatResponseMarkdownPart;
-        fullMessage += mdPart.value.value;
-      });
-      messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  // initialize the messages array with the prompt
+  const messages = [
+    vscode.LanguageModelChatMessage.User(prompt),
+  ];
+
+  // get all the previous participant messages
+  const previousMessages = context.history.filter(
+    (h) => h instanceof vscode.ChatResponseTurn
+  );
+
+  // add the previous messages to the messages array
+  previousMessages.forEach((m) => {
+    let fullMessage = '';
+    m.response.forEach((r) => {
+      const mdPart = r as vscode.ChatResponseMarkdownPart;
+      fullMessage += mdPart.value.value;
    });
+    messages.push(vscode.LanguageModelChatMessage.Assistant(fullMessage));
+  });
 
-    // add in the user's message
-    messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
+  // add in the user's message
+  messages.push(vscode.LanguageModelChatMessage.User(request.prompt));
 
-    // send the request
-    const chatResponse = await model.sendRequest(messages, {}, token);
+  // send the request
+  const chatResponse = await request.model.sendRequest(messages, {}, token);
 
-    // stream the response
-    for await (const fragment of chatResponse.text) {
-      stream.markdown(fragment);
-    }
+  // stream the response
+  for await (const fragment of chatResponse.text) {
+    stream.markdown(fragment);
   }
 
   return;
````
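The `/exercise` branch kept by this hunk boils down to a pure prompt selection on `request.command`. Isolated as a tiny function (the prompt strings here are abbreviated placeholders, not the tutorial's full prompt text):

```typescript
// Abbreviated placeholders for the tutorial's prompt constants.
const BASE_PROMPT = 'You are a helpful code tutor.';
const EXERCISES_PROMPT = 'Provide practice exercises for the concept.';

// The handler's branch as a pure function: the /exercise slash command arrives
// as request.command === 'exercise'; anything else keeps the base prompt.
function pickPrompt(command: string | undefined): string {
  return command === 'exercise' ? EXERCISES_PROMPT : BASE_PROMPT;
}

console.log(pickPrompt(undefined));  // "You are a helpful code tutor."
console.log(pickPrompt('exercise')); // "Provide practice exercises for the concept."
```

Keeping the selection as a pure function of the command makes it trivial to unit-test without an extension host.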

blogs/2024/11/12/blog-video-demo.mp4

Lines changed: 3 additions & 0 deletions
```
version https://git-lfs.github.com/spec/v1
oid sha256:b9607b27b392426e562f816d5ae084fae3f9263c7fe024aedaf74fff263362f1
size 16095927
```

blogs/2024/11/12/changes.png

Lines changed: 3 additions & 0 deletions

blogs/2024/11/12/copilot-edits.png

Lines changed: 3 additions & 0 deletions
Lines changed: 59 additions & 0 deletions

---
Order: 89
TOCTitle: Introducing Copilot Edits
PageTitle: Introducing Copilot Edits
MetaDescription: Copilot Edits allows you to get to the changes you need in your workspace, across multiple files, using a UI designed for fast iteration. You can specify a set of files to be edited, and then use natural language to simply ask Copilot what you need. You stay in the flow of your code while reviewing the suggested changes, accepting what works, and iterating with follow-up asks.
Date: 2024-11-12
Author: Isidor Nikolic
---

# Introducing Copilot Edits (preview)

November 12th, 2024 by [Isidor Nikolic](https://x.com/isidorn)
Until recently, you could use GitHub Copilot in VS Code in two separate ways. You could modify code inside the editor using completions or Inline Chat. Or you could use Copilot to ask questions about your code in the Chat view. Copilot Edits, a preview feature, is a brand new way of using Copilot in VS Code. It combines the best of Chat and Inline Chat: the conversational flow and the ability to make inline changes across a set of files that you manage. And it just works.

<video src="blog-video-demo.mp4" title="Copilot Edits video" autoplay muted controls></video>

## Designed for iteration across multiple files

In Copilot Edits you specify a set of files to be edited, and then use natural language to ask Copilot what you need. Copilot Edits makes inline changes in your workspace, across multiple files, using a UI designed for fast iteration. You stay in the flow of your code while reviewing the suggested changes, accepting what works, and iterating with follow-up asks.

![Screenshot of Copilot Edits and the proposed inline changes](copilot-edits.png)

Copilot Edits works because it puts you in control, from setting the right context to accepting changes, not because it relies on an advanced model that never makes a mistake. The experience is iterative: when the model gets it wrong, you can review changes across multiple files, accept the good ones, and iterate until, together with Copilot, you arrive at the right solution. After accepting changes, you can run the code to verify them and, when needed, use Undo in Copilot Edits to get back to a previous working state.

## Stay in control

There is a new UI concept – the Working Set – that puts you in control and lets you define which files the edits should be applied to. You can add files to the Working Set by dragging and dropping files or editor tabs, or by pressing `#` to explicitly add them. Copilot Edits also automatically adds your active editors across editor groups to the Working Set.

![Screenshot of the Working Set, showing the user adding index.js](working-set.png)

Working Sets, together with the Undo and Redo functionality, give you precise control over changes and allow you to decide exactly where and how to apply them. Copilot Edits shows the generated edits in place, right in your code, and provides a code review flow where you can accept or discard each of the AI-generated edits. Copilot Edits will not make changes outside of the Working Set – the only exception being when it proposes to create a new file.

![Screenshot of the inline changes, showing the Accept / Discard widget](changes.png)

Copilot Edits is in the Secondary Side Bar (default on the right) so that you can interact with views in the Primary Side Bar, such as the Explorer, Debug, or Source Control view, while you're reviewing proposed changes. For example, you can have unit tests running in the [Testing](https://code.visualstudio.com/docs/editor/testing) view on the left while using the Copilot Edits view on the right, so that in every iteration you can verify whether the changes Copilot Edits proposed pass the unit tests.

Using your [voice](https://code.visualstudio.com/docs/editor/voice) is a natural experience while using Copilot Edits. Just talking to Copilot makes the back-and-forth smooth and conversational. It almost feels like interacting with a colleague who is an expert in the area, using the same kind of iterative flow that you would use in real-life pair programming.

Copilot Edits makes code editing with AI accessible to users with varying skills. As a product manager at Microsoft, I can quickly iterate on early ideas with Copilot Edits without much coding. For my VS Code engineering colleagues, Copilot Edits helps them easily create complex refactorings across multiple files in the [vscode repo](https://github.com/microsoft/vscode). For example, one team member who had zero Swift experience created a custom macOS app from scratch using Copilot Edits – after each iteration they ran the app, identified what was not working, and gave Copilot Edits appropriate follow-up instructions.

## Under the covers

Copilot Edits leverages a dual-model architecture to enhance editing efficiency and accuracy. First, a foundation language model considers the full context of the Edits session to generate initial edit suggestions. You can choose your preferred foundation language model from GPT-4o, o1-preview, o1-mini, Claude 3.5 Sonnet, and Gemini 1.5 Pro. For a performant experience, the team developed a speculative decoding endpoint, optimized for fast application of changes in files. The proposed edits from the foundation model are sent to the speculative decoding endpoint, which then proposes those changes inline in the editor. The speculative decoding endpoint is faster than a regular model, but the team knows it can be even faster and is working on improving this, so stay tuned.
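To make the speculative idea concrete, here is a toy sketch, invented purely for illustration and not how the actual endpoint works internally: a fast drafter proposes a whole run of tokens at once, and a verifier accepts the longest prefix it agrees with and corrects the first mismatch, so most tokens are confirmed in a single cheap pass.

```typescript
// Toy data: the verifier "knows" the true edited line; the drafter's cheap
// guess gets most of it right.
const target = ['const', 'x', '=', '42', ';'];
const draftGuess = ['const', 'x', '=', '0', ';'];

// One speculative round: accept draft tokens while the verifier agrees, then
// substitute the verifier's token at the first disagreement.
function speculate(draft: string[], truth: string[]): { accepted: string[]; matched: number } {
  const accepted: string[] = [];
  let matched = 0;
  for (let i = 0; i < draft.length && i < truth.length; i++) {
    if (draft[i] === truth[i]) {
      accepted.push(draft[i]);
      matched++;
    } else {
      accepted.push(truth[i]); // verifier wins on the first mismatch
      break;
    }
  }
  return { accepted, matched };
}

console.log(speculate(draftGuess, target));
// { accepted: [ 'const', 'x', '=', '42' ], matched: 3 }
```

Real speculative decoding verifies draft tokens against the large model's own predictions rather than string equality, but the accept-a-prefix-per-pass shape is what makes applying mostly-unchanged file content fast.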
## Available today

Copilot Edits is in preview and available to all [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) users today! The feedback that you provided in the past ([#1](https://github.com/microsoft/vscode-copilot-release/issues/95) and [#2](https://github.com/microsoft/vscode-copilot-release/issues/1098)) was instrumental in shipping this feature, so a big thank you!

For a detailed overview of Copilot Edits, please read the [official docs](https://code.visualstudio.com/docs/copilot/copilot-edits).

Next, the team plans to improve the performance of the apply-changes speculative decoding endpoint, support transitions from Copilot Chat into Copilot Edits by preserving context, suggest files for the Working Set, and allow Undo of suggested chunks.

If you want to be among the first to get your hands on these improvements, make sure to use [VS Code Insiders](https://code.visualstudio.com/insiders/) and the pre-release version of the [GitHub Copilot Chat](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) extension. To help improve the feature, please file issues in [our repo](https://github.com/microsoft/vscode-copilot-release).

Ultimately, it's not just about Copilot Edits itself but what it helps you build.

Happy coding!

Isidor

blogs/2024/11/12/working-set.png

Lines changed: 3 additions & 0 deletions
