
Commit fa12457

kukicado, MaedahBatool, and chris-sev authored
Hackathon - Cody for VS Code (#763)
## Pull Request approval

You will need to get your PR approved by at least one member of the Sourcegraph team. For reviews of docs formatting, styles, and component usage, please tag the docs team via the #docs Slack channel.

Co-authored-by: Maedah Batool <[email protected]>
Co-authored-by: Chris Sev <[email protected]>
1 parent 62d989d commit fa12457

File tree

1 file changed (+22 -21 lines)

docs/cody/clients/install-vscode.mdx: 22 additions & 21 deletions
@@ -15,7 +15,7 @@ The Cody extension by Sourcegraph enhances your coding experience in VS Code by

## Install the VS Code extension

-Follow these steps to install the Cody AI extension for VS Code:
+You can install Cody directly from the [VS Code extension marketplace listing](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai) or by following these steps within VS Code:

- Open VS Code editor on your local machine
- Click the **Extensions** icon in the Activity Bar on the side of VS Code, or use the keyboard shortcut `Cmd+Shift+X` (macOS) or `Ctrl+Shift+X` (Windows/Linux)
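
As an aside, the marketplace route in the added line can also be scripted. A minimal sketch, assuming the `code` CLI is on your PATH; `sourcegraph.cody-ai` is the extension ID taken from the marketplace URL above:

```shell
# Install the Cody extension by its marketplace ID
code --install-extension sourcegraph.cody-ai
```
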
@@ -136,7 +136,7 @@ A chat history icon at the top of your chat input window allows you to navigate

### Changing LLM model for chat

-<Callout type="note"> You need to be a Cody Free or Pro user to have multi-model selection capability. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.</Callout>
+<Callout type="note"> You need to be a Cody Free or Pro user to have multi-model selection capability. You can view which LLMs you have access to on our [supported LLMs page](/cody/capabilities/supported-models). Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.</Callout>

For Chat:

@@ -162,7 +162,7 @@ The `@-file` also supports line numbers to query the context of large files. You

When you `@-mention` files to add to Cody's context window, the file lookup takes `files.exclude`, `search.exclude`, and `.gitignore` files into account. This makes the file search up to 100ms faster.

-Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you get **File too large** errors for further more `@-mention` files.
+Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you'll receive **File too large** errors when attempting to `@-mention` additional files.

Cody defaults to showing @-mention context chips for all the context it intends to use. When you open a new chat, Cody will show context chips for your current repository and current file (or file selection if you have code highlighted).
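
For reference, the lookup described above honors VS Code's standard `files.exclude` and `search.exclude` settings, which map glob patterns to booleans. A minimal `settings.json` sketch (settings.json accepts comments); the patterns here are illustrative assumptions, not Cody defaults:

```json
{
  // Hidden from the explorer, and skipped by the @-mention file lookup
  "files.exclude": {
    "**/node_modules": true
  },
  // Excluded from search, and likewise skipped by the file lookup
  "search.exclude": {
    "**/dist": true
  }
}
```
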
@@ -178,7 +178,7 @@ When you have both a repository and files @-mentioned, Cody will search the repo

### @-mention context providers with OpenCtx

-<Callout type="info">OpenCtx context providers is in Experimental stage for all Cody users. Enterprise users can also use this but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10).</Callout>
+<Callout type="info">OpenCtx context providers are in Experimental stage for all Cody users. Enterprise users can also use this but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10).</Callout>

[OpenCtx](https://openctx.org/) is an open standard for bringing contextual info about code into your dev tools. Cody Free and Pro users can use OpenCtx providers to fetch and use context from the following sources:

@@ -209,6 +209,8 @@ If Cody's answer isn't helpful, you can try asking again with different context:
- Current file only: Re-run the prompt again using just the current file as context.
- Add context: Provides @-mention context options to improve the response by explicitly including files, symbols, remote repositories, or even web pages (by URL).

+![re-run-with-context](https://storage.googleapis.com/sourcegraph-assets/Docs/re-run-with-context.png)
+
## Context fetching mechanism

VS Code users on the Free or Pro plan use [local context](/cody/core-concepts/context#context-selection).
@@ -265,9 +267,11 @@ For customization and advanced use cases, you can create **Custom Commands** tai

Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that relevant code should live, and add a diff. For chat messages where Cody provides multiple code suggestions, you can apply each in sequence to go from chat suggestions to written code.

-Smart Apply also supports the executing of commands in the terminal. When you ask Cody a question related to terminal commands, you can now execute the suggestion in your terminal by clicking the `Execute` button in the chat window.
+![smart-apply-code](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-102024.png)
+
+Smart Apply also supports executing commands in the terminal. When you ask Cody a question related to terminal commands, you can execute the suggestion in your terminal by clicking the `Execute` button in the chat window.

-![smart-apply](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-102024.png)
+![smart-apply-execute](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-102024.png)

## Keyboard shortcuts

@@ -293,9 +297,9 @@ Cody also works with Cursor, Gitpod, IDX, and other similar VS Code forks. To ac

## Supported LLM models

-Claude Sonnet 3.5 is the default LLM model for inline edits and commands. If you've used Claude 3 Sonnet for inline edit or commands before, remember to manually update the model. The default model change only affects new users.
+Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you've used a different or older LLM model for inline edits or commands before, remember to manually change your model to Claude 3.5 Sonnet. Default model changes only affect new users.

-Users on Cody **Free** and **Pro** can choose from a list of supported LLM models for Chat and Commands.
+Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.

![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-102024.png)

@@ -316,21 +320,18 @@ You also get additional capabilities like BYOLLM (Bring Your Own LLM), supportin
To get autocomplete suggestions from Ollama locally, follow these steps:

- Install and run [Ollama](https://ollama.ai/)
-- Download one of the supported local models:
-  - `ollama pull deepseek-coder:6.7b-base-q4_K_M` for [deepseek-coder](https://ollama.ai/library/deepseek-coder)
-  - `ollama pull codellama:7b-code` for [codellama](https://ollama.ai/library/codellama)
-  - `ollama pull starcoder2:7b` for [codellama](https://ollama.ai/library/starcoder2)
+- Download one of the supported local models using `pull`. The `pull` command is used to download models from the Ollama library to your local machine.
+  - `ollama pull deepseek-coder-v2` for [deepseek-coder](https://ollama.com/library/deepseek-coder-v2)
+  - `ollama pull codellama:13b` for [codellama](https://ollama.ai/library/codellama)
+  - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2)
- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model:

```json
-
-{
-  "cody.autocomplete.advanced.provider": "experimental-ollama",
-  "cody.autocomplete.experimental.ollamaOptions": {
-    "url": "http://localhost:11434",
-    "model": "deepseek-coder:6.7b-base-q4_K_M"
-  }
-}
+"cody.autocomplete.advanced.provider": "experimental-ollama",
+"cody.autocomplete.experimental.ollamaOptions": {
+  "url": "http://localhost:11434",
+  "model": "deepseek-coder-v2"
+}
```

- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette)
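
Since the updated snippet above is a fragment, here is a sketch of how those keys sit inside a complete VS Code `settings.json` (assuming no other settings; the values mirror the example above):

```json
{
  // Use the experimental Ollama provider for autocomplete
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    // Ollama's default local endpoint
    "url": "http://localhost:11434",
    // Must name a model you've already pulled, e.g. `ollama pull deepseek-coder-v2`
    "model": "deepseek-coder-v2"
  }
}
```
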
@@ -348,7 +349,7 @@ To generate chat and commands with Ollama locally, follow these steps:
- Download [Ollama](https://ollama.com/download)
- Start Ollama (make sure the Ollama logo is showing up in your menu bar)
- Select a chat model (model that includes instruct or chat, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
-- Pull the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
+- Pull (download) the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
- Once the chat model is downloaded successfully, open Cody in VS Code
- Open a new Cody chat
- In the new chat panel, you should see the chat model you've pulled in the dropdown list
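
As a quick sanity check before opening Cody, you can confirm the Ollama server is up and the chat model is present. A minimal sketch, assuming Ollama's default port `11434` (the same one used in the autocomplete config above):

```shell
# The Ollama server replies "Ollama is running" when it is up
curl http://localhost:11434

# List locally pulled models; gemma:7b-instruct-q4_K_M should appear
ollama list
```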
