## Pull Request approval
You will need to get your PR approved by at least one member of the
Sourcegraph team. For reviews of docs formatting, styles, and component
usage, please tag the docs team via the #docs Slack channel.
---------
Co-authored-by: Maedah Batool <[email protected]>
Co-authored-by: Chris Sev <[email protected]>
docs/cody/clients/install-vscode.mdx: 22 additions & 21 deletions
@@ -15,7 +15,7 @@ The Cody extension by Sourcegraph enhances your coding experience in VS Code by
## Install the VS Code extension

- Follow these steps to install the Cody AI extension for VS Code:
+ You can install Cody directly from the [VS Code Marketplace listing](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai) or by following these steps directly within VS Code:

- Open the VS Code editor on your local machine
- Click the **Extensions** icon in the Activity Bar on the side of VS Code, or use the keyboard shortcut `Cmd+Shift+X` (macOS) or `Ctrl+Shift+X` (Windows/Linux)
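As a side note that is not part of this diff: the same extension can also be installed from a terminal with the VS Code CLI, using the extension ID from the Marketplace URL above. This is a minimal sketch and assumes the `code` command is available on your PATH:

```shell
# Install the Cody extension non-interactively via the VS Code CLI.
# The extension ID (sourcegraph.cody-ai) matches the Marketplace listing linked above.
code --install-extension sourcegraph.cody-ai
```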
@@ -136,7 +136,7 @@ A chat history icon at the top of your chat input window allows you to navigate
### Changing LLM model for chat

- <Callout type="note"> You need to be a Cody Free or Pro user to have multi-model selection capability. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.</Callout>
+ <Callout type="note"> You need to be a Cody Free or Pro user to have multi-model selection capability. You can view which LLMs you have access to on our [supported LLMs page](/cody/capabilities/supported-models). Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.</Callout>
For Chat:
@@ -162,7 +162,7 @@ The `@-file` also supports line numbers to query the context of large files. You
When you `@-mention` files to add to Cody's context window, the file lookup takes `files.exclude`, `search.exclude`, and `.gitignore` files into account. As a result, file search is faster, by up to 100ms.

- Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you get **File too large** errors for further `@-mention` files.
+ Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you'll receive **File too large** errors when attempting to `@-mention` additional files.
Cody defaults to showing @-mention context chips for all the context it intends to use. When you open a new chat, Cody will show context chips for your current repository and current file (or file selection if you have code highlighted).
@@ -178,7 +178,7 @@ When you have both a repository and files @-mentioned, Cody will search the repo
### @-mention context providers with OpenCtx

- <Callout type="info">OpenCtx context providers is in Experimental stage for all Cody users. Enterprise users can also use this but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10).</Callout>
+ <Callout type="info">OpenCtx context providers are in Experimental stage for all Cody users. Enterprise users can also use this but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10).</Callout>
[OpenCtx](https://openctx.org/) is an open standard for bringing contextual info about code into your dev tools. Cody Free and Pro users can use OpenCtx providers to fetch and use context from the following sources:
@@ -209,6 +209,8 @@ If Cody's answer isn't helpful, you can try asking again with different context:
- Current file only: Re-run the prompt using just the current file as context.
- Add context: Provides @-mention context options to improve the response by explicitly including files, symbols, remote repositories, or even web pages (by URL).
VS Code users on the Free or Pro plan use [local context](/cody/core-concepts/context#context-selection).
@@ -265,9 +267,11 @@ For customization and advanced use cases, you can create **Custom Commands** tai
Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that relevant code should live, and add a diff. For chat messages where Cody provides multiple code suggestions, you can apply each in sequence to go from chat suggestions to written code.
- Smart Apply also supports the executing of commands in the terminal. When you ask Cody a question related to terminal commands, you can now execute the suggestion in your terminal by clicking the `Execute` button in the chat window.
+ Smart Apply also supports executing commands in the terminal. When you ask Cody a question related to terminal commands, you can execute the suggestion in your terminal by clicking the `Execute` button in the chat window.
@@ -293,9 +297,9 @@ Cody also works with Cursor, Gitpod, IDX, and other similar VS Code forks. To ac
## Supported LLM models

- Claude Sonnet 3.5 is the default LLM model for inline edits and commands. If you've used Claude 3 Sonnet for inline edit or commands before, remember to manually update the model. The default model change only affects new users.
+ Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you've used a different or older LLM model for inline edits or commands before, remember to manually change your model to Claude 3.5 Sonnet. Default model changes only affect new users.

- Users on Cody **Free** and **Pro** can choose from a list of supported LLM models for Chat and Commands.
+ Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.
@@ -316,21 +320,18 @@ You also get additional capabilities like BYOLLM (Bring Your Own LLM), supportin
To get autocomplete suggestions from Ollama locally, follow these steps:

- Install and run [Ollama](https://ollama.ai/)
- - Download one of the supported local models:
- - `ollama pull deepseek-coder:6.7b-base-q4_K_M` for [deepseek-coder](https://ollama.ai/library/deepseek-coder)
- - `ollama pull codellama:7b-code` for [codellama](https://ollama.ai/library/codellama)
- - `ollama pull starcoder2:7b` for [codellama](https://ollama.ai/library/starcoder2)
+ - Download one of the supported local models using `pull`. The `pull` command is used to download models from the Ollama library to your local machine.
+ - `ollama pull deepseek-coder-v2` for [deepseek-coder](https://ollama.com/library/deepseek-coder-v2)
+ - `ollama pull codellama:13b` for [codellama](https://ollama.ai/library/codellama)
+ - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2)
- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model:
- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette)
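For the "Update Cody's VS Code settings" step above, the diff does not include the actual `settings.json` snippet. The sketch below shows roughly what that configuration tends to look like; the setting keys (`cody.autocomplete.advanced.provider`, `cody.autocomplete.experimental.ollamaOptions`) and the default Ollama endpoint `http://localhost:11434` are assumptions here, so verify them against the rendered docs page:

```json
{
  // Assumed setting keys; confirm against the published Cody docs.
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    // Ollama's default local endpoint.
    "url": "http://localhost:11434",
    // Use one of the models pulled in the previous step.
    "model": "deepseek-coder-v2"
  }
}
```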
@@ -348,7 +349,7 @@ To generate chat and commands with Ollama locally, follow these steps:
- Download [Ollama](https://ollama.com/download)
- Start Ollama (make sure the Ollama logo is showing up in your menu bar)
- Select a chat model (a model that includes `instruct` or `chat` in its name, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
- - Pull the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
+ - Pull (download) the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
- Once the chat model is downloaded successfully, open Cody in VS Code
- Open a new Cody chat
- In the new chat panel, you should see the chat model you've pulled in the dropdown list
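If the model does not appear in the dropdown, it can help to confirm that Ollama is running and serving the pulled model. This quick check is not part of the diff above; it assumes a default local Ollama install listening on port 11434:

```shell
# List the models Ollama has downloaded locally; gemma:7b-instruct-q4_K_M should appear.
ollama list

# Query Ollama's local HTTP API (default port 11434) to see installed models as JSON.
curl http://localhost:11434/api/tags
```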