docs/cody/clients/install-vscode.mdx (43 changes: 22 additions & 21 deletions)
@@ -15,7 +15,7 @@ The Cody extension by Sourcegraph enhances your coding experience in VS Code by

## Install the VS Code extension

You can install Cody directly from the [VS Code extension marketplace listing](https://marketplace.visualstudio.com/items?itemName=sourcegraph.cody-ai) or by following these steps directly within VS Code (a command-line alternative is sketched after the steps):

- Open VS Code editor on your local machine
- Click the **Extensions** icon in the Activity Bar on the side of VS Code, or use the keyboard shortcut `Cmd+Shift+X` (macOS) or `Ctrl+Shift+X` (Windows/Linux)
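
If you prefer the terminal and have VS Code's `code` CLI on your `PATH`, the extension can also be installed by its marketplace ID (the same ID as in the listing linked above). A minimal sketch:

```shell
# Install the Cody extension by its marketplace ID
code --install-extension sourcegraph.cody-ai

# Confirm it is installed
code --list-extensions | grep -i cody
```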
@@ -136,7 +136,7 @@ A chat history icon at the top of your chat input window allows you to navigate

### Changing LLM model for chat

<Callout type="note"> You need to be a Cody Free or Pro user to have multi-model selection capability. You can view which LLMs you have access to on our [supported LLMs page](/cody/capabilities/supported-models). Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.</Callout>

For Chat:

@@ -162,7 +162,7 @@ The `@-file` also supports line numbers to query the context of large files. You

When you `@-mention` files to add to Cody’s context window, the file lookup takes `files.exclude`, `search.exclude`, and `.gitignore` files into account. This makes file search up to 100ms faster.

Moreover, when you `@-mention` files, Cody will track the number of characters in those files against the context window limit of the selected chat model. As you `@-mention` multiple files, Cody will calculate how many tokens of the context window remain. When the remaining context window size becomes too small, you'll receive **File too large** errors when attempting to `@-mention` additional files.

Cody defaults to showing @-mention context chips for all the context it intends to use. When you open a new chat, Cody will show context chips for your current repository and current file (or file selection if you have code highlighted).

@@ -178,7 +178,7 @@ When you have both a repository and files @-mentioned, Cody will search the repo

### @-mention context providers with OpenCtx

<Callout type="info">OpenCtx context providers are in the Experimental stage for all Cody users. Enterprise users can also use them, but with limited support. If you have feedback or questions, please visit our [support forum](https://community.sourcegraph.com/c/openctx/10).</Callout>

[OpenCtx](https://openctx.org/) is an open standard for bringing contextual info about code into your dev tools. Cody Free and Pro users can use OpenCtx providers to fetch and use context from the following sources:

@@ -209,6 +209,8 @@ If Cody's answer isn't helpful, you can try asking again with different context:
- Current file only: Re-run the prompt using just the current file as context.
- Add context: Provides @-mention context options to improve the response by explicitly including files, symbols, remote repositories, or even web pages (by URL).

![re-run-with-context](https://storage.googleapis.com/sourcegraph-assets/Docs/re-run-with-context.png)

## Context fetching mechanism

VS Code users on the Free or Pro plan use [local context](/cody/core-concepts/context#context-selection).
@@ -265,9 +267,11 @@ For customization and advanced use cases, you can create **Custom Commands** tai

Cody lets you dynamically insert code from chat into your files with **Smart Apply**. Every time Cody provides you with a code suggestion, you can click the **Apply** button. Cody will then analyze your open code file, find where that relevant code should live, and add a diff. For chat messages where Cody provides multiple code suggestions, you can apply each in sequence to go from chat suggestions to written code.

![smart-apply-code](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-102024.png)

Smart Apply also supports executing commands in the terminal. When you ask Cody a question related to terminal commands, you can execute the suggestion in your terminal by clicking the `Execute` button in the chat window.

![smart-apply-execute](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-102024.png)

## Keyboard shortcuts

@@ -293,9 +297,9 @@ Cody also works with Cursor, Gitpod, IDX, and other similar VS Code forks. To ac

## Supported LLM models

Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you've used a different or older LLM model for inline edits or commands before, remember to manually change your model to Claude 3.5 Sonnet. Default model changes only affect new users.

Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.

![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-102024.png)

@@ -316,21 +320,18 @@ You also get additional capabilities like BYOLLM (Bring Your Own LLM), supportin
To get autocomplete suggestions from Ollama locally, follow these steps:

- Install and run [Ollama](https://ollama.ai/)
- Download one of the supported local models using `pull`. The `pull` command downloads a model from the Ollama library to your local machine:
  - `ollama pull deepseek-coder-v2` for [deepseek-coder-v2](https://ollama.com/library/deepseek-coder-v2)
  - `ollama pull codellama:13b` for [codellama](https://ollama.ai/library/codellama)
  - `ollama pull starcoder2:7b` for [starcoder2](https://ollama.ai/library/starcoder2)
- Update Cody's VS Code settings to use the `experimental-ollama` autocomplete provider and configure the right model:

```json
{
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",
    "model": "deepseek-coder-v2"
  }
}
```

- Confirm Cody uses Ollama by looking at the Cody output channel or the autocomplete trace view (in the command palette)
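
If autocomplete doesn't seem to be served by Ollama, it helps to first confirm that the Ollama server is running and that the configured model is available locally. A quick check from the terminal, assuming the default Ollama port used in the settings above:

```shell
# List the models that have been pulled locally
ollama list

# Ask the local Ollama server which models it is serving (default port 11434)
curl http://localhost:11434/api/tags
```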
@@ -348,7 +349,7 @@ To generate chat and commands with Ollama locally, follow these steps:
- Download [Ollama](https://ollama.com/download)
- Start Ollama (make sure the Ollama logo is showing up in your menu bar)
- Select a chat model (a model whose name includes `instruct` or `chat`, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
- Pull (download) the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
- Once the chat model is downloaded successfully, open Cody in VS Code
- Open a new Cody chat
- In the new chat panel, you should see the chat model you've pulled in the dropdown list
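
If the model doesn't appear in the dropdown, you can verify outside of VS Code that Ollama has the model and can run it. A minimal sketch using the example model above:

```shell
# Confirm the chat model was pulled
ollama list

# Send a one-off prompt directly to Ollama to confirm the model responds
ollama run gemma:7b-instruct-q4_K_M "Say hello"
```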