Commit 408cbdd

Faster Smart Apply updates
1 parent 67196e4 commit 408cbdd

File tree

2 files changed: +34 additions, -20 deletions


docs/cody/capabilities/chat.mdx

Lines changed: 6 additions & 0 deletions
@@ -74,6 +74,12 @@ Smart Apply also supports the executing of commands in the terminal. When you as

![smart-apply](https://storage.googleapis.com/sourcegraph-assets/Docs/smart-apply-2025.png)

+### Model used for Smart Apply
+
+To ensure low latency, Cody uses a more targeted Qwen 2.5 Coder model for Smart Apply. This model improves the responsiveness of the Smart Apply feature in both VS Code and JetBrains while preserving edit quality. Users on Cody Free, Pro, Enterprise Starter, and Enterprise plans get this default Qwen 2.5 Coder model for Smart Apply suggestions.
+
+Enterprise users not using Cody Gateway get a Claude Sonnet-based model for Smart Apply.
+
## Chat history

Cody keeps a history of your chat sessions. You can view it by clicking the **History** button in the chat panel. You can **Export** it to a JSON file for later use or click the **Delete all** button to clear the chat history.

docs/cody/capabilities/supported-models.mdx

Lines changed: 28 additions & 20 deletions
@@ -6,20 +6,20 @@ Cody supports a variety of cutting-edge large language models for use in chat an

<Callout type="note">Newer versions of Sourcegraph Enterprise, starting from v5.6, it will be even easier to add support for new models and providers, see [Model Configuration](/cody/enterprise/model-configuration) for more information.</Callout>

-| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
-| :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :----------- | :------------- | --- | --- | --- | --- |
-| OpenAI | [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
-| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
-| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
-| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
-| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
-| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
-| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [Claude 3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
-| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
-| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
-| Google | [Gemini 2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |
+| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
+| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :------- | :------ | :------------- | --- | --- | --- | --- |
+| OpenAI | [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
+| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
+| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
+| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
+| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
+| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
+| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
+| Google | [Gemini 2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |

<Callout type="note">To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version. </Callout>

@@ -39,13 +39,21 @@ In addition, Sourcegraph Enterprise customers using GCP Vertex (Google Cloud Pla

Cody uses a set of models for autocomplete which are suited for the low latency use case.

-| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
-| :-------------------- | :---------------------------------------------------------------------------------------- | :------- | :------ | :------------- | --- | --- | --- | --- |
-| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
-| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
-| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
-| | | | | | | | | |
+| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
+| :----------- | :---------------------------------------------------------------------------------------- | :------- | :------ | :------------- | --- | --- | --- | --- |
+| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
+| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
+| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
+| | | | | | | | | |

<Callout type="note">The default autocomplete model for Cody Free, Pro and Enterprise users is DeepSeek-Coder-V2.</Callout>

<Callout type="note">The DeepSeek model used by Sourcegraph is hosted by Fireworks.ai, and is hosted as a single-tenant service in a US-based data center. For more information see our [Cody FAQ](https://sourcegraph.com/docs/cody/faq#is-any-of-my-data-sent-to-deepseek).</Callout>
+
+## Smart Apply
+
+| **Model** | **Free** | **Pro** | **Enterprise** | | | | | |
+| :--------------- | :------- | :------ | :------------- | :--- | --- | --- | --- | --- |
+| Qwen 2.5 Coder | ✅ | ✅ | ✅ | | | | | |
+
+<Callout type="note">Enterprise users not using Cody Gateway get a Claude Sonnet-based model for Smart Apply.</Callout>
