
Commit fb691b0

Docs working branch for the Sourcegraph v6.1 release (#971)
Docs for v6.1:

- Autocomplete for Visual Studio
- Updated UI for @-mention prompts
- Deprecating old LLM modes
- Updating outdated images

Co-authored-by: Kalan <[email protected]>
1 parent 8658ac9 commit fb691b0

23 files changed: +112 -316 lines

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Sourcegraph Docs

-<!-- Working branch for JAN 2025 Release -->
+<!-- Working branch for FEB 2025 Release -->

 Welcome to the Sourcegraph documentation! We're excited to have you contribute to our docs. We've recently rearchitectured our docs tech stack — powered by Next.js, TailwindCSS and deployed on Vercel. This guide will walk you through the process of contributing to our documentation using the new tech stack.


docs.config.js

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 const config = {
-  DOCS_LATEST_VERSION: '6.0'
+  DOCS_LATEST_VERSION: '6.1'
 };

 module.exports = config;

docs/cody/capabilities/chat.mdx

Lines changed: 6 additions & 28 deletions
@@ -66,7 +66,7 @@ Cody chat can run offline with Ollama. The offline mode does not require you to

 ![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

-You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
+You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

 ## LLM selection

@@ -123,29 +123,6 @@ To use Cody's chat, you'll need the following:

 The enhanced chat experience includes everything in the Free plan, plus the following:

-## Intent detection
-
-Intent detection automatically analyzes user queries and determines whether to provide an AI chat or code search response. This functionality helps simplify developer workflows by providing the most appropriate type of response without requiring explicit mode switching.
-
-### How it works
-
-When a user submits a query in the chat panel, the intent detection component:
-
-- Analyzes the query content and structure
-- Determines the most appropriate response type (search or chat)
-- Returns results in the optimal format
-- Provides the ability to toggle between response types manually
-
-Let's look at an example of how this might work:
-
-#### Search-based response
-
-![Intent detection code search response](https://storage.googleapis.com/sourcegraph-assets/Docs/intent-detection-code-search-response-01242025.jpg)
-
-#### Chat-based response
-
-![Intent detection chat response](https://storage.googleapis.com/sourcegraph-assets/Docs/intent-detection-chat-response-01242025.jpg)
-
 ## Smart search integration

 The smart search integration enhances Sourcegraph's chat experience by providing lightweight code search capabilities directly within the chat interface. This feature simplifies developer workflows by offering quick access to code search without leaving the chat environment.
@@ -181,15 +158,16 @@ Search results generated through smart search integration can be automatically u
 The following is a general walkthrough of the chat experience:

 1. The user enters a query in the chat interface
-2. The system analyzes the query through intent detection
-3. If it's a search query:
+2. By default a user gets a chat response for the query
+3. To get integrated search results, toggle to **Run as search** from the drop-down selector or alternatively use `Cmd+Opt+Enter` (macOS)
+4. For search:
    - Displays ranked results with code snippets
    - Shows personalized repository ordering
    - Provides checkboxes to select context for follow-ups
-4. If it's a chat query:
+5. For chat:
    - Delivers AI-powered responses
    - Can incorporate previous search results as context
-5. Users can:
+6. Users can:
    - Switch between search and chat modes
    - Click on results to open files in their editor
    - Ask follow-up questions using selected context

docs/cody/capabilities/supported-models.mdx

Lines changed: 9 additions & 15 deletions
@@ -8,25 +8,20 @@ Cody supports a variety of cutting-edge large language models for use in chat an

 | **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
 | :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :----------- | :------------- | --- | --- | --- | --- |
-| OpenAI | [gpt-3.5 turbo](https://platform.openai.com/docs/models/gpt-3-5-turbo) | ✅ | ✅ | ✅ | | | | |
-| OpenAI | [gpt-4](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=to%20Apr%202023-,gpt%2D4,-Currently%20points%20to) | - | - | ✅ | | | | |
 | OpenAI | [gpt-4 turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
-| OpenAI | [gpt-4o](https://platform.openai.com/docs/models/gpt-4o) | - | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [gpt-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
+| OpenAI | [gpt-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
+| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
+| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
 | Anthropic | [claude-3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
 | Anthropic | [claude-3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3.5 Sonnet (New)](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
-| Anthropic | [claude-3 Opus](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
-| Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
-| Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
 | Ollama | [variety](https://ollama.com/) | experimental | experimental | - | | | | |
-| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) | | | | |
-| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) | | | | |
-| Google Gemini | [2.0 Flash Experimental](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
-| | | | | | | | | |
+| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
+| Google Gemini | [2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
+| Google Gemini | [2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |

-<Callout type="note">To use Claude 3 (Opus and Sonnets) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>
+<Callout type="note">To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>

 ## Autocomplete

@@ -37,7 +32,6 @@ Cody uses a set of models for autocomplete which are suited for the low latency
 | Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
 | Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
 | Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
-| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ | | | | |
 | Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - | | | | |
 | | | | | | | | | |


docs/cody/clients/feature-reference.mdx

Lines changed: 8 additions & 7 deletions
@@ -6,7 +6,7 @@

 | **Feature** | **VS Code** | **JetBrains** | **Visual Studio** | **Eclipse** | **Web** | **CLI** |
 | ---------------------------------------- | ----------- | ------------- | ----------------- | ----------- | -------------------- | ------- |
-| Chat |||||||
+| Chat |||||||
 | Chat history |||||||
 | Clear chat history |||||||
 | Edit sent messages |||||||
@@ -27,12 +27,13 @@

 ## Code Autocomplete

-| **Feature** | **VS Code** | **JetBrains** |
-| --------------------------------------------- | ----------- | ------------- |
-| Single and multi-line autocompletion |||
-| Cycle through multiple completion suggestions |||
-| Accept suggestions word-by-word |||
-| Ollama support (experimental) |||
+| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** |
+| --------------------------------------------- | ----------- | ------------- | ----------------- |
+| Single and multi-line autocompletion ||||
+| Cycle through multiple completion suggestions ||||
+| Accept suggestions word-by-word ||||
+| Ollama support (experimental) ||||
+

 Few exceptions that apply to Cody Pro and Cody Enterprise users:

docs/cody/clients/install-eclipse.mdx

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

 ## LLM selection

-Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
+Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google. At the same time, Cody Pro and Enterprise users can access more extended models.

 Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, simply download it and run it in Ollama.

docs/cody/clients/install-visual-studio.mdx

Lines changed: 11 additions & 1 deletion
@@ -43,7 +43,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

 ## LLM selection

-Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
+Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google. At the same time, Cody Pro and Enterprise users can access more extended models.

 Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, download it and run it in Ollama.

@@ -78,3 +78,13 @@ To help you get started, there are a few prompts that are available by default.
 - Generate unit tests

 ![cody-vs-prompts](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-vs-prompts-102024-2.png)
+
+## Autocomplete
+
+Cody for Visual Studio supports single and multi-line autocompletions. The autocomplete feature is available for the extension `v0.2.0` and above. It's enabled by default, with settings to turn it off.
+
+<video width="1920" height="1080" loop playsInline controls style={{ width: '100%', height: 'auto' }}>
+  <source src="https://storage.googleapis.com/sourcegraph-assets/Docs/visual-studio-autocomplete.mp4" type="video/mp4"/>
+</video>
+
+Advanced features like [auto-edit](/cody/capabilities/auto-edit) are not yet supported. To disable the autocomplete feature, you can do it from your Cody settings section.

docs/cody/clients/install-vscode.mdx

Lines changed: 5 additions & 5 deletions
@@ -136,7 +136,7 @@ For Edit:

 - On any file, select some code and a right-click
 - Select Cody->Edit Code (optionally, you can do this with Opt+K/Alt+K)
-- Select the default model available (this is Claude 3 Opus)
+- Select the default model available
 - See the selection of models and click the model you desire. This model will now be the default model going forward on any new edits

 ### Selecting Context with @-mentions
@@ -271,13 +271,13 @@ Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you'

 Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.

-![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-2025.png)
+![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-0225.jpg)

-Enterprise users get Claude 3 (Opus and Sonnet) as the default LLM models without extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.
+Enterprise users get Claude 3.5 Sonnet as the default LLM models without extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.

 <Callout type="info">For enterprise users on Amazon Bedrock: 3.5 Sonnet is unavailable in `us-west-2` but available in `us-east-1`. Check the current model availability on AWS and your customer's instance location before switching. Provisioned throughput via AWS is not supported for 3.5 Sonnet.</Callout>

-You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and cannot be changed within the editor. However, Cody Enterprise users when using Cody Gateway have the ability to [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) Anthropic (like Claude 2.0 and Claude Instant), OpenAI (GPT 3.5 and GPT 4) and Google Gemini 1.5 models (Flash and Pro).
+You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and cannot be changed within the editor. However, Cody Enterprise users when using Cody Gateway have the ability to [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) from Anthropic, OpenAI, and Google Gemini.

 <Callout type="note">Read more about all the supported LLM models [here](/cody/capabilities/supported-models)</Callout>

@@ -333,7 +333,7 @@ You can use Cody with or without an internet connection. The offline mode does n

 ![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

-You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
+You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

 ## Experimental models
