Merged
2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
# Sourcegraph Docs

<!-- Working branch for JAN 2025 Release -->
<!-- Working branch for FEB 2025 Release -->

Welcome to the Sourcegraph documentation! We're excited to have you contribute to our docs. We've recently rearchitected our docs tech stack — powered by Next.js and TailwindCSS, and deployed on Vercel. This guide will walk you through the process of contributing to our documentation using the new tech stack.

2 changes: 1 addition & 1 deletion docs.config.js
@@ -1,5 +1,5 @@
const config = {
DOCS_LATEST_VERSION: '6.0'
DOCS_LATEST_VERSION: '6.1'
};

module.exports = config;
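
For context, here is a minimal sketch of how a constant like this might be consumed elsewhere in the Next.js app. The import site and URL shape are illustrative assumptions, not the repo's actual code:

```js
// Hypothetical consumer of docs.config.js; the real call sites in this repo may differ.
const config = require('./docs.config');

// Build a versioned path for the latest docs release.
const latestDocsPath = `/v/${config.DOCS_LATEST_VERSION}`;

console.log(latestDocsPath); // prints "/v/6.1"
```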
34 changes: 6 additions & 28 deletions docs/cody/capabilities/chat.mdx
@@ -66,7 +66,7 @@ Cody chat can run offline with Ollama. The offline mode does not require you to

![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
You can still switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

## LLM selection

@@ -123,29 +123,6 @@ To use Cody's chat, you'll need the following:

The enhanced chat experience includes everything in the Free plan, plus the following:

## Intent detection

Intent detection automatically analyzes user queries and determines whether to provide an AI chat or code search response. This functionality helps simplify developer workflows by providing the most appropriate type of response without requiring explicit mode switching.

### How it works

When a user submits a query in the chat panel, the intent detection component:

- Analyzes the query content and structure
- Determines the most appropriate response type (search or chat)
- Returns results in the optimal format
- Provides the ability to toggle between response types manually

Let's look at an example of how this might work:

#### Search-based response

![Intent detection code search response](https://storage.googleapis.com/sourcegraph-assets/Docs/intent-detection-code-search-response-01242025.jpg)

#### Chat-based response

![Intent detection chat response](https://storage.googleapis.com/sourcegraph-assets/Docs/intent-detection-chat-response-01242025.jpg)

## Smart search integration

The smart search integration enhances Sourcegraph's chat experience by providing lightweight code search capabilities directly within the chat interface. This feature simplifies developer workflows by offering quick access to code search without leaving the chat environment.
@@ -181,15 +158,16 @@ Search results generated through smart search integration can be automatically u
The following is a general walkthrough of the chat experience:

1. The user enters a query in the chat interface
2. The system analyzes the query through intent detection
3. If it's a search query:
2. By default, a user gets a chat response to their query
3. To get integrated search results instead, toggle to **Run as search** from the drop-down selector, or press `Cmd+Opt+Enter` (macOS)
4. For search:
- Displays ranked results with code snippets
- Shows personalized repository ordering
- Provides checkboxes to select context for follow-ups
4. If it's a chat query:
5. For chat:
- Delivers AI-powered responses
- Can incorporate previous search results as context
5. Users can:
6. Users can:
- Switch between search and chat modes
- Click on results to open files in their editor
- Ask follow-up questions using selected context
24 changes: 9 additions & 15 deletions docs/cody/capabilities/supported-models.mdx
@@ -8,25 +8,20 @@ Cody supports a variety of cutting-edge large language models for use in chat an

| **Provider** | **Model** | **Free** | **Pro** | **Enterprise** | | | | |
| :------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :----------- | :------------- | --- | --- | --- | --- |
| OpenAI | [gpt-3.5 turbo](https://platform.openai.com/docs/models/gpt-3-5-turbo) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [gpt-4](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=to%20Apr%202023-,gpt%2D4,-Currently%20points%20to) | - | - | ✅ | | | | |
| OpenAI | [gpt-4 turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | - | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o](https://platform.openai.com/docs/models/gpt-4o) | - | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o](https://platform.openai.com/docs/models#gpt-4o) | - | ✅ | ✅ | | | | |
| OpenAI | [gpt-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) (experimental) | ✅ | ✅ | ✅ | | | | |
| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) (experimental) | - | - | ✅ | | | | |
| OpenAI | [o1](https://platform.openai.com/docs/models#o1) | - | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3.5 Sonnet (New)](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ | ✅ | | | | |
| Anthropic | [claude-3 Opus](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | ✅ | ✅ | | | | |
| Mistral | [mixtral 8x7b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Mistral | [mixtral 8x22b](https://mistral.ai/technology/#models:~:text=of%20use%20cases.-,Mixtral%208x7B,-Currently%20the%20best) | ✅ | ✅ | - | | | | |
| Ollama | [variety](https://ollama.com/) | experimental | experimental | - | | | | |
| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (Beta) | | | | |
| Google Gemini | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ (Beta) | | | | |
| Google Gemini | [2.0 Flash Experimental](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
| | | | | | | | | |
| Google Gemini | [1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ | ✅ | ✅ (beta) | | | | |
| Google Gemini | [2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅ | ✅ | | | | |
| Google Gemini | [2.0 Flash-Lite Preview](https://deepmind.google/technologies/gemini/flash/) (experimental) | ✅ | ✅ | ✅ | | | | |

<Callout type="note">To use Claude 3 (Opus and Sonnets) models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>
<Callout type="note">To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>

## Autocomplete

@@ -37,7 +32,6 @@ Cody uses a set of models for autocomplete which are suited for the low latency
| Fireworks.ai | [DeepSeek-Coder-V2](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) | ✅ | ✅ | ✅ | | | | |
| Fireworks.ai | [StarCoder](https://arxiv.org/abs/2305.06161) | - | - | ✅ | | | | |
| Anthropic | [claude Instant](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | - | - | ✅ | | | | |
| Google Gemini (Beta) | [1.5 Flash](https://deepmind.google/technologies/gemini/flash/) | - | - | ✅ | | | | |
| Ollama (Experimental) | [variety](https://ollama.com/) | ✅ | ✅ | - | | | | |
| | | | | | | | | |

15 changes: 8 additions & 7 deletions docs/cody/clients/feature-reference.mdx
Expand Up @@ -6,7 +6,7 @@

| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** | **Eclipse** | **Web** | **CLI** |
| ---------------------------------------- | ----------- | ------------- | ----------------- | ----------- | -------------------- | ------- |
| Chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chat history | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| Clear chat history | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| Edit sent messages | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
@@ -27,12 +27,13 @@

## Code Autocomplete

| **Feature** | **VS Code** | **JetBrains** |
| --------------------------------------------- | ----------- | ------------- |
| Single and multi-line autocompletion | ✅ | ✅ |
| Cycle through multiple completion suggestions | ✅ | ✅ |
| Accept suggestions word-by-word | ✅ | ❌ |
| Ollama support (experimental) | ✅ | ❌ |
| **Feature** | **VS Code** | **JetBrains** | **Visual Studio** |
| --------------------------------------------- | ----------- | ------------- | ----------------- |
| Single and multi-line autocompletion | ✅ | ✅ | ✅ |
| Cycle through multiple completion suggestions | ✅ | ✅ | ✅ |
| Accept suggestions word-by-word | ✅ | ❌ | ❌ |
| Ollama support (experimental) | ✅ | ❌ | ❌ |


A few exceptions apply to Cody Pro and Cody Enterprise users:

2 changes: 1 addition & 1 deletion docs/cody/clients/install-eclipse.mdx
@@ -50,7 +50,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

## LLM selection

Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google. At the same time, Cody Pro and Enterprise users can access more extended models.

Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, simply download it and run it in Ollama.

12 changes: 11 additions & 1 deletion docs/cody/clients/install-visual-studio.mdx
@@ -43,7 +43,7 @@ The chat input field has a default `@-mention` [context chips](#context-retrieva

## LLM selection

Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, Google, and Mixtral. At the same time, Cody Pro and Enterprise users can access more extended models.
Cody offers a variety of large language models (LLMs) to power your chat experience. Cody Free users can access the latest base models from Anthropic, OpenAI, and Google. At the same time, Cody Pro and Enterprise users can access more extended models.

Local models are also available through Ollama to Cody Free and Cody Pro users. To use a model in Cody chat, download it and run it in Ollama.
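
As a quick sanity check before picking a local model in Cody, you can confirm that Ollama is running and see which models it has downloaded by querying its local REST API. This is a minimal sketch, assuming Ollama's default port `11434` and its standard `/api/tags` endpoint; it is not part of the Cody extension:

```js
// Minimal sketch: list the models downloaded to a local Ollama instance.
// Assumes Ollama is running on its default port (11434); requires Node 18+ for global fetch.
async function listOllamaModels() {
  const res = await fetch('http://localhost:11434/api/tags');
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  const { models } = await res.json();
  return models.map((model) => model.name);
}

listOllamaModels().then((names) => console.log(names));
```

If the list comes back empty, download a model in Ollama first; Cody can only offer local models that Ollama already has.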

@@ -78,3 +78,13 @@ To help you get started, there are a few prompts that are available by default.
- Generate unit tests

![cody-vs-prompts](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-vs-prompts-102024-2.png)

## Autocomplete

Cody for Visual Studio supports single- and multi-line autocompletions. Autocomplete is available in extension versions `v0.2.0` and above and is enabled by default.

<video width="1920" height="1080" loop playsInline controls style={{ width: '100%', height: 'auto' }}>
<source src="https://storage.googleapis.com/sourcegraph-assets/Docs/visual-studio-autocomplete.mp4" type="video/mp4"/>
</video>

Advanced features like [auto-edit](/cody/capabilities/auto-edit) are not yet supported. You can disable autocomplete from the Cody settings section.
10 changes: 5 additions & 5 deletions docs/cody/clients/install-vscode.mdx
Expand Up @@ -136,7 +136,7 @@ For Edit:

- On any file, select some code and right-click
- Select Cody->Edit Code (optionally, you can do this with Opt+K/Alt+K)
- Select the default model available (this is Claude 3 Opus)
- Select the default model available
- Browse the selection of models and click the one you want; it becomes the default model for any new edits going forward

### Selecting Context with @-mentions
@@ -271,13 +271,13 @@ Claude 3.5 Sonnet is the default LLM model for inline edits and prompts. If you'

Users on Cody **Free** and **Pro** can choose from a list of [supported LLM models](/cody/capabilities/supported-models) for chat.

![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-2025.png)
![LLM-models-for-cody-free](https://storage.googleapis.com/sourcegraph-assets/Docs/llm-dropdown-options-0225.jpg)

Enterprise users get Claude 3 (Opus and Sonnet) as the default LLM models without extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.
Enterprise users get Claude 3.5 Sonnet as the default LLM model at no extra cost. Moreover, Enterprise users can use Claude 3.5 models through Cody Gateway, Anthropic BYOK, Amazon Bedrock (limited availability), and GCP Vertex.

<Callout type="info">For enterprise users on Amazon Bedrock: 3.5 Sonnet is unavailable in `us-west-2` but available in `us-east-1`. Check the current model availability on AWS and your customer's instance location before switching. Provisioned throughput via AWS is not supported for 3.5 Sonnet.</Callout>

You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self Hosted setups for flexible coding environments. Your site administrator determines the LLM, and cannot be changed within the editor. However, Cody Enterprise users when using Cody Gateway have the ability to [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) Anthropic (like Claude 2.0 and Claude Instant), OpenAI (GPT 3.5 and GPT 4) and Google Gemini 1.5 models (Flash and Pro).
You also get additional capabilities like BYOLLM (Bring Your Own LLM), supporting Single-Tenant and Self-Hosted setups for flexible coding environments. Your site administrator determines the LLM, which cannot be changed within the editor. However, Cody Enterprise users using Cody Gateway can [configure custom models](/cody/core-concepts/cody-gateway#configuring-custom-models) from Anthropic, OpenAI, and Google Gemini.
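
As a rough illustration, configuring models for Cody Gateway has been done through the instance's site configuration (which is JSONC, so comments are allowed). The exact keys and model identifiers vary by Sourcegraph version, so treat the following as a sketch and verify against the linked docs for your release:

```jsonc
// Sketch of a site configuration fragment; key names and model IDs are
// version-dependent, so confirm them for your Sourcegraph release before use.
{
  "cody.enabled": true,
  "completions": {
    "provider": "sourcegraph", // route requests through Cody Gateway
    "chatModel": "anthropic/claude-3-5-sonnet-20240620",
    "fastChatModel": "anthropic/claude-3-haiku-20240307",
    "completionModel": "fireworks/starcoder"
  }
}
```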

<Callout type="note">Read more about all the supported LLM models [here](/cody/capabilities/supported-models)</Callout>

@@ -333,7 +333,7 @@ You can use Cody with or without an internet connection. The offline mode does n

![offline-cody-with-ollama](https://storage.googleapis.com/sourcegraph-assets/Docs/cody-offline-ollama.jpg)

You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, Mixtral, etc.
You still have the option to switch to your Sourcegraph account whenever you want to use Claude, OpenAI, Gemini, etc.

## Experimental models
