description: This rule applies to all documentation files to ensure consistent SEO optimization and improve discoverability. It helps users and search engines understand the content of each page before reading it.
---
Every file in the docs folder must include a 'description' field in its frontmatter that accurately summarizes the content of the page in 100-160 characters. The description should be concise, keyword-rich, and explain what users will learn or accomplish from the page.
docs/autocomplete/how-to-use-it.mdx (+3 −2)

@@ -1,14 +1,15 @@
---
title: "Autocomplete"
-sidebarTitle: "How To Use It"
+sidebarTitle: "How To Use AI Autocomplete"
icon: "circle-question"
+description: "Learn how to use Continue's AI-powered code autocomplete feature with keyboard shortcuts for accepting, rejecting, or partially accepting inline suggestions as you type"
Autocomplete provides inline code suggestions as you type. To enable it, simply click the "Continue" button in the status bar at the bottom right of your IDE or ensure the "Enable Tab Autocomplete" option is checked in your IDE settings.
description: "Learn how to use Continue's AI chat assistant to solve coding problems without leaving your IDE, including code context sharing, applying generated solutions, and switching between models"
Chat makes it easy to ask for help from an LLM without needing to leave the IDE. You send it a task, including any relevant information, and it replies with the text / code most likely to complete the task. If it does not give you what you want, then you can send follow up messages to clarify and adjust its approach until the task is completed.
Chat is best used to understand and iterate on code or as a replacement for search engine queries.
-## Type a request and press enter
+## Send a Coding Question or Task to AI Chat
-You send it a question, and it replies with an answer. You tell it to solve a problem, and it provides you a solution. You ask for some code, and it generates it.
+To send a question, add it to the input box in the extension and press enter. You send it a question, and it replies with an answer. You tell it to solve a problem, and it provides you a solution. You ask for some code, and it generates it.
-## Highlight a code section to include as context
+## Add Code Context to AI Chat by Highlighting Code
You select a code section with your mouse, press `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) to send it to the LLM, and then ask for it to be explained to you or request it to be refactored in some way.
-## Reference context with the @ symbol
+## Use @ to Include Project Context in AI Chat Responses
-If there is information from the codebase, documentation, IDE, or other tools that you want to include as context, you can type @ to select and include it as context. You can learn more about how to use this in [Chat context selection](/features/chat/quick-start#3-use--for-additional-context).
+If there is information from the codebase, documentation, IDE, or other tools that you want to include as context, you can type @ to select and include it as context. You can learn more about how to use this in [Chat context selection](/features/chat/quick-start#how-to-use-%40-for-additional-context).
-## Apply generated code to your file
+## Insert AI-Generated Code Changes Directly into Your File
When the LLM replies with edits to a file, you can click the “Apply” button. This will update the existing code in the editor to reflect the suggested changes.
-## Start a fresh session for a new task
+## How to Begin a New AI Chat Session for a Different Coding Task
Once you complete a task and want to start a new one, press `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) to begin a new session, ensuring only relevant context for the next task is provided to the LLM.
-## Switch between different models
+## Change AI Models in Continue Chat for Different Coding Needs
If you have configured multiple models, you can switch between models using the dropdown or by pressing `cmd/ctrl` + `'`.
-description: "Model Context Protocol servers provide specialized functionality:"
+description: "Learn how to use Model Context Protocol (MCP) blocks in Continue to integrate external tools, connect databases, and extend your development environment."
---

+Model Context Protocol (MCP) blocks let Continue connect to external tools, systems, and databases by running MCP servers.
+
+These blocks make it possible to:
- **Enable integration** with external tools and systems
- **Create extensible interfaces** for custom capabilities
- **Support complex interactions** with your development environment
-Continue supports [many model providers](/customization/models#openai), including Anthropic, OpenAI, Gemini, Ollama, Amazon Bedrock, Azure, xAI, DeepSeek, and more. Models can have various roles like `chat`, `edit`, `apply`, `autocomplete`, `embed`, and `rerank`.
+Continue supports [many model providers](/customize/model-providers/top-level/openai), including Anthropic, OpenAI, Gemini, Ollama, Amazon Bedrock, Azure, xAI, DeepSeek, and more. Models can have various roles like `chat`, `edit`, `apply`, `autocomplete`, `embed`, and `rerank`.
Read more about roles [here](/customize/model-roles) and view [`models`](/reference#models) in the YAML Reference.
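As a sketch of what assigning roles can look like in a `config.yaml` (model names, providers, and layout here are illustrative assumptions; consult the YAML Reference for the exact schema):

```yaml
# Illustrative sketch only — not a verified config.
models:
  - name: Claude 3.5 Sonnet          # example model handling conversational roles
    provider: anthropic
    model: claude-3-5-sonnet-latest
    roles:
      - chat
      - edit
      - apply
  - name: Codestral                  # example model dedicated to autocomplete
    provider: mistral
    model: codestral-latest
    roles:
      - autocomplete
```

Splitting roles this way lets a fast, inexpensive model serve autocomplete while a stronger model handles chat and edits.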
-description: "Continue can be deeply customized. For example you might:"
+title: "Customization Overview"
+description: "Learn how to customize Continue with model providers, context providers, slash commands, and tools to create your perfect AI coding assistant"
---
-- **Change your Model Provider**. Continue allows you to choose your favorite or even add multiple model providers. This allows you to use different models for different tasks, or to try another model if you're not happy with the results from your current model. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can even self-host your own model provider if you'd like. Learn more about [model providers](/customize/model-providers/top-level/openai).
-- **Select different models for specific tasks**. Different Continue features can use different models. We call these _model roles_. For example, you can use a different model for chat than you do for autocomplete. Learn more about [model roles](/customize/model-roles).
-- **Add a Context Provider**. Context providers allow you to add information to your prompts, giving your LLM additional context to work with. Context providers allow you to reference snippets from your codebase, look up relevant documentation, use a search engine to find information, and much more. Learn more about [context providers](/customize/custom-providers).
-- **Create a Slash Command**. Slash commands allow you to easily add custom functionality to Continue. You can use a slash command that allows you to generate a shell command from natural language, or perhaps generate a commit message, or create your own custom command to do whatever you want. Learn more about [slash commands](/customize/deep-dives/slash-commands).
-- **Call external tools and functions**. Unchain your LLM with the power of tools using [Agent](/features/agent/quick-start). Add custom tools using [MCP Servers](/customization/mcp-tools).
+Continue can be deeply customized to fit your specific development workflow and preferences. This guide covers the main ways you can customize Continue to enhance your coding experience.
+
+## Change Your Model Provider
+
+Continue allows you to choose your favorite or even add multiple model providers. This allows you to use different models for different tasks, or to try another model if you're not happy with the results from your current model. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can even self-host your own model provider if you'd like. Learn more about [model providers](/customize/model-providers/top-level/openai).
+
+## Select Different Models for Specific Tasks
+
+Different Continue features can use different models. We call these _model roles_. For example, you can use a different model for chat than you do for autocomplete. Learn more about [model roles](/customize/model-roles).
+
+## Add a Context Provider
+
+Context providers allow you to add information to your prompts, giving your LLM additional context to work with. Context providers allow you to reference snippets from your codebase, look up relevant documentation, use a search engine to find information, and much more. Learn more about [context providers](/customize/custom-providers).
+
+## Create a Slash Command
+
+Slash commands allow you to easily add custom functionality to Continue. You can use a slash command that allows you to generate a shell command from natural language, or perhaps generate a commit message, or create your own custom command to do whatever you want. Learn more about [slash commands](/customize/deep-dives/slash-commands).
+
+## Call External Tools and Functions
+
+Unchain your LLM with the power of tools using [Agent](/features/agent/quick-start). Add custom tools using [MCP Servers](/customization/mcp-tools).
Whatever you choose, you'll probably start by editing your Assistant.
-## Editing your assistant
+## Edit Your Assistant
You can easily access your assistant configuration from the Continue Chat sidebar. Open the sidebar by pressing `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) and click the Assistant selector above the main chat input. Then, you can hover over an assistant and click the `new window` (hub assistants) or `gear` (local assistants) icon.

+## Manage Your Assistant
- See [Editing Hub Assistants](/hub/assistants/edit-an-assistant) for more details on managing your hub assistant.
- See the [Config Deep Dive](/reference) for more details on configuring local assistants.
docs/customization/rules.mdx (+1 −1)

@@ -11,7 +11,7 @@ Think of these as the guardrails for your AI coding assistants:
By implementing rules, you transform the AI from a generic coding assistant into a knowledgeable team member that understands your project's unique requirements and constraints.
-###How Rules Work
+## How Rules Work
Your assistant detects rule blocks and applies the specified rules while in [Agent](/features/agent/quick-start), [Chat](/features/chat/quick-start), and [Edit](/features/edit/quick-start) modes.
Continue indexes your codebase so that it can later automatically pull in the most relevant context from throughout your workspace. This is done via a combination of embeddings-based retrieval and keyword search. By default, all embeddings are calculated locally using `transformers.js` and stored locally in `~/.continue/index`.
<Info>
**Note:** `transformers.js` cannot be used in JetBrains IDEs. However, you can
select a different embeddings model from [the list
-here](../model-roles/embeddings.mdx).
+here](../model-roles/embeddings).
</Info>
+## How to Use @Codebase and @Folder Context Providers
Currently, the codebase retrieval feature is available as the "codebase" and "folder" context providers. You can use them by typing `@Codebase` or `@Folder` in the input box, and then asking a question. The contents of the input box will be compared with the embeddings from the rest of the codebase (or folder) to determine relevant files.
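The core of that comparison can be illustrated with a small, self-contained sketch. This is not Continue's actual implementation; the two-dimensional embedding vectors and function names are toy assumptions chosen to show the idea of cosine-similarity retrieval:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_emb: list[float], index: dict[str, list[float]], k: int = 5) -> list[str]:
    """Return the k file paths whose stored embeddings are closest to the query."""
    ranked = sorted(index, key=lambda path: cosine(query_emb, index[path]), reverse=True)
    return ranked[:k]
```

A real index would hold high-dimensional embeddings per code chunk (plus keyword-search results merged in), but the ranking step works the same way.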
+### When @Codebase Context Provider Is Useful
Here are some common use cases where it can be useful:
-- Asking high-level questions about your codebase
+- **Asking high-level questions about your codebase**
- "How do I add a new endpoint to the server?"
- "Do we use VS Code's CodeLens feature anywhere?"
- "Is there any code written already to convert HTML to markdown?"
-- Generate code using existing samples as reference
+- **Generate code using existing samples as reference**
- "Generate a new React component with a date picker, using the same patterns as existing components"
- "Write a draft of a CLI application for this project using Python's argparse"
- "Implement the `foo` method in the `bar` class, following the patterns seen in other subclasses of `baz`."
-- Use `@Folder` to ask questions about a specific folder, increasing the likelihood of relevant results
+- **Use `@Folder` to ask questions about a specific folder, increasing the likelihood of relevant results**
- "What is the main purpose of this folder?"
- "How do we use VS Code's CodeLens API?"
- Or any of the above examples, but with `@Folder` instead of `@Codebase`
+### When @Codebase Context Provider Is Not Useful
Here are use cases where it is not useful:
33
40
34
-
- When you need the LLM to see _literally every_ file in your codebase
41
+
-**When you need the LLM to see _literally every_ file in your codebase**
- "Find everywhere where the `foo` function is called"
- "Review our codebase and find any spelling mistakes"
-- Refactoring
+- **Refactoring tasks**
- "Add a new parameter to the `bar` function and update usages"
39
46
40
-
## Configuration
47
+
## How to Configure @Codebase Context Provider Settings
There are a few options that let you configure the behavior of the `@codebase` context provider, which are the same for the `@folder` context provider:
@@ -82,7 +89,7 @@ Final number of results to use after re-ranking (default: 5)
Whether to use re-ranking, which will allow initial selection of `nRetrieve` results, then will use an LLM to select the top `nFinal` results (default: true)
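Put together, a configuration using these options might look like the following sketch. The parameter names match those documented above, but the surrounding block layout is an assumption; check the configuration reference for the exact schema:

```yaml
# Sketch: tuning @codebase retrieval (the same params apply to @folder)
context:
  - provider: codebase
    params:
      nRetrieve: 25       # initial results from embeddings/keyword search
      nFinal: 5           # results kept after re-ranking
      useReranking: true  # have an LLM select the top nFinal results
```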
84
91
85
-
## Ignore files during indexing
92
+
## How to Ignore Files During Indexing
Continue respects `.gitignore` files in order to determine which files should not be indexed. If you'd like to exclude additional files, you can add them to a `.continueignore` file, which follows the exact same rules as `.gitignore`.
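For example, a `.continueignore` excluding build output and generated files might look like this (entries are illustrative; use whatever patterns fit your project):

```
# .continueignore — same pattern syntax as .gitignore
node_modules/
dist/
coverage/
*.min.js
```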
@@ -92,6 +99,6 @@ If you want to see exactly what files Continue has indexed, the metadata is stor
If you need to force a refresh of the index, reload the VS Code window with <kbd>cmd/ctrl</kbd> + <kbd>shift</kbd> + <kbd>p</kbd> + "Reload Window".
-## Repository map
+## How Repository Map Enhances Codebase Understanding
-Models in the Claude 3, Llama 3.1/3.2, Gemini 1.5, and GPT-4o families will automatically use a [repository map](../custom-providers#repository-map) during codebase retrieval, which allows the model to understand the structure of your codebase and use it to answer questions. Currently, the repository map only contains the filepaths in the codebase.
+Models in the Claude 3, Llama 3.1/3.2, Gemini 1.5, and GPT-4o families will automatically use a [repository map](/customize/context/codebase#repository-map) during codebase retrieval, which allows the model to understand the structure of your codebase and use it to answer questions. Currently, the repository map only contains the filepaths in the codebase.