Commit 1704df4

feat: GEO updates for docs (#7083)
1 parent 355b230 commit 1704df4

132 files changed (+960, -823 lines)

Lines changed: 8 additions & 0 deletions

@@ -0,0 +1,8 @@
+---
+globs: docs/**/*.{md,mdx}
+description: This rule applies to all documentation files to ensure consistent
+  SEO optimization and improve discoverability. It helps users and search
+  engines understand the content of each page before reading it.
+---
+
+Every file in the docs folder must include a 'description' field in its frontmatter that accurately summarizes the content of the page in 100-160 characters. The description should be concise, keyword-rich, and explain what users will learn or accomplish from the page.
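Frontmatter satisfying the rule above might look like the following. This is a hypothetical example page, not part of this commit; the title and wording are illustrative only:

```yaml
---
title: "Edit"
# Description is ~139 characters: inside the rule's 100-160 character window
description: "Learn how to use Continue's Edit mode to modify highlighted code in place, review inline diffs, and accept or reject AI-suggested changes."
---
```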

docs/README.md

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-# Mintlify Starter Kit
+## Mintlify Starter Kit
 
 Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including
 

docs/autocomplete/how-to-use-it.mdx

Lines changed: 3 additions & 2 deletions

@@ -1,14 +1,15 @@
 ---
 title: "Autocomplete"
-sidebarTitle: "How To Use It"
+sidebarTitle: "How To Use AI Autocomplete"
 icon: "circle-question"
+description: "Learn how to use Continue's AI-powered code autocomplete feature with keyboard shortcuts for accepting, rejecting, or partially accepting inline suggestions as you type"
 ---
 
 <Frame>
   <img src="/images/autocomplete-9d4e3f7658d3e65b8e8b20f2de939675.gif" />
 </Frame>
 
-## How to use it
+## How to Use AI Code Autocomplete in Continue
 
 Autocomplete provides inline code suggestions as you type. To enable it, simply click the "Continue" button in the status bar at the bottom right of your IDE or ensure the "Enable Tab Autocomplete" option is checked in your IDE settings.
 
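Besides the status-bar toggle described in this page, autocomplete is typically driven by a model assigned the `autocomplete` role in Continue's `config.yaml`. The following is a hedged sketch under the assumption that a Mistral Codestral model is used; the model name and key placeholder are illustrative, not from this commit:

```yaml
models:
  - name: Codestral
    provider: mistral
    model: codestral-latest
    apiKey: <YOUR_MISTRAL_API_KEY>
    roles:
      - autocomplete # serves inline suggestions only, not chat
```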

docs/chat/how-to-use-it.mdx

Lines changed: 11 additions & 10 deletions

@@ -1,39 +1,40 @@
 ---
 title: "Chat"
-sidebarTitle: "How To Use It"
+sidebarTitle: "How To Use AI Chat"
 icon: "circle-question"
+description: "Learn how to use Continue's AI chat assistant to solve coding problems without leaving your IDE, including code context sharing, applying generated solutions, and switching between models"
 ---
 
 <Frame>
   <img src="/images/chat-489b68d156be2aafe09ee7cedf233fba.gif" />
 </Frame>
 
-## How to use it
+## How to Use AI Chat in Continue for Coding Help
 
 Chat makes it easy to ask for help from an LLM without needing to leave the IDE. You send it a task, including any relevant information, and it replies with the text / code most likely to complete the task. If it does not give you what you want, then you can send follow up messages to clarify and adjust its approach until the task is completed.
 
 Chat is best used to understand and iterate on code or as a replacement for search engine queries.
 
-## Type a request and press enter
+## Send a Coding Question or Task to AI Chat
 
-You send it a question, and it replies with an answer. You tell it to solve a problem, and it provides you a solution. You ask for some code, and it generates it.
+To send a question, add it to the input box in the extension and press enter. You send it a question, and it replies with an answer. You tell it to solve a problem, and it provides you a solution. You ask for some code, and it generates it.
 
-## Highlight a code section to include as context
+## Add Code Context to AI Chat by Highlighting Code
 
 You select a code section with your mouse, press `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) to send it to the LLM, and then ask for it to be explained to you or request it to be refactored in some way.
 
-## Reference context with the @ symbol
+## Use @ to Include Project Context in AI Chat Responses
 
-If there is information from the codebase, documentation, IDE, or other tools that you want to include as context, you can type @ to select and include it as context. You can learn more about how to use this in [Chat context selection](/features/chat/quick-start#3-use--for-additional-context).
+If there is information from the codebase, documentation, IDE, or other tools that you want to include as context, you can type @ to select and include it as context. You can learn more about how to use this in [Chat context selection](/features/chat/quick-start#how-to-use-%40-for-additional-context).
 
-## Apply generated code to your file
+## Insert AI-Generated Code Changes Directly into Your File
 
 When the LLM replies with edits to a file, you can click the “Apply” button. This will update the existing code in the editor to reflect the suggested changes.
 
-## Start a fresh session for a new task
+## How to Begin a New AI Chat Session for a Different Coding Task
 
 Once you complete a task and want to start a new one, press `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) to begin a new session, ensuring only relevant context for the next task is provided to the LLM.
 
-## Switch between different models
+## Change AI Models in Continue Chat for Different Coding Needs
 
 If you have configured multiple models, you can switch between models using the dropdown or by pressing `cmd/ctrl` + ``

docs/customization/mcp-tools.mdx

Lines changed: 6 additions & 2 deletions

@@ -1,8 +1,12 @@
 ---
 title: "MCP Blocks"
-description: "Model Context Protocol servers provide specialized functionality:"
+description: "Learn how to use Model Context Protocol (MCP) blocks in Continue to integrate external tools, connect databases, and extend your development environment."
 ---
 
+Model Context Protocol (MCP) blocks let Continue connect to external tools, systems, and databases by running MCP servers.
+
+These blocks make it possible to:
+
 - **Enable integration** with external tools and systems
 - **Create extensible interfaces** for custom capabilities
 - **Support complex interactions** with your development environment
@@ -11,6 +15,6 @@ description: "Model Context Protocol servers provide specialized functionality:"
 
 ![MCP Blocks Overview](/images/customization/images/mcp-blocks-overview-c9a104f9b586779c156f9cf34da197c2.png)
 
-## Learn More
+## Learn More About MCP Blocks
 
 Learn more in the [MCP deep dive](/customize/deep-dives/mcp), and view [`mcpServers`](/reference#mcpservers) in the YAML Reference for more details.
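For orientation, a minimal sketch of what an `mcpServers` block can look like in `config.yaml`, assuming the reference MCP filesystem server published under `@modelcontextprotocol`; the server choice and path are illustrative, and the exact schema is documented in the YAML Reference:

```yaml
mcpServers:
  - name: Filesystem tools
    command: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-filesystem"
      - "/path/to/project" # directory the MCP server is allowed to access
```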

docs/customization/models.mdx

Lines changed: 2 additions & 2 deletions

@@ -12,8 +12,8 @@ description: "These blocks form the foundation of the entire assistant experienc
 
 ![Model Blocks Overview](/images/customization/images/model-blocks-overview-36c30e7e01928d7a9b5b26ff1639c34b.png)
 
-## Learn More
+## Learn More About Model Blocks
 
-Continue supports [many model providers](/customization/models#openai), including Anthropic, OpenAI, Gemini, Ollama, Amazon Bedrock, Azure, xAI, DeepSeek, and more. Models can have various roles like `chat`, `edit`, `apply`, `autocomplete`, `embed`, and `rerank`.
+Continue supports [many model providers](/customize/model-providers/top-level/openai), including Anthropic, OpenAI, Gemini, Ollama, Amazon Bedrock, Azure, xAI, DeepSeek, and more. Models can have various roles like `chat`, `edit`, `apply`, `autocomplete`, `embed`, and `rerank`.
 
 Read more about roles [here](/customize/model-roles) and view [`models`](/reference#models) in the YAML Reference.
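The roles listed above can be sketched in a `models` block like the following. The specific model names and key placeholder are illustrative assumptions, not part of this commit; consult the `models` YAML Reference for the authoritative schema:

```yaml
models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-3-7-sonnet-latest
    apiKey: <YOUR_ANTHROPIC_API_KEY>
    roles:
      - chat
      - edit
      - apply
  - name: Nomic Embed
    provider: ollama
    model: nomic-embed-text
    roles:
      - embed # used for codebase indexing rather than conversation
```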

docs/customization/overview.mdx

Lines changed: 26 additions & 8 deletions

@@ -1,21 +1,39 @@
 ---
-title: "Overview"
-description: "Continue can be deeply customized. For example you might:"
+title: "Customization Overview"
+description: "Learn how to customize Continue with model providers, context providers, slash commands, and tools to create your perfect AI coding assistant"
 ---
 
-- **Change your Model Provider**. Continue allows you to choose your favorite or even add multiple model providers. This allows you to use different models for different tasks, or to try another model if you're not happy with the results from your current model. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can even self host your own model provider if you'd like. Learn more about [model providers](/customize/model-providers/top-level/openai).
-- **Select different models for specific tasks**. Different Continue features can use different models. We call these _model roles_. For example, you can use a different model for chat than you do for autocomplete. Learn more about [model roles](/customize/model-roles).
-- **Add a Context Provider**. Context providers allow you to add information to your prompts, giving your LLM additional context to work with. Context providers allow you to reference snippets from your codebase, or lookup relevant documentation, or use a search engine to find information and much more. Learn more about [context providers](/customize/custom-providers).
-- **Create a Slash Command**. Slash commands allow you to easily add custom functionality to Continue. You can use a slash command that allows you to generate a shell command from natural language, or perhaps generate a commit message, or create your own custom command to do whatever you want. Learn more about [slash commands](/customize/deep-dives/slash-commands).
-- **Call external tools and functions**. Unchain your LLM with the power of tools using [Agent](/features/agent/quick-start). Add custom tools using [MCP Servers](/customization/mcp-tools)
+Continue can be deeply customized to fit your specific development workflow and preferences. This guide covers the main ways you can customize Continue to enhance your coding experience.
+
+## Change Your Model Provider
+
+Continue allows you to choose your favorite or even add multiple model providers. This allows you to use different models for different tasks, or to try another model if you're not happy with the results from your current model. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can even self host your own model provider if you'd like. Learn more about [model providers](/customize/model-providers/top-level/openai).
+
+## Select Different Models for Specific Tasks
+
+Different Continue features can use different models. We call these _model roles_. For example, you can use a different model for chat than you do for autocomplete. Learn more about [model roles](/customize/model-roles).
+
+## Add a Context Provider
+
+Context providers allow you to add information to your prompts, giving your LLM additional context to work with. Context providers allow you to reference snippets from your codebase, or lookup relevant documentation, or use a search engine to find information and much more. Learn more about [context providers](/customize/custom-providers).
+
+## Create a Slash Command
+
+Slash commands allow you to easily add custom functionality to Continue. You can use a slash command that allows you to generate a shell command from natural language, or perhaps generate a commit message, or create your own custom command to do whatever you want. Learn more about [slash commands](/customize/deep-dives/slash-commands).
+
+## Call External Tools and Functions
+
+Unchain your LLM with the power of tools using [Agent](/features/agent/quick-start). Add custom tools using [MCP Servers](/customization/mcp-tools).
 
 Whatever you choose, you'll probably start by editing your Assistant.
 
-## Editing your assistant
+## Edit Your Assistant
 
 You can easily access your assistant configuration from the Continue Chat sidebar. Open the sidebar by pressing `cmd/ctrl` + `L` (VS Code) or `cmd/ctrl` + `J` (JetBrains) and click the Assistant selector above the main chat input. Then, you can hover over an assistant and click the `new window` (hub assistants) or `gear` (local assistants) icon.
 
 ![configure an assistant](/images/customization/images/configure-continue-a5c8c79f3304c08353f3fc727aa5da7e.png)
 
+## Manage Your Assistant
+
 - See [Editing Hub Assistants](/hub/assistants/edit-an-assistant) for more details on managing your hub assistant
 - See the [Config Deep Dive](/reference) for more details on configuring local assistants.

docs/customization/rules.mdx

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ Think of these as the guardrails for your AI coding assistants:
 
 By implementing rules, you transform the AI from a generic coding assistant into a knowledgeable team member that understands your project's unique requirements and constraints.
 
-### How Rules Work
+## How Rules Work
 
 Your assistant detects rule blocks and applies the specified rules while in [Agent](/features/agent/quick-start), [Chat](/features/chat/quick-start), and [Edit](/features/edit/quick-start) modes.
 
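Rule blocks of the kind described here can be declared in `config.yaml`. A minimal sketch, assuming rules may be given as plain strings; the rule text is invented for illustration:

```yaml
rules:
  # Each entry is guidance the assistant applies in Agent, Chat, and Edit modes
  - Always use TypeScript with strict mode enabled; never emit plain JavaScript.
  - Prefer functional React components and hooks over class components.
```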

docs/customize/context/codebase.mdx

Lines changed: 19 additions & 12 deletions

@@ -1,43 +1,50 @@
 ---
-title: "@Codebase"
-description: Talk to your codebase
+title: "How to Set Up @Codebase Context Provider in Continue"
+description: Retrieve relevant context and answers from your codebase using embeddings and keyword search.
 keywords: [talk, embeddings, reranker, codebase, experimental]
+sidebarTitle: "@Codebase"
 ---
 
 Continue indexes your codebase so that it can later automatically pull in the most relevant context from throughout your workspace. This is done via a combination of embeddings-based retrieval and keyword search. By default, all embeddings are calculated locally using `transformers.js` and stored locally in `~/.continue/index`.
 
 <Info>
   **Note:** `transformers.js` cannot be used in JetBrains IDEs. However, you can
   select a different embeddings model from [the list
-  here](../model-roles/embeddings.mdx).
+  here](../model-roles/embeddings).
 </Info>
 
+## How to Use @Codebase and @Folder Context Providers
+
 Currently, the codebase retrieval feature is available as the "codebase" and "folder" context providers. You can use them by typing `@Codebase` or `@Folder` in the input box, and then asking a question. The contents of the input box will be compared with the embeddings from the rest of the codebase (or folder) to determine relevant files.
 
+### When @Codebase Context Provider Is Useful
+
 Here are some common use cases where it can be useful:
 
-- Asking high-level questions about your codebase
+- **Asking high-level questions about your codebase**
   - "How do I add a new endpoint to the server?"
   - "Do we use VS Code's CodeLens feature anywhere?"
   - "Is there any code written already to convert HTML to markdown?"
-- Generate code using existing samples as reference
+- **Generate code using existing samples as reference**
   - "Generate a new React component with a date picker, using the same patterns as existing components"
   - "Write a draft of a CLI application for this project using Python's argparse"
   - "Implement the `foo` method in the `bar` class, following the patterns seen in other subclasses of `baz`."
-- Use `@Folder` to ask questions about a specific folder, increasing the likelihood of relevant results
+- **Use `@Folder` to ask questions about a specific folder, increasing the likelihood of relevant results**
   - "What is the main purpose of this folder?"
   - "How do we use VS Code's CodeLens API?"
   - Or any of the above examples, but with `@Folder` instead of `@Codebase`
 
+### When @Codebase Context Provider Is Not Useful
+
 Here are use cases where it is not useful:
 
-- When you need the LLM to see _literally every_ file in your codebase
+- **When you need the LLM to see _literally every_ file in your codebase**
  - "Find everywhere where the `foo` function is called"
   - "Review our codebase and find any spelling mistakes"
-- Refactoring
+- **Refactoring tasks**
   - "Add a new parameter to the `bar` function and update usages"
 
-## Configuration
+## How to Configure @Codebase Context Provider Settings
 
 There are a few options that let you configure the behavior of the `@codebase` context provider, which are the same for the `@folder` context provider:
 

@@ -82,7 +89,7 @@ Final number of results to use after re-ranking (default: 5)
 
 Whether to use re-ranking, which will allow initial selection of `nRetrieve` results, then will use an LLM to select the top `nFinal` results (default: true)
 
-## Ignore files during indexing
+## How to Ignore Files During Indexing
 
 Continue respects `.gitignore` files in order to determine which files should not be indexed. If you'd like to exclude additional files, you can add them to a `.continueignore` file, which follows the exact same rules as `.gitignore`.
 

@@ -92,6 +99,6 @@ If you want to see exactly what files Continue has indexed, the metadata is stor
 
 If you need to force a refresh of the index, reload the VS Code window with <kbd>cmd/ctrl</kbd> + <kbd>shift</kbd> + <kbd>p</kbd> + "Reload Window".
 
-## Repository map
+## How Repository Map Enhances Codebase Understanding
 
-Models in the Claude 3, Llama 3.1/3.2, Gemini 1.5, and GPT-4o families will automatically use a [repository map](../custom-providers#repository-map) during codebase retrieval, which allows the model to understand the structure of your codebase and use it to answer questions. Currently, the repository map only contains the filepaths in the codebase.
+Models in the Claude 3, Llama 3.1/3.2, Gemini 1.5, and GPT-4o families will automatically use a [repository map](/customize/context/codebase#repository-map) during codebase retrieval, which allows the model to understand the structure of your codebase and use it to answer questions. Currently, the repository map only contains the filepaths in the codebase.
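The `nRetrieve`, `nFinal`, and `useReranking` options mentioned in this file's configuration section can be sketched together as a `context` entry in `config.yaml`. The parameter names come from this page; the specific values shown are illustrative, not prescribed defaults:

```yaml
context:
  - provider: codebase
    params:
      nRetrieve: 25      # candidates fetched by embeddings/keyword search
      nFinal: 5          # results kept after re-ranking
      useReranking: true # let an LLM pick the top nFinal of the nRetrieve results
```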
