Commit 4ab0dfa: clean up examples
1 parent 3a8215a

units/en/unit2/continue-client.mdx

Lines changed: 10 additions & 10 deletions
@@ -8,15 +8,17 @@ like Ollama.
 
 You can install Continue from the VS Code marketplace.
 
-*Note: Continue also has an extension for JetBrains.*
+<Tip>
+*Continue also has an extension for JetBrains.*
+</Tip>
 
 ### VS Code extension
 
 1. Click `Install` on the [Continue extension page in the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue)
 2. This will open the Continue extension page in VS Code, where you will need to click `Install` again
 3. The Continue logo will appear on the left sidebar. For a better experience, move Continue to the right sidebar
 
-![](https://images.type.ai/img_9FWEqi6ArNvSvOLbCB.gif)
+![sidebar vs code demo](https://docs.continue.dev/assets/images/move-to-right-sidebar-b2d315296198e41046fc174d8178f30a.gif)
 
 With Continue configured, we'll move on to setting up Ollama to pull local models.
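Note on this hunk: the Marketplace install in step 1 can also be done from a terminal, assuming the VS Code `code` CLI is on your PATH; the extension ID `Continue.continue` is taken from the Marketplace URL above.

```bash
# Install the Continue extension directly, skipping the Marketplace UI.
code --install-extension Continue.continue
```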

@@ -32,7 +34,7 @@ For example, you can download the [llama 3.1:8b](https://ollama.com/models/llama
 ollama pull llama3.1:8b
 ```
 
-It is important that we use models that have tool calling as a built-in feature, such as Codestral and Llama 3.1x.
+It is important that we use models that have tool calling as a built-in feature, e.g. Codestral, Qwen, and Llama 3.1x.
 
 1. Create a folder called `.continue/models` at the top level of your workspace
 2. Add a file called `llama-max.yaml` to this folder
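Note on this hunk: a quick way to sanity-check the model pull described above, assuming a local Ollama install; the smoke-test prompt is illustrative only.

```bash
# Pull the tool-calling model referenced in the guide.
ollama pull llama3.1:8b

# Confirm it appears in the local model list.
ollama list

# One-off smoke test against the model.
ollama run llama3.1:8b "Reply with the word ok."
```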
@@ -53,9 +55,9 @@ models:
   - edit
 ```
 
-By default, the max context length is `8192`. This setup includes a larger use of
-that context window to perform multiple MCP requests and also allows for more
-tokens to be used.
+By default, the max context length is `8192` tokens. This setup makes greater
+use of that context window to perform multiple MCP requests, so a larger token
+allotment is necessary.
 
 ## How it works
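Note on this hunk: the diff shows only the tail of `llama-max.yaml` (`models:`, `- edit`, and the closing fence). A sketch of what the full file might look like in Continue's YAML config format; the `name`, `version`, and `contextLength` values here are assumptions, not taken from this commit.

```yaml
# .continue/models/llama-max.yaml (hypothetical reconstruction)
name: Llama max
version: 0.0.1
schema: v1
models:
  - name: Ollama Llama model
    provider: ollama          # serve through a local Ollama instance
    model: llama3.1:8b        # the model pulled earlier in the guide
    defaultCompletionOptions:
      contextLength: 128000   # assumed value; raises the 8192-token default
    roles:
      - chat
      - edit                  # matches the context line in this hunk
```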

@@ -107,7 +109,7 @@ Now test your MCP server by prompting the following command:
 
 2. Extract the titles and URLs of the top 4 posts on the homepage.
 
-3. Create a file named hn.txt in the root directory of the project.&nbsp;
+3. Create a file named hn.txt in the root directory of the project.
 
 4. Save this list as plain text in the hn.txt file, with each line containing the title and URL separated by a hyphen.
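Note on this hunk: step 4 specifies one "title - URL" pair per line, so a generated `hn.txt` would look something like the following; the titles and URLs are placeholders, not real output.

```
Example Post Title One - https://example.com/post-one
Example Post Title Two - https://example.com/post-two
Example Post Title Three - https://example.com/post-three
Example Post Title Four - https://example.com/post-four
```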
@@ -116,12 +118,10 @@ Do not output code or instructions—just complete the task and confirm when it
 
 The result will be a generated file called `hn.txt` in the current working directory.
 
-![](https://images.type.ai/img_o3z9i3HnFJNfCm9bgu.png)
+![mcp output example](https://deploy-preview-6060--continuedev.netlify.app/assets/images/mcp-playwright-50b192a2ff395f7a6cc11618c5e2d5b1.png)
 
 ## Conclusion
 
-##
-
 By combining Continue with local models like Llama 3.1 and MCP servers, you've
 unlocked a powerful development workflow that keeps your code and data private
 while leveraging cutting-edge AI capabilities.
