Commit 3a8215a

new mcp unit
1 parent 1bc78a7 commit 3a8215a

File tree

1 file changed: +134 -0 lines changed


units/en/unit2/continue-client.mdx

# Using MCP with Local and Open Source Models

In this section, we'll connect MCP with local and open-source models using
Continue, a tool for building AI coding assistants that works with local tools
like Ollama.

## Set up Continue

You can install Continue from the VS Code marketplace.

*Note: Continue also has an extension for JetBrains.*

### VS Code extension

1. Click `Install` on the [Continue extension page in the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue)
2. This will open the Continue extension page in VS Code, where you will need to click `Install` again
3. The Continue logo will appear on the left sidebar. For a better experience, move Continue to the right sidebar

![](https://images.type.ai/img_9FWEqi6ArNvSvOLbCB.gif)

With Continue configured, we'll move on to setting up Ollama to pull local models.

### Ollama local models

Ollama is an open-source tool that allows users to run large language models (LLMs)
locally on their own computers. To use Ollama, [install](https://ollama.com/download) it and
download the model you want to run with the `ollama pull` command.

For example, you can download the [llama3.1:8b](https://ollama.com/library/llama3.1:8b) model with:

```bash
ollama pull llama3.1:8b
```

It is important that we use models that have tool calling as a built-in feature, such as Codestral and Llama 3.1.
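
If you want to double-check that the model you pulled supports tools, recent Ollama releases report a model's capabilities in `ollama show`. Treat this as an optional sanity check, since the exact output varies between Ollama versions:

```bash
# Inspect the local model; in recent Ollama versions the output
# includes a Capabilities section that should list "tools"
ollama show llama3.1:8b
```

Next, register the model with Continue: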

1. Create a folder called `.continue/models` at the top level of your workspace
2. Add a file called `llama-max.yaml` to this folder
3. Write the following contents to `llama-max.yaml` and save

```yaml
name: Ollama Llama model
version: 0.0.1
schema: v1
models:
  - provider: ollama
    model: llama3.1:8b
    defaultCompletionOptions:
      contextLength: 50000
    name: llama3.1:8b max
    roles:
      - chat
      - edit
```

By default, the maximum context length is `8192` tokens. This configuration uses a
larger portion of the model's context window so the assistant can make multiple MCP
requests in a single conversation and work with more tokens overall.
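
Before wiring up MCP, it can be worth confirming that Ollama is running and that the model from the config above is actually available locally. A minimal check, using standard Ollama commands:

```bash
# List all models that have been pulled to this machine
ollama list

# Show which models are currently loaded in memory
# (empty output is fine if nothing has been run yet)
ollama ps
```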

## How it works

### The tool handshake

Tools provide a powerful way for models to interface with the external world.
They are provided to the model as a JSON object with a name and an arguments
schema. For example, a `read_file` tool with a `filepath` argument will give the
model the ability to request the contents of a specific file.
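
As an illustration, a tool definition of this kind can be sketched as follows. This is a generic JSON-Schema-style shape, not Continue's exact internal format, and the description strings are only examples:

```json
{
  "name": "read_file",
  "description": "Read the contents of a file in the workspace",
  "parameters": {
    "type": "object",
    "properties": {
      "filepath": {
        "type": "string",
        "description": "Path to the file, relative to the workspace root"
      }
    },
    "required": ["filepath"]
  }
}
```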

The following handshake describes how the Agent uses tools:

1. In Agent mode, available tools are sent along with `user` chat requests
2. The model can choose to include a tool call in its response (see the sketch after this list)
3. The user gives permission. This step is skipped if the policy for that tool is set to `Automatic`
4. Continue calls the tool using built-in functionality or the MCP server that offers that particular tool
5. Continue sends the result back to the model
6. The model responds, potentially with another tool call, and the cycle repeats from step 2
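
To make steps 2 and 5 concrete, here is a simplified sketch of the two messages involved: the model's tool call and the tool result that Continue sends back. It follows a generic OpenAI-style chat shape rather than Continue's exact wire format, and the file path is just an example:

```json
[
  {
    "role": "assistant",
    "tool_calls": [
      { "id": "call_1", "name": "read_file", "arguments": { "filepath": "src/index.ts" } }
    ]
  },
  {
    "role": "tool",
    "tool_call_id": "call_1",
    "content": "...contents of src/index.ts returned by Continue..."
  }
]
```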

Continue supports multiple local model providers. You can use different models
for different tasks or switch models as needed. This section focuses on
local-first solutions, but Continue does work with popular providers
like OpenAI, Anthropic, Microsoft/Azure, Mistral, and more. You can also run
your own model provider.
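
For example, the `models` list in the config above can hold several entries with different roles. The snippet below is a hypothetical sketch that keeps `llama3.1:8b` for chat and edits and adds a smaller local model just for autocomplete; the second model name is only an example, so substitute any model you have pulled with Ollama:

```yaml
models:
  - provider: ollama
    model: llama3.1:8b
    name: llama3.1:8b max
    roles:
      - chat
      - edit
  - provider: ollama
    model: qwen2.5-coder:1.5b   # example only; any pulled model works here
    name: local autocomplete model
    roles:
      - autocomplete
```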

### Local Model Integration with MCP

Now that we have everything set up, let's add an existing MCP server. Below is a quick example of setting up a new MCP server for use in your assistant:

1. Create a folder called `.continue/mcpServers` at the top level of your workspace
2. Add a file called `playwright-mcp.yaml` to this folder
3. Write the following contents to `playwright-mcp.yaml` and save

```yaml
name: Playwright mcpServer
version: 0.0.1
schema: v1
mcpServers:
  - name: Browser search
    command: npx
    args:
      - "@playwright/mcp@latest"
```
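
Note that this server is launched through `npx`, so it assumes a working Node.js installation on your machine. A quick optional check:

```bash
# The Playwright MCP server is started via npx, which ships with Node.js
node --version
npx --version
```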

Now test your MCP server with the following prompt in Agent mode:

```
1. Using playwright, navigate to https://news.ycombinator.com.

2. Extract the titles and URLs of the top 4 posts on the homepage.

3. Create a file named hn.txt in the root directory of the project.

4. Save this list as plain text in the hn.txt file, with each line containing the title and URL separated by a hyphen.

Do not output code or instructions; just complete the task and confirm when it is done.
```

The result will be a generated file called `hn.txt` at the root of your project.
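
The actual titles and URLs will depend on what is on the Hacker News homepage when you run the prompt, but the file should follow the "title - URL" layout requested above, roughly like this made-up example:

```
Example post title one - https://example.com/first-link
Example post title two - https://example.com/second-link
```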

![](https://images.type.ai/img_o3z9i3HnFJNfCm9bgu.png)

## Conclusion

By combining Continue with local models like Llama 3.1 and MCP servers, you've
unlocked a powerful development workflow that keeps your code and data private
while leveraging cutting-edge AI capabilities.

This setup gives you the flexibility to customize your AI assistant with
specialized tools, from web automation to file management, all running entirely
on your local machine. Ready to take your development workflow to the next
level? Start by experimenting with different MCP servers from the [Continue Hub
MCP explore page](https://hub.continue.dev/explore/mcp) and discover how
local AI can transform your coding experience.
