units/en/unit2/continue-client.mdx
+13 −4 (13 additions & 4 deletions)
@@ -20,7 +20,7 @@ You can install Continue from the VS Code marketplace.
 
-With Continue configured, we'll move on to setting up Ollama to pull local models.
+With Continue configured, we'll move on to setting up Ollama to pull local models.
 
 ### Ollama local models
 
@@ -33,6 +33,14 @@ For example, you can download the [llama 3.1:8b](https://ollama.com/models/llama
 ```bash
 ollama pull llama3.1:8b
 ```
+<Tip>
+It is possible
+to use other local model providers, like [Llama.cpp](https://docs.continue.dev/customize/model-providers/more/llamacpp) and [LM Studio](https://docs.continue.dev/customize/model-providers/more/lmstudio), by updating the
+model provider in the configuration files below. However, Continue has been
+tested with Ollama, and it is recommended for the best experience.
+
+Details on all available model providers can be found in the [Continue documentation](https://docs.continue.dev/customize/model-providers).
+</Tip>
 
 It is important that we use models that have tool calling as a built-in feature, e.g. Codestral, Qwen, and Llama 3.1.
 
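For readers who do want a different provider, here is a minimal sketch of what that swap might look like in Continue's `config.yaml`. The `lmstudio` provider identifier and the surrounding field layout are assumptions based on the linked Continue docs, not part of this PR:

```yaml
# Hypothetical sketch: pointing Continue at LM Studio instead of Ollama.
# The provider identifier and fields are assumptions; verify them against
# https://docs.continue.dev/customize/model-providers/more/lmstudio
models:
  - name: Llama 3.1 8B (LM Studio)
    provider: lmstudio
    model: llama3.1:8b
    roles:
      - chat
      - edit
```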
@@ -55,9 +63,8 @@ models:
   - edit
 ```
 
-By default, the max context length is `8192` tokens. This setup includes a larger use of
-that context window to perform multiple MCP requests and also allotment for more
-tokens will be necessary.
+By default, each model has a max context length; in this case it is `128000` tokens. This setup makes heavier use of
+that context window to perform multiple MCP requests and needs to be able to handle more tokens.
 
 ## How it works
 
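To show where that `128000`-token window would be set, here is a hedged sketch of a `config.yaml` model entry. The `defaultCompletionOptions.contextLength` field follows Continue's documented config schema, but the exact entry in this PR's file may differ:

```yaml
# Illustrative model entry raising the context window to 128000 tokens.
# Field names follow Continue's config.yaml schema as documented; the
# actual entry in the tutorial's config may differ.
models:
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    defaultCompletionOptions:
      contextLength: 128000
    roles:
      - chat
      - edit
```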
@@ -68,6 +75,8 @@ They are provided to the model as a JSON object with a name and an arguments
 schema. For example, a `read_file` tool with a `filepath` argument will give the
 model the ability to request the contents of a specific file.
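To make that concrete, here is a minimal sketch of the kind of JSON object a `read_file` tool definition could be. The exact field names vary by model provider and are illustrative here, not taken from Continue's source:

```json
{
  "name": "read_file",
  "description": "Return the contents of a file in the workspace",
  "parameters": {
    "type": "object",
    "properties": {
      "filepath": {
        "type": "string",
        "description": "Path to the file, relative to the workspace root"
      }
    },
    "required": ["filepath"]
  }
}
```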