1. **Download and Install Ollama:** Download the Ollama installer for your operating system from the [Ollama website](https://ollama.com/). Follow the installation instructions, then make sure Ollama is running:

```bash
ollama serve
```
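Before moving on, it can help to confirm the server is actually up. A minimal check, assuming `curl` is available and Ollama is on its default port (11434):

```bash
# Quick liveness check for the local Ollama server (default port 11434).
if curl -fsS http://localhost:11434/api/tags > /dev/null 2>&1; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable on http://localhost:11434"
fi
```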
2. **Download a Model:** Ollama supports many different models. You can find a list of available models on the [Ollama website](https://ollama.com/library). Some recommended models for coding tasks include:
* `codellama:7b-code` (good starting point, smaller)
* `codellama:13b-code` (better quality, larger)
* `codellama:34b-code` (even better quality, very large)
* `deepseek-coder:6.7b-base` (good for coding tasks)
* `llama3:8b-instruct-q5_1` (good for general tasks)
To download a model, open your terminal and run:
```bash
ollama pull <model_name>
```
For example:
```bash
ollama pull qwen2.5-coder:32b
```
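Once the pull finishes, `ollama list` shows the models available locally. A defensive sketch that also copes with Ollama not being installed or running:

```bash
# List locally available models; print a hint if Ollama is unavailable.
ollama list 2> /dev/null || echo "could not reach Ollama - is it installed and running?"
```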
3. **Configure the Model:** By default, Ollama uses a context window of 2048 tokens, which is too small for Roo Code requests. You need at least 12k tokens to get decent results, ideally 32k. To configure a model, you set its parameters and save a copy of it under a new name.
Load the model (we will use `qwen2.5-coder:32b` as an example):
```bash
ollama run qwen2.5-coder:32b
```
**Note:** The first time you run a model it may take a while to download, depending on the model size and your internet connection.
Change the context size parameter:
```bash
/set parameter num_ctx 32768
```
Save the model with a new name:
```bash
/save your_model_name
```
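The same configuration can also be done non-interactively with a Modelfile, which is easier to script and keep in version control. A sketch using the example model and name from this guide (`qwen2.5-coder:32b`, `your_model_name`):

```bash
# Build a tagged copy of the model with a larger context window,
# without entering the interactive REPL.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
EOF

# Create the derived model (skipped gracefully if ollama is unavailable).
if command -v ollama > /dev/null 2>&1; then
  ollama create your_model_name -f Modelfile || echo "ollama create failed - is the server running?"
else
  echo "ollama not found - Modelfile written but model not created"
fi
```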
4. **Configure Roo Code:**
* Open the Roo Code sidebar (<Codicon name="rocket" /> icon).
* Click the settings gear icon (<Codicon name="gear" />).
* Select "ollama" as the API Provider.
* Enter the model name from the previous step (e.g., `your_model_name`).
* (Optional) Set the base URL if you're running Ollama on a different machine. The default is `http://localhost:11434`.
* (Optional) Set the model context size in Advanced settings so Roo Code knows how to manage its sliding window.
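To double-check that the endpoint Roo Code points at actually serves your saved model, you can list the server's models over HTTP. This assumes `curl` and the default base URL; `your_model_name` is the example name saved earlier:

```bash
# Ask the Ollama server which models it serves; the name saved in step 3
# should appear in the JSON output.
curl -s http://localhost:11434/api/tags || echo "Ollama is not reachable at http://localhost:11434"
```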
## Tips and Notes
* **Resource Requirements:** Running large language models locally can be resource-intensive. Make sure your computer meets the minimum requirements for the model you choose.
* **Model Selection:** Experiment with different models to find the one that best suits your needs.
* **Offline Use:** Once you've downloaded a model, you can use Roo Code offline with that model.
* **Ollama Documentation:** Refer to the [Ollama documentation](https://ollama.com/docs) for more information on installing, configuring, and using Ollama.