Commit 5642abd: Update ollama.md (1 parent: e902130)

1 file changed: docs/providers/ollama.md (+36 additions, −12 deletions)
Roo Code supports running models locally using [Ollama](https://ollama.com/).

## Setting up Ollama

1. **Download and Install Ollama:** Download the Ollama installer for your operating system from the [Ollama website](https://ollama.com/). Follow the installation instructions, then make sure Ollama is running:

    ```bash
    ollama serve
    ```
2. **Download a Model:** Ollama supports many different models. You can find a list of available models on the [Ollama website](https://ollama.com/library). Some recommended models for coding tasks include:

    * `codellama:7b-code` (good starting point, smaller)
    * `codellama:13b-code` (better quality, larger)
    * `codellama:34b-code` (even better quality, very large)
    * `qwen2.5-coder:32b`
    * `mistralai/Mistral-7B-Instruct-v0.1` (good general-purpose model)
    * `deepseek-coder:6.7b-base` (good for coding tasks)
    * `llama3:8b-instruct-q5_1` (good for general tasks)

    To download a model, open your terminal and run:

    ```bash
    ollama pull <model_name>
    ```

    For example:

    ```bash
    ollama pull qwen2.5-coder:32b
    ```
3. **Configure the Model:** By default, Ollama uses a context window of 2048 tokens, which is too small for Roo Code requests. You need at least 12k tokens to get decent results, and ideally 32k. To configure a model, you set its parameters and save a copy of it under a new name.

    Load the model (we will use `qwen2.5-coder:32b` as an example):

    ```bash
    ollama run qwen2.5-coder:32b
    ```

    Change the context size parameter:

    ```bash
    /set parameter num_ctx 32768
    ```

    Save the model with a new name:

    ```bash
    /save your_model_name
    ```
4. **Configure Roo Code:**

    * Open the Roo Code sidebar (<Codicon name="rocket" /> icon).
    * Click the settings gear icon (<Codicon name="gear" />).
    * Select "ollama" as the API Provider.
    * Enter the model name from the previous step (e.g., `your_model_name`).
    * (Optional) You can configure the base URL if you're running Ollama on a different machine. The default is `http://localhost:11434`.
    * (Optional) Configure the model context size in Advanced settings, so Roo Code knows how to manage its sliding window.
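As an alternative to the interactive `/set` and `/save` commands in step 3, Ollama can bake the context size into a saved model via a Modelfile (a sketch using the same example model; `your_model_name` is just the placeholder name from above):

```
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
```

Save this as a file named `Modelfile`, then build the model non-interactively with `ollama create your_model_name -f Modelfile`.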
## Tips and Notes

* **Resource Requirements:** Running large language models locally can be resource-intensive. Make sure your computer meets the minimum requirements for the model you choose.
* **Model Selection:** Experiment with different models to find the one that best suits your needs.
* **Offline Use:** Once you've downloaded a model, you can use Roo Code offline with that model.
* **Ollama Documentation:** Refer to the [Ollama documentation](https://ollama.com/docs) for more information on installing, configuring, and using Ollama.
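Before pointing Roo Code at the server, you can confirm that Ollama is reachable and see which models are installed by querying its HTTP API. A minimal sketch, assuming the default address `http://localhost:11434` and the `/api/tags` endpoint that lists local models:

```python
import json
from urllib import request


def list_ollama_models(base_url="http://localhost:11434"):
    """Return the names of locally installed models, or None if the server is unreachable."""
    try:
        # /api/tags returns JSON like {"models": [{"name": "qwen2.5-coder:32b", ...}, ...]}
        with request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
    except OSError:
        # Covers connection refused, DNS failures, and timeouts (URLError subclasses OSError)
        return None
    return [m["name"] for m in data.get("models", [])]


print(list_ollama_models())
```

If this prints `None`, start the server with `ollama serve` before configuring Roo Code; if it prints an empty list, pull a model first.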
