Commit fce58f6

Added alternative ollama command line create (#50)
Documented runtime vs. command-line options for creating the Ollama model, following the existing example. Signed-off-by: Rob Sherman <[email protected]>
1 parent 4d29565 commit fce58f6


docs/advanced-usage/local-models.md

Lines changed: 26 additions & 2 deletions
@@ -53,6 +53,7 @@ Roo Code currently supports two main local model providers:

3. **Configure the Model:** By default, Ollama uses a context window of 2048 tokens, which is too small for Roo Code requests. You need at least a 12k context to get decent results, ideally 32k. To configure a model, you need to set its parameters and save a copy of it.

##### Using the Ollama runtime

Load the model (we will use `qwen2.5-coder:32b` as an example):

```bash
@@ -70,13 +71,36 @@

```bash
/save your_model_name
```
##### Using the Ollama command line

Alternatively, you can write all of your settings into a text file and generate the model from the command line.

Create a text file with the model settings and save it (`~/qwen2.5-coder-32k.txt`). Here we've only used the `num_ctx` parameter, but you could include more parameters, one per line, using the `PARAMETER name value` syntax.
```text
FROM qwen2.5-coder:32b
# sets the context window size to 32768; this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 32768
```
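The same settings file can also be written straight from the shell; a minimal sketch, assuming a bash-compatible shell and the example path used above:

```shell
# Write the model settings file with a heredoc (same content as the block above).
cat > ~/qwen2.5-coder-32k.txt <<'EOF'
FROM qwen2.5-coder:32b
# sets the context window size to 32768 tokens
PARAMETER num_ctx 32768
EOF
```

The quoted `'EOF'` delimiter stops the shell from expanding anything inside the heredoc, so the file is written exactly as shown.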
Change directory to the `.ollama/models` directory. On most Macs, that's `~/.ollama/models` by default (`%HOMEPATH%\.ollama\models` on Windows).

```bash
cd ~/.ollama/models
```
Create your model from the settings text file you created. The syntax is `ollama create <new model name> -f <settings file>`:

```bash
ollama create qwen2.5-coder-32k -f ~/qwen2.5-coder-32k.txt
```
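To confirm the new model picked up the larger context window, you can inspect it with `ollama show`; a hedged sketch (the guard simply skips the check on machines where the `ollama` binary is not installed):

```shell
# Inspect the new model's parameters if ollama is available.
if command -v ollama >/dev/null 2>&1; then
  # The parameters section should list num_ctx 32768.
  ollama show qwen2.5-coder-32k
else
  echo "ollama not installed; skipping check"
fi
```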
4. **Configure Roo Code:**
   * Open the Roo Code sidebar (<Codicon name="rocket" /> icon).
   * Click the settings gear icon (<Codicon name="gear" />).
   * Select "ollama" as the API Provider.
   * Enter the model name from the previous step (e.g., `your_model_name`), or choose it from the radio-button list that should appear below `Model ID` if Ollama is currently running.
   * (Optional) You can configure the base URL if you're running Ollama on a different machine. The default is `http://localhost:11434`.
   * (Optional) Configure the model context size in Advanced settings, so Roo Code knows how to manage its sliding window.
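If no model list appears, the usual cause is that the Ollama server is not reachable at the base URL. A quick hedged check from the shell, using Ollama's `/api/tags` endpoint (which lists locally available models); the fallback message keeps the command from failing outright when the server is down:

```shell
# Query the Ollama server for its model list; hint at the fix if unreachable.
curl -sf http://localhost:11434/api/tags \
  || echo "Ollama not reachable on the default port; is 'ollama serve' running?"
```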
82106
