docs/providers/bedrock.md (2 additions, 2 deletions)

@@ -53,7 +53,7 @@ Roo Code supports all foundation models available through Amazon Bedrock.
 For the complete, up-to-date model list with IDs and capabilities, see [AWS Bedrock's supported models documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).

-**Important:** Use the *model ID* (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`) when configuring Roo Code, not the model name.
+**Important:** Use the *model ID* when configuring Roo Code, not the model name.

 ---

@@ -84,7 +84,7 @@ Roo Code supports using the reasoning budget (extended thinking) for Anthropic's
 To enable the reasoning budget:

-1. **Select a supported Claude model** (e.g., `anthropic.claude-opus-4.1-20250514-v1:0`, `anthropic.claude-3-sonnet-20240229-v1:0`).
+1. **Select a supported Claude model** that includes reasoning capabilities.
 2. **Enable Reasoning Mode** in the model settings.
 3. **Adjust the thinking budget** to control how much the model should "think".
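Mechanically, the thinking budget amounts to extra fields passed alongside the request. A minimal sketch of such a payload (the `thinking` field name and structure follow Anthropic's extended-thinking convention and are assumptions here, not Roo Code's actual wire format):

```python
# Hypothetical payload builder for a reasoning (extended thinking) request.
# Field names are illustrative; check the Bedrock/Anthropic docs for the
# exact schema of the model you selected.
def build_reasoning_request(prompt: str, budget_tokens: int) -> dict:
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": budget_tokens + 1024,  # response budget must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
    }

request = build_reasoning_request("Summarize this repository", 2048)
```

Raising `budget_tokens` corresponds to step 3 above: the model is allowed to "think" longer before answering.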
docs/providers/gemini.md (1 addition, 1 deletion)

@@ -46,7 +46,7 @@ For the complete, up-to-date model list and capabilities, see [Google's Gemini m
 3. **Enter API Key:** Paste your Gemini API key into the "Gemini API Key" field.
 4. **Select Model:** Choose your desired Gemini model from the "Model" dropdown.

-By default, Roo Code selects a stable Pro model (currently a Gemini 2.5 Pro variant) with a temperature of **1.0** where your provider supports it. This keeps suggestions more expressive and natural while still staying on task. If you need highly deterministic output (for example, for code generation in CI), you can lower the temperature toward `0.0`.
+By default, Roo Code selects a stable Pro model with a temperature of **1.0** where your provider supports it. This keeps suggestions more expressive and natural while still staying on task. If you need highly deterministic output (for example, for code generation in CI), you can lower the temperature toward `0.0`.
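The temperature trade-off described above can be made concrete with a small helper (purely illustrative; the actual setting lives in Roo Code's provider configuration):

```python
# Illustrative only: choose a temperature matching the guidance above.
# 1.0 keeps suggestions expressive and natural; values near 0.0 make
# output more deterministic (e.g., for code generation in CI).
def pick_temperature(deterministic: bool) -> float:
    return 0.0 if deterministic else 1.0
```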
docs/providers/litellm.md (8 additions, 8 deletions)

@@ -48,19 +48,19 @@ To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 ```yaml
 model_list:
   # Configure Anthropic models
-  - model_name: claude-3-7-sonnet
+  - model_name: claude-sonnet
     litellm_params:
-      model: anthropic/claude-3-7-sonnet-20250219
+      model: anthropic/claude-sonnet-model-id
       api_key: os.environ/ANTHROPIC_API_KEY

   # Configure OpenAI models
-  - model_name: gpt-4o
+  - model_name: gpt-model
     litellm_params:
-      model: openai/gpt-4o
+      model: openai/gpt-model-id
       api_key: os.environ/OPENAI_API_KEY

   # Configure Azure OpenAI
-  - model_name: azure-gpt-4
+  - model_name: azure-model
     litellm_params:
       model: azure/my-deployment-name
       api_base: https://your-resource.openai.azure.com/
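The aliasing in the config above (a public `model_name` mapped to a provider-specific `model` ID) can be sketched in code. The structure below mirrors the YAML, using the same placeholder model IDs:

```python
# Mirror of the YAML config above as Python data (placeholder model IDs).
model_list = [
    {"model_name": "claude-sonnet",
     "litellm_params": {"model": "anthropic/claude-sonnet-model-id"}},
    {"model_name": "gpt-model",
     "litellm_params": {"model": "openai/gpt-model-id"}},
]

def resolve(alias: str) -> str:
    """Return the provider-specific model ID behind a public alias."""
    for entry in model_list:
        if entry["model_name"] == alias:
            return entry["litellm_params"]["model"]
    raise KeyError(alias)
```

Clients (including Roo Code) only ever see the aliases; the proxy resolves them to concrete provider model IDs.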
@@ -77,7 +77,7 @@ To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 # Or quick start with a single model
 export ANTHROPIC_API_KEY=your-anthropic-key
-litellm --model claude-3-7-sonnet-20250219
+litellm --model anthropic/claude-model-id
 ```

 4. The proxy will run at `http://0.0.0.0:4000` by default (accessible as `http://localhost:4000`).
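Since the proxy exposes an OpenAI-compatible API, a request against it can be sketched as follows (payload construction only; actually sending it assumes the server above is running, and the alias name comes from the sample config):

```python
BASE_URL = "http://localhost:4000"  # default proxy address from step 4

def chat_request(model_alias: str, prompt: str) -> tuple[str, dict]:
    """Build the endpoint URL and an OpenAI-style payload for the proxy."""
    url = f"{BASE_URL}/v1/chat/completions"
    payload = {
        "model": model_alias,  # the public alias, not the provider model ID
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

url, payload = chat_request("claude-sonnet", "hello")
```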
@@ -105,7 +105,7 @@ Once your LiteLLM server is running, you have two options for configuring it in
 * Roo Code will attempt to fetch the list of available models from your LiteLLM server by querying the `${baseUrl}/v1/model/info` endpoint.
 * The models displayed in the dropdown are sourced from this endpoint.
 * Use the refresh button to update the model list if you've added new models to your LiteLLM server.
-* If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.
+* If no model is selected, Roo Code will use a default model. Ensure you have configured at least one model on your LiteLLM server.

 ### Option 2: Using OpenAI Compatible Provider
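The model-discovery flow described above can be sketched as follows. The response shape is an assumption based on LiteLLM's conventions; verify it against your own server:

```python
def model_info_url(base_url: str) -> str:
    """Build the discovery endpoint queried by Roo Code."""
    return f"{base_url.rstrip('/')}/v1/model/info"

def extract_model_names(response_json: dict) -> list:
    """Pull model names out of a /v1/model/info response.

    The {"data": [{"model_name": ...}, ...]} shape is an assumption here.
    """
    return [m["model_name"] for m in response_json.get("data", [])]

# Hypothetical sample response for illustration:
sample = {"data": [{"model_name": "claude-sonnet"}, {"model_name": "gpt-model"}]}
```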
@@ -133,7 +133,7 @@ When you configure the LiteLLM provider, Roo Code interacts with your LiteLLM se
 * `supportsImages`: Determined from `model_info.supports_vision` provided by LiteLLM.
 * `supportsPromptCache`: Determined from `model_info.supports_prompt_caching` provided by LiteLLM.
 * `inputPrice` / `outputPrice`: Calculated from `model_info.input_cost_per_token` and `model_info.output_cost_per_token` from LiteLLM.
-* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.5-sonnet`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
+* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
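The capability check described above can be sketched like this. The real `COMPUTER_USE_MODELS` set is defined inside Roo Code and may differ; the entries below are placeholders:

```python
# Hypothetical sketch of the supportsComputerUse check; placeholder model IDs.
COMPUTER_USE_MODELS = {"anthropic/model-a", "anthropic/model-b"}

def supports_computer_use(model_id: str) -> bool:
    # The identifier may carry a route prefix (e.g. "openrouter/..."),
    # so match on the trailing provider/model portion.
    return any(model_id.endswith(m) for m in COMPUTER_USE_MODELS)
```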
 Roo Code uses default values for some of these properties if they are not explicitly provided by your LiteLLM server's `/model/info` endpoint for a given model. The defaults are:
docs/providers/openai.md (5 additions, 8 deletions)

@@ -1,15 +1,12 @@
 ---
 sidebar_label: OpenAI
-description: Connect Roo Code to OpenAI's official API for access to GPT-5, GPT-4o, o1, and o3 models with advanced reasoning capabilities and verbosity control.
+description: Connect Roo Code to OpenAI's official API for access to GPT and reasoning models with advanced capabilities and verbosity control.
 keywords:
   - OpenAI
-  - GPT-5
-  - GPT-4o
-  - o1 models
-  - o3-mini
+  - GPT models
+  - reasoning models
   - Roo Code
   - AI integration
-  - reasoning models
   - API key
   - official OpenAI API
   - verbosity
@@ -49,7 +46,7 @@ For the complete, up-to-date model list and capabilities, see [OpenAI's models d
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "OpenAI" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your OpenAI API key into the "OpenAI API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown (defaults to `gpt-5.1`).
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
 5. **(Optional) Base URL:** If you need to use a custom base URL, enter the URL. Most people won't need to adjust this.

 ---
@@ -71,7 +68,7 @@ For models that support reasoning (GPT-5, o1, o3, o4 families), you can control
 - `medium` - Balanced approach
 - `high` - Maximum thinking for complex problems

-Some models have preset reasoning levels (e.g., `o3-high` always uses high reasoning).
+Some models have preset reasoning levels that cannot be changed.
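The effort levels above map onto a request parameter; a sketch of how a preset level might override the user's choice (the preset table is hypothetical, and the `low`/`medium`/`high` values are the ones listed in the hunk above):

```python
# Illustrative: models with a preset reasoning level ignore the requested one.
PRESET_EFFORT = {"model-with-preset": "high"}  # hypothetical preset table

def effective_effort(model: str, requested: str) -> str:
    assert requested in {"low", "medium", "high"}
    return PRESET_EFFORT.get(model, requested)
```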
docs/providers/vscode-lm.md (1 addition, 5 deletions)

@@ -36,11 +36,7 @@ Roo Code includes *experimental* support for the [VS Code Language Model API](ht
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "VS Code LM API" from the "API Provider" dropdown.
-3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like:
-   * `copilot - claude-3.5-sonnet`
-   * `copilot - o3-mini`
-   * `copilot - o1-ga`
-   * `copilot - gemini-2.0-flash`
+3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like `copilot - <model-name>`.
docs/providers/zai.md (3 additions, 3 deletions)

@@ -1,11 +1,11 @@
 ---
 sidebar_label: Z AI
-description: Configure Z AI models in Roo Code. Access GLM-4.5 series models with region-aware routing for international and China mainland users.
+description: Configure Z AI models in Roo Code. Access GLM family models with region-aware routing for international and China mainland users.
 keywords:
   - z ai
   - zai
   - zhipu ai
-  - glm-4.5
+  - glm models
   - roo code
   - api provider
   - china ai

@@ -16,7 +16,7 @@ image: /img/social-share.jpg
 # Using Z AI With Roo Code

-Z AI (Zhipu AI) provides advanced language models with the GLM-4.5 series. The provider offers region-aware routing with separate endpoints for international users and China mainland users.
+Z AI (Zhipu AI) provides advanced language models with the GLM family. The provider offers region-aware routing with separate endpoints for international users and China mainland users.