
Commit 9eaf740

Sannidhya authored and committed
docs: remove specific model IDs and version numbers from provider documentation
- Remove hardcoded model names/IDs that quickly become outdated - Replace with generic references to model families and catalogs - Keep examples only where necessary for technical understanding (file formats, syntax) Updated files: - vscode-lm.md: Remove specific model examples - openai.md: Clean keywords and remove version-specific references - bedrock.md: Remove dated model ID examples - litellm.md: Use generic placeholders in config examples - vercel-ai-gateway.md: Remove default model reference - qwen-code.md: Replace specific model list with catalog reference - gemini.md: Remove version-specific model mention - zai.md: Change from 'GLM-4.5' to 'GLM family'
1 parent 2dd134d commit 9eaf740

8 files changed: +22 -33 lines changed

docs/providers/bedrock.md

Lines changed: 2 additions & 2 deletions

@@ -53,7 +53,7 @@ Roo Code supports all foundation models available through Amazon Bedrock.
 
 For the complete, up-to-date model list with IDs and capabilities, see [AWS Bedrock's supported models documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
 
-**Important:** Use the *model ID* (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`) when configuring Roo Code, not the model name.
+**Important:** Use the *model ID* when configuring Roo Code, not the model name.
 
 ---

@@ -84,7 +84,7 @@ Roo Code supports using the reasoning budget (extended thinking) for Anthropic's
 
 To enable the reasoning budget:
 
-1. **Select a supported Claude model** (e.g., `anthropic.claude-opus-4.1-20250514-v1:0`, `anthropic.claude-3-sonnet-20240229-v1:0`).
+1. **Select a supported Claude model** that includes reasoning capabilities.
 2. **Enable Reasoning Mode** in the model settings.
 3. **Adjust the thinking budget** to control how much the model should "think".

docs/providers/gemini.md

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ For the complete, up-to-date model list and capabilities, see [Google's Gemini m
 
 3. **Enter API Key:** Paste your Gemini API key into the "Gemini API Key" field.
 4. **Select Model:** Choose your desired Gemini model from the "Model" dropdown.
 
-By default, Roo Code selects a stable Pro model (currently a Gemini 2.5 Pro variant) with a temperature of **1.0** where your provider supports it. This keeps suggestions more expressive and natural while still staying on task. If you need highly deterministic output (for example, for code generation in CI), you can lower the temperature toward `0.0`.
+By default, Roo Code selects a stable Pro model with a temperature of **1.0** where your provider supports it. This keeps suggestions more expressive and natural while still staying on task. If you need highly deterministic output (for example, for code generation in CI), you can lower the temperature toward `0.0`.
 
 ---

docs/providers/litellm.md

Lines changed: 8 additions & 8 deletions

@@ -48,19 +48,19 @@ To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 ```yaml
 model_list:
   # Configure Anthropic models
-  - model_name: claude-3-7-sonnet
+  - model_name: claude-sonnet
     litellm_params:
-      model: anthropic/claude-3-7-sonnet-20250219
+      model: anthropic/claude-sonnet-model-id
       api_key: os.environ/ANTHROPIC_API_KEY
 
   # Configure OpenAI models
-  - model_name: gpt-4o
+  - model_name: gpt-model
     litellm_params:
-      model: openai/gpt-4o
+      model: openai/gpt-model-id
       api_key: os.environ/OPENAI_API_KEY
 
   # Configure Azure OpenAI
-  - model_name: azure-gpt-4
+  - model_name: azure-model
     litellm_params:
       model: azure/my-deployment-name
       api_base: https://your-resource.openai.azure.com/

@@ -77,7 +77,7 @@ To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 
 # Or quick start with a single model
 export ANTHROPIC_API_KEY=your-anthropic-key
-litellm --model claude-3-7-sonnet-20250219
+litellm --model anthropic/claude-model-id
 ```
 
 4. The proxy will run at `http://0.0.0.0:4000` by default (accessible as `http://localhost:4000`).

@@ -105,7 +105,7 @@ Once your LiteLLM server is running, you have two options for configuring it in
 
 * Roo Code will attempt to fetch the list of available models from your LiteLLM server by querying the `${baseUrl}/v1/model/info` endpoint.
 * The models displayed in the dropdown are sourced from this endpoint.
 * Use the refresh button to update the model list if you've added new models to your LiteLLM server.
-* If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.
+* If no model is selected, Roo Code will use a default model. Ensure you have configured at least one model on your LiteLLM server.
 
 ### Option 2: Using OpenAI Compatible Provider

@@ -133,7 +133,7 @@ When you configure the LiteLLM provider, Roo Code interacts with your LiteLLM se
 
 * `supportsImages`: Determined from `model_info.supports_vision` provided by LiteLLM.
 * `supportsPromptCache`: Determined from `model_info.supports_prompt_caching` provided by LiteLLM.
 * `inputPrice` / `outputPrice`: Calculated from `model_info.input_cost_per_token` and `model_info.output_cost_per_token` from LiteLLM.
-* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.5-sonnet`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
+* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
 
 Roo Code uses default values for some of these properties if they are not explicitly provided by your LiteLLM server's `/model/info` endpoint for a given model. The defaults are:
 * `maxTokens`: 8192
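The `model_info` field mapping described in the litellm.md hunks can be sketched in Python. This is a hypothetical helper, not Roo Code's actual code: the function name, the per-million-token price scaling, and the fallback handling are illustrative assumptions; the field names come from the documentation above.

```python
# Hypothetical sketch: derive the documented model properties from a single
# entry returned by a LiteLLM server's /v1/model/info endpoint.
# Function name, price scaling, and fallback logic are assumptions.

def derive_model_properties(entry: dict) -> dict:
    info = entry.get("model_info") or {}
    input_cost = info.get("input_cost_per_token")
    output_cost = info.get("output_cost_per_token")
    return {
        "supportsImages": bool(info.get("supports_vision", False)),
        "supportsPromptCache": bool(info.get("supports_prompt_caching", False)),
        # Assume prices are surfaced per million tokens.
        "inputPrice": input_cost * 1_000_000 if input_cost is not None else None,
        "outputPrice": output_cost * 1_000_000 if output_cost is not None else None,
        # Fall back to the documented default when the server omits a value.
        "maxTokens": info.get("max_tokens") or 8192,
    }

example = {
    "model_name": "claude-sonnet",
    "model_info": {
        "supports_vision": True,
        "supports_prompt_caching": True,
        "input_cost_per_token": 3e-06,
        "output_cost_per_token": 1.5e-05,
    },
}
print(derive_model_properties(example))
```

Roo Code performs an equivalent lookup for each model returned by `${baseUrl}/v1/model/info`, falling back to documented defaults such as `maxTokens: 8192`.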

docs/providers/openai.md

Lines changed: 5 additions & 8 deletions

@@ -1,15 +1,12 @@
 ---
 sidebar_label: OpenAI
-description: Connect Roo Code to OpenAI's official API for access to GPT-5, GPT-4o, o1, and o3 models with advanced reasoning capabilities and verbosity control.
+description: Connect Roo Code to OpenAI's official API for access to GPT and reasoning models with advanced capabilities and verbosity control.
 keywords:
   - OpenAI
-  - GPT-5
-  - GPT-4o
-  - o1 models
-  - o3-mini
+  - GPT models
+  - reasoning models
   - Roo Code
   - AI integration
-  - reasoning models
   - API key
   - official OpenAI API
   - verbosity

@@ -49,7 +46,7 @@ For the complete, up-to-date model list and capabilities, see [OpenAI's models d
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "OpenAI" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your OpenAI API key into the "OpenAI API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown (defaults to `gpt-5.1`).
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
 5. **(Optional) Base URL:** If you need to use a custom base URL, enter the URL. Most people won't need to adjust this.
 
 ---

@@ -71,7 +68,7 @@ For models that support reasoning (GPT-5, o1, o3, o4 families), you can control
 
 - `medium` - Balanced approach
 - `high` - Maximum thinking for complex problems
 
-Some models have preset reasoning levels (e.g., `o3-high` always uses high reasoning).
+Some models have preset reasoning levels that cannot be changed.
 
 ### Verbosity Control
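The reasoning levels in the openai.md hunk correspond to the `reasoning_effort` parameter of OpenAI's Chat Completions API. As an illustrative sketch only (the model ID is a placeholder and no request is sent; Roo Code sets this field for you from its settings UI):

```python
# Hypothetical sketch: compose a Chat Completions request body that pins a
# reasoning-effort level. Placeholder model ID; nothing is sent over the network.

def build_request(prompt: str, effort: str = "medium") -> dict:
    allowed = {"low", "medium", "high"}  # levels described in the doc
    if effort not in allowed:
        raise ValueError(f"unsupported reasoning effort: {effort}")
    return {
        "model": "<your-reasoning-model-id>",  # placeholder, not a real ID
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("Summarize this diff.", effort="high")["reasoning_effort"])
```

Models with preset reasoning levels ignore or reject this parameter, which is why the doc notes those presets cannot be changed.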

docs/providers/qwen-code.md

Lines changed: 2 additions & 4 deletions

@@ -31,11 +31,9 @@ Access Alibaba's Qwen3 Coder models through OAuth authentication with automatic
 
 ## Available Models
 
-Both Qwen3 Coder models feature massive 1M context windows and 65K max output tokens.
+Qwen3 Coder models feature massive 1M context windows and 65K max output tokens.
 
-**Available models:**
-- **qwen3-coder-plus** - High-performance coding model (default)
-- **qwen3-coder-flash** - Speed-optimized variant
+For the complete, up-to-date model list, see the Qwen Code provider's model catalog when you configure the provider in Roo Code.
 
 ---

docs/providers/vercel-ai-gateway.md

Lines changed: 0 additions & 2 deletions

@@ -41,8 +41,6 @@ Roo Code automatically fetches all available models from Vercel AI Gateway's API
 
 For the complete, up-to-date model catalog with capabilities, see [Vercel's AI Gateway models page](https://vercel.com/ai-gateway/models).
 
-**Default:** `anthropic/claude-sonnet-4` if no model is selected.
-
 ---
 
 ## Configuration in Roo Code

docs/providers/vscode-lm.md

Lines changed: 1 addition & 5 deletions

@@ -36,11 +36,7 @@ Roo Code includes *experimental* support for the [VS Code Language Model API](ht
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "VS Code LM API" from the "API Provider" dropdown.
-3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like:
-   * `copilot - claude-3.5-sonnet`
-   * `copilot - o3-mini`
-   * `copilot - o1-ga`
-   * `copilot - gemini-2.0-flash`
+3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like `copilot - <model-name>`.
 
 ---

docs/providers/zai.md

Lines changed: 3 additions & 3 deletions

@@ -1,11 +1,11 @@
 ---
 sidebar_label: Z AI
-description: Configure Z AI models in Roo Code. Access GLM-4.5 series models with region-aware routing for international and China mainland users.
+description: Configure Z AI models in Roo Code. Access GLM family models with region-aware routing for international and China mainland users.
 keywords:
   - z ai
   - zai
   - zhipu ai
-  - glm-4.5
+  - glm models
   - roo code
   - api provider
   - china ai

@@ -16,7 +16,7 @@ image: /img/social-share.jpg
 
 # Using Z AI With Roo Code
 
-Z AI (Zhipu AI) provides advanced language models with the GLM-4.5 series. The provider offers region-aware routing with separate endpoints for international users and China mainland users.
+Z AI (Zhipu AI) provides advanced language models with the GLM family. The provider offers region-aware routing with separate endpoints for international users and China mainland users.
 
 **Website:** [https://z.ai/model-api](https://z.ai/model-api) (International) | [https://open.bigmodel.cn/](https://open.bigmodel.cn/) (China)