docs: simplify provider model lists by linking to authoritative sources
Replace exhaustive model ID lists with links to provider documentation.
This eliminates the maintenance burden of constantly updating model lists
and ensures users always see current, accurate information.
Changes:
- Remove detailed model catalogs from 28 provider docs
- Add links to official provider model documentation
- Preserve Roo Code-specific guidance (2-4 recommended models)
- Keep special integration features (reasoning effort, troubleshooting, etc.)
Benefits:
- No stale documentation from daily model changes
- Single source of truth (provider APIs)
- Reduced user confusion
- Lower maintenance overhead
See [Anthropic's Model Documentation](https://docs.anthropic.com/en/docs/about-claude/models) for more details on each model's capabilities.

+## Available Models
+
+Roo Code supports all Claude models available through Anthropic's API.
+
+For the complete, up-to-date model list and capabilities, see [Anthropic's model documentation](https://docs.anthropic.com/en/docs/about-claude/models).
+
+**Recommended for Roo Code:**
+- **Sonnet models** - Best balance of performance and cost for most coding tasks (default)
+- **Opus models** - Better for complex reasoning and large-scale refactoring
+- **Haiku models** - Faster and more cost-effective for simpler tasks
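Since the docs now point at Anthropic's own model list, the current catalog can be pulled directly from the models endpoint instead of being hard-coded. A minimal sketch, assuming the `requests` package and an `ANTHROPIC_API_KEY` environment variable (the endpoint and version header follow Anthropic's public API docs):

```python
# Minimal sketch: list currently available Claude models from Anthropic's API.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import os
import requests

resp = requests.get(
    "https://api.anthropic.com/v1/models",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
    },
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # e.g. claude-sonnet-*, claude-opus-*, claude-haiku-*
```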
Refer to the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html) for the most up-to-date list of available models and their IDs. Make sure to use the *model ID* when configuring Roo Code, not the model name.

+## Available Models
+
+Roo Code supports all foundation models available through Amazon Bedrock.
+
+For the complete, up-to-date model list with IDs and capabilities, see [AWS Bedrock's supported models documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
+
+**Important:** Use the *model ID* (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`) when configuring Roo Code, not the model name.
+
+**Recommended for Roo Code:**
+- **Claude Sonnet models** - Best balance for most coding tasks (default: `anthropic.claude-sonnet-4-5-20250929-v1:0`)
+- **Amazon Nova models** - Better for AWS-integrated workflows
+- **Meta Llama models** - Good for open-source requirements
+
+**Note:** Model availability varies by AWS region. Request access to specific models through the Bedrock console before use.
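To make the "model ID, not model name" point concrete, here is a hedged boto3 sketch using the Converse API. It assumes AWS credentials are configured and the model has been enabled in the chosen region; note that some newer Anthropic models may require a cross-region inference profile ID (e.g., a `us.`-prefixed ID) rather than the bare model ID shown here.

```python
# Sketch: invoke a Bedrock model by its model ID (not its display name).
# Assumes boto3 is installed, credentials are set up, and model access has
# been granted in the Bedrock console for this region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",  # ID from the Bedrock docs
    messages=[{"role": "user", "content": [{"text": "Summarize this repo's README."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```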
docs/providers/chutes.md: 5 additions & 3 deletions
@@ -27,11 +27,13 @@ To use Chutes AI with Roo Code, obtain an API key from the [Chutes AI platform](

---

-## Supported Models
+## Available Models

-Roo Code will attempt to fetch the list of available models from the Chutes AI API. The specific models available will depend on Chutes AI's current offerings.
+Roo Code automatically fetches all available models from Chutes AI's API.

-Always refer to the official Chutes AI documentation or your dashboard for the most up-to-date list of supported models.
+For the complete, up-to-date model list, see [Chutes AI's platform](https://chutes.ai/) or your account dashboard.
+
+**Key advantage:** Free API access to multiple LLMs for experimentation and development.
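"Automatically fetches all available models" boils down to querying the provider's model-list endpoint. A rough sketch of the same idea, assuming Chutes AI exposes an OpenAI-compatible API; the base URL and `CHUTES_API_KEY` variable are placeholders to verify against Chutes AI's documentation:

```python
# Rough sketch: list models from an OpenAI-compatible endpoint.
# The base URL is an assumption -- confirm the real one in Chutes AI's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.chutes.ai/v1",  # placeholder; verify in Chutes AI's docs
    api_key=os.environ["CHUTES_API_KEY"],
)
for model in client.models.list():
    print(model.id)
```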
docs/providers/claude-code.md: 7 additions & 9 deletions
@@ -94,18 +94,16 @@ export CLAUDE_CODE_MAX_OUTPUT_TOKENS=32768 # Set to 32k tokens

---

-## Supported Models
+## Available Models

-The Claude Code provider supports these Claude models:
+The Claude Code provider supports all Claude models available through the official CLI.

-- **Claude Opus 4.1** (Most capable)
-- **Claude Opus 4**
-- **Claude Sonnet 4** (Latest, recommended)
-- **Claude 3.7 Sonnet**
-- **Claude 3.5 Sonnet**
-- **Claude 3.5 Haiku** (Fast responses)
+Model availability depends on your Claude CLI subscription and plan. See [Anthropic's CLI documentation](https://docs.anthropic.com/en/docs/claude-code/setup) for details.

-The specific models available depend on your Claude CLI subscription and plan.
+**Recommended:**
+- **Sonnet models** - Best balance for most coding tasks (latest recommended)
+- **Opus models** - Better for complex reasoning
+- **Haiku models** - Faster responses when speed matters
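Because this provider shells out to the locally installed Claude CLI, model choice ultimately resolves to whatever that CLI accepts for your plan. A hedged sketch of exercising that from a script; the `-p` (print mode) and `--model` flags should be checked against `claude --help` for the installed version:

```python
# Hedged sketch: run a one-off prompt through the local Claude CLI with an
# explicit model alias. Flag names are assumptions -- verify with `claude --help`.
import subprocess

result = subprocess.run(
    ["claude", "-p", "Reply with the word ok.", "--model", "sonnet"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```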
docs/providers/featherless.md: 9 additions & 15 deletions
@@ -32,26 +32,20 @@ Featherless AI provides access to high-performance open-source models including

---

-## Supported Models
+## Available Models

-Roo Code supports the following Featherless models:
+Roo Code automatically fetches all available models from Featherless AI's API.

-* `deepseek-ai/DeepSeek-R1-0528` (Default) - DeepSeek R1 reasoning model with `<think>` tag support
-* `deepseek-ai/DeepSeek-V3-0324` - DeepSeek V3 model
-* `moonshotai/Kimi-K2-Instruct` - Kimi K2 instruction-following model
-* `openai/gpt-oss-120b` - GPT-OSS 120B parameter model
-* `Qwen/Qwen3-Coder-480B-A35B-Instruct` - Qwen3 specialized coding model
+For the complete, up-to-date model list, see [Featherless AI](https://featherless.ai).

-### Model Capabilities
+**All models are currently FREE** with no usage costs.

-All models support:
-- **Context Window:** ~32,678 tokens
-- **Max Output:** 4,096 tokens
-- **Pricing:** Free (no cost for input/output tokens)
+**Recommended for Roo Code:**
+- **DeepSeek R1 models** - Best for complex reasoning with `<think>` tag support (default)
+- **Qwen3 Coder** - Better for specialized code generation tasks
+- **Kimi K2** - Good for balanced instruction-following

-:::info
-**DeepSeek R1 Models:** The DeepSeek R1 models (like `DeepSeek-R1-0528`) include special reasoning capabilities with `<think>` tag support for step-by-step problem solving. These models automatically separate reasoning from regular output.
-:::
+**Note:** Most models have ~32K context window and 4K max output. No image support or prompt caching available.
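The R1 reasoning behavior described above can be exercised with a standard chat-completions call, on the assumption that Featherless exposes an OpenAI-compatible endpoint. In this sketch the `https://api.featherless.ai/v1` base URL and the `FEATHERLESS_API_KEY` variable are assumptions to verify against Featherless' own documentation:

```python
# Sketch: call DeepSeek R1 via an OpenAI-compatible endpoint and strip the
# <think>...</think> reasoning block from the visible answer.
# Base URL and env var name are assumptions; check Featherless' docs.
import os
import re
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed base URL
    api_key=os.environ["FEATHERLESS_API_KEY"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[{"role": "user", "content": "Why might a binary search loop never terminate?"}],
)
raw = completion.choices[0].message.content
answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
print(answer)
```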