Commit d5a9b9f

SannidhyaSah and Sannidhya authored
docs: simplify provider model lists by linking to authoritative sources (#432)
Co-authored-by: Sannidhya <[email protected]>
1 parent 3500423 commit d5a9b9f

29 files changed: +120 −577 lines

docs/providers/anthropic.md
Lines changed: 5 additions & 17 deletions

@@ -32,23 +32,11 @@ Anthropic is an AI safety and research company that builds reliable, interpretab
 
 ---
 
-## Supported Models
-
-Roo Code supports the following Anthropic Claude models:
-
-* `claude-sonnet-4-5` (Latest, Recommended)
-* `claude-opus-4-5-20251101`
-* `claude-opus-4-1-20250805`
-* `claude-opus-4-20250514`
-* `claude-sonnet-4-20250514`
-* `claude-3-7-sonnet-20250219`
-* `claude-3-7-sonnet-20250219:thinking` (Extended Thinking variant)
-* `claude-3-5-sonnet-20241022`
-* `claude-3-5-haiku-20241022`
-* `claude-3-opus-20240229`
-* `claude-3-haiku-20240307`
-
-See [Anthropic's Model Documentation](https://docs.anthropic.com/en/docs/about-claude/models) for more details on each model's capabilities.
+## Available Models
+
+Roo Code supports all Claude models available through Anthropic's API.
+
+For the complete, up-to-date model list and capabilities, see [Anthropic's model documentation](https://docs.anthropic.com/en/docs/about-claude/models).
 
 ---
 

docs/providers/bedrock.md
Lines changed: 7 additions & 47 deletions

@@ -47,53 +47,13 @@ You have two main options for configuring AWS credentials:
 
 ---
 
-## Supported Models
-
-Roo Code supports the following models through Bedrock (based on source code):
-
-* **Amazon:**
-    * `amazon.nova-pro-v1:0`
-    * `amazon.nova-pro-latency-optimized-v1:0`
-    * `amazon.nova-lite-v1:0`
-    * `amazon.nova-micro-v1:0`
-    * `amazon.titan-text-lite-v1:0`
-    * `amazon.titan-text-express-v1:0`
-    * `amazon.titan-text-embeddings-v1:0`
-    * `amazon.titan-text-embeddings-v2:0`
-* **Anthropic:**
-    * `anthropic.claude-sonnet-4-5-20250929-v1:0` (Default)
-    * `anthropic.claude-opus-4.1-20250514-v1:0`
-    * `anthropic.claude-opus-4-20250514-v1:0`
-    * `anthropic.claude-sonnet-4-20250514-v1:0`
-    * `anthropic.claude-3-7-sonnet-20250219-v1:0`
-    * `anthropic.claude-3-5-sonnet-20241022-v2:0`
-    * `anthropic.claude-3-5-haiku-20241022-v1:0`
-    * `anthropic.claude-3-5-sonnet-20240620-v1:0`
-    * `anthropic.claude-3-opus-20240229-v1:0`
-    * `anthropic.claude-3-sonnet-20240229-v1:0`
-    * `anthropic.claude-3-haiku-20240307-v1:0`
-    * `anthropic.claude-2-1-v1:0`
-    * `anthropic.claude-2-0-v1:0`
-    * `anthropic.claude-instant-v1:0`
-* **DeepSeek:**
-    * `deepseek.r1-v1:0`
-* **Meta:**
-    * `meta.llama3-3-70b-instruct-v1:0`
-    * `meta.llama3-2-90b-instruct-v1:0`
-    * `meta.llama3-2-11b-instruct-v1:0`
-    * `meta.llama3-2-3b-instruct-v1:0`
-    * `meta.llama3-2-1b-instruct-v1:0`
-    * `meta.llama3-1-405b-instruct-v1:0`
-    * `meta.llama3-1-70b-instruct-v1:0`
-    * `meta.llama3-1-70b-instruct-latency-optimized-v1:0`
-    * `meta.llama3-1-8b-instruct-v1:0`
-    * `meta.llama3-70b-instruct-v1:0`
-    * `meta.llama3-8b-instruct-v1:0`
-* **OpenAI:**
-    * `openai.gpt-oss-20b-1:0`
-    * `openai.gpt-oss-120b-1:0`
-
-Refer to the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html) for the most up-to-date list of available models and their IDs. Make sure to use the *model ID* when configuring Roo Code, not the model name.
+## Available Models
+
+Roo Code supports all foundation models available through Amazon Bedrock.
+
+For the complete, up-to-date model list with IDs and capabilities, see [AWS Bedrock's supported models documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
+
+**Important:** Use the *model ID* (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`) when configuring Roo Code, not the model name.
 
 ---
 

docs/providers/cerebras.md
Lines changed: 9 additions & 3 deletions

@@ -41,9 +41,15 @@ Cerebras AI specializes in extremely fast inference speeds (up to 2600 tokens/se
 
 ---
 
+## Available Models
+
+Roo Code automatically fetches all available models from Cerebras AI's API.
+
+For the complete, up-to-date model list and pricing, see [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode).
+
+---
+
 ## Tips and Notes
 
 * **Performance:** Cerebras specializes in extremely fast inference speeds, making it ideal for real-time coding assistance.
-* **Free Tier:** The `qwen-3-coder-480b-free` model provides access to high-performance inference at no cost with rate limits.
-* **Context Windows:** Models support context windows ranging from 64K to 128K tokens.
-* **Pricing:** Refer to the [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode) dashboard for the latest pricing information.
+* **Pricing:** Check the [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode) dashboard for current pricing and free tier details.

docs/providers/chutes.md
Lines changed: 3 additions & 3 deletions

@@ -27,11 +27,11 @@ To use Chutes AI with Roo Code, obtain an API key from the [Chutes AI platform](
 
 ---
 
-## Supported Models
+## Available Models
 
-Roo Code will attempt to fetch the list of available models from the Chutes AI API. The specific models available will depend on Chutes AI's current offerings.
+Roo Code automatically fetches all available models from Chutes AI's API.
 
-Always refer to the official Chutes AI documentation or your dashboard for the most up-to-date list of supported models.
+For the complete, up-to-date model list, see [Chutes AI's platform](https://chutes.ai/) or your account dashboard.
 
 ---

docs/providers/claude-code.md
Lines changed: 3 additions & 26 deletions

@@ -42,21 +42,6 @@ If this environment variable is set on your system, the `claude` tool may use it
 
 ---
 
-## Key Features
-- **Direct CLI Access**: Uses Anthropic's official Claude CLI tool for model interactions.
-- **Advanced Reasoning**: Full support for Claude's thinking mode and reasoning capabilities.
-- **Cost Transparency**: Shows exact usage costs as reported by the CLI.
-- **Flexible Configuration**: Works with your existing Claude CLI setup.
-
----
-
-## Why Use This Provider
-
-- **No API Keys**: Uses your existing Claude CLI authentication.
-- **Cost Benefits**: Leverage CLI subscription rates and transparent cost reporting.
-- **Latest Features**: Access new Claude capabilities as they're released in the CLI.
-- **Advanced Reasoning**: Full support for Claude's thinking modes.
-
 ## How it Works
 
 The Claude Code provider works by:

@@ -94,19 +79,11 @@ export CLAUDE_CODE_MAX_OUTPUT_TOKENS=32768 # Set to 32k tokens
 
 ---
 
-## Supported Models
-
-The Claude Code provider supports these Claude models:
-
-- **Claude Opus 4.1** (Most capable)
-- **Claude Opus 4**
-- **Claude Sonnet 4** (Latest, recommended)
-- **Claude 3.7 Sonnet**
-- **Claude 3.5 Sonnet**
-- **Claude 3.5 Haiku** (Fast responses)
+## Available Models
 
-The specific models available depend on your Claude CLI subscription and plan.
+The Claude Code provider supports all Claude models available through the official CLI.
 
+Model availability depends on your Claude CLI subscription and plan. See [Anthropic's CLI documentation](https://docs.anthropic.com/en/docs/claude-code/setup) for details.
 
 ---

docs/providers/deepinfra.md
Lines changed: 4 additions & 47 deletions

@@ -31,20 +31,11 @@ DeepInfra provides cost-effective access to high-performance open-source models
 
 ---
 
-## Supported Models
+## Available Models
 
-Roo Code dynamically fetches available models from DeepInfra's API. The default model is:
+Roo Code automatically fetches all available models from DeepInfra's API.
 
-* `Qwen/Qwen3-Coder-480B-A35B-Instruct-Turbo` (256K context, optimized for coding)
-
-Common models available include:
-
-* **Coding Models:** Qwen Coder series, specialized for programming tasks
-* **General Models:** Llama 3.1, Mixtral, and other open-source models
-* **Vision Models:** Models with image understanding capabilities
-* **Reasoning Models:** Models with advanced reasoning support
-
-Browse the full catalog at [deepinfra.com/models](https://deepinfra.com/models).
+For the complete, up-to-date model catalog, see [DeepInfra's models page](https://deepinfra.com/models).
 
 ---
 

@@ -53,38 +44,4 @@ Browse the full catalog at [deepinfra.com/models](https://deepinfra.com/models).
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "DeepInfra" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your DeepInfra API key into the "DeepInfra API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-    - Models will auto-populate after entering a valid API key
-    - Click "Refresh Models" to update the list
-
----
-
-## Advanced Features
-
-### Prompt Caching
-
-DeepInfra supports prompt caching for eligible models, which:
-- Reduces costs for repeated contexts
-- Improves response times for similar queries
-- Automatically manages cache based on task IDs
-
-### Vision Support
-
-Models with vision capabilities can:
-- Process images alongside text
-- Understand visual content for coding tasks
-- Analyze screenshots and diagrams
-
-### Custom Base URL
-
-For enterprise deployments, you can configure a custom base URL in the advanced settings.
-
----
-
-## Tips and Notes
-
-* **Performance:** DeepInfra offers low latency with automatic load balancing across global locations.
-* **Cost Efficiency:** Competitive pricing with prompt caching to reduce costs for repeated contexts.
-* **Model Variety:** Access to the latest open-source models including specialized coding models.
-* **Context Windows:** Models support context windows up to 256K tokens for large codebases.
-* **Pricing:** Pay-per-use model with no minimums. Check [deepinfra.com](https://deepinfra.com/) for current pricing.
+4. **Select Model:** Choose your desired model from the "Model" dropdown.

docs/providers/deepseek.md
Lines changed: 3 additions & 4 deletions

@@ -30,12 +30,11 @@ Roo Code supports accessing models through the DeepSeek API, including `deepseek
 
 ---
 
-## Supported Models
+## Available Models
 
-Roo Code supports the following DeepSeek models:
+Roo Code supports all models available through the DeepSeek API.
 
-* `deepseek-chat` (Recommended for coding tasks)
-* `deepseek-reasoner` (Recommended for reasoning tasks)
+For the complete, up-to-date model list, see [DeepSeek's API documentation](https://api-docs.deepseek.com/quick_start/pricing).
 
 ---

docs/providers/doubao.md
Lines changed: 3 additions & 11 deletions

@@ -30,19 +30,11 @@ Doubao is ByteDance's Chinese AI service, offering competitive language models f
 
 ---
 
-## Supported Models
+## Available Models
 
-Roo Code supports the following Doubao models:
+Roo Code supports all Doubao models available through ByteDance's Volcano Engine API.
 
-* `doubao-seed-1-6-250615` (Default) - General purpose
-* `doubao-seed-1-6-thinking-250715` - Enhanced reasoning
-* `doubao-seed-1-6-flash-250715` - Speed optimized
-
-All models support:
-- 128,000 token context window
-- 32,768 max output tokens
-- Image inputs
-- Prompt caching with 80% discount on cached reads
+For the complete, up-to-date model list, see [Volcano Engine's AI model service](https://www.volcengine.com/).
 
 ---

docs/providers/featherless.md
Lines changed: 4 additions & 33 deletions

@@ -32,26 +32,11 @@ Featherless AI provides access to high-performance open-source models including
 
 ---
 
-## Supported Models
+## Available Models
 
-Roo Code supports the following Featherless models:
+Roo Code automatically fetches all available models from Featherless AI's API.
 
-* `deepseek-ai/DeepSeek-R1-0528` (Default) - DeepSeek R1 reasoning model with `<think>` tag support
-* `deepseek-ai/DeepSeek-V3-0324` - DeepSeek V3 model
-* `moonshotai/Kimi-K2-Instruct` - Kimi K2 instruction-following model
-* `openai/gpt-oss-120b` - GPT-OSS 120B parameter model
-* `Qwen/Qwen3-Coder-480B-A35B-Instruct` - Qwen3 specialized coding model
-
-### Model Capabilities
-
-All models support:
-- **Context Window:** ~32,678 tokens
-- **Max Output:** 4,096 tokens
-- **Pricing:** Free (no cost for input/output tokens)
-
-:::info
-**DeepSeek R1 Models:** The DeepSeek R1 models (like `DeepSeek-R1-0528`) include special reasoning capabilities with `<think>` tag support for step-by-step problem solving. These models automatically separate reasoning from regular output.
-:::
+For the complete, up-to-date model list, see [Featherless AI](https://featherless.ai).
 
 ---
 

@@ -60,18 +45,4 @@ All models support:
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "Featherless AI" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your Featherless API key into the "Featherless API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-
----
-
-## Tips and Notes
-
-* **Free Tier:** All models are currently free with no usage costs, making Featherless ideal for experimentation and development.
-* **Model Selection:** Choose models based on your needs:
-    - **DeepSeek R1 models:** Best for complex reasoning and problem-solving tasks
-    - **DeepSeek V3:** General-purpose model for various tasks
-    - **Qwen3 Coder:** Optimized for code generation and programming tasks
-    - **Kimi K2:** Balanced instruction-following model
-    - **GPT-OSS:** Large general-purpose model
-* **OpenAI Compatibility:** Featherless uses an OpenAI-compatible API format for easy integration.
-* **Limitations:** No image support or prompt caching available on any model.
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
