
Commit c79f32f

Sannidhya authored and committed

Address PR #432 review: fix broken URL and remove unnecessary promotional content from provider docs

1 parent d4d542c commit c79f32f

28 files changed: +18 −260 lines

docs/providers/anthropic.md

Lines changed: 0 additions & 5 deletions
```diff
@@ -38,11 +38,6 @@ Roo Code supports all Claude models available through Anthropic's API.
 
 For the complete, up-to-date model list and capabilities, see [Anthropic's model documentation](https://docs.anthropic.com/en/docs/about-claude/models).
 
-**Recommended for Roo Code:**
-- **Sonnet models** - Best balance of performance and cost for most coding tasks (default)
-- **Opus models** - Better for complex reasoning and large-scale refactoring
-- **Haiku models** - Faster and more cost-effective for simpler tasks
-
 ---
 
 ## Configuration in Roo Code
```

docs/providers/bedrock.md

Lines changed: 0 additions & 7 deletions
```diff
@@ -55,13 +55,6 @@ For the complete, up-to-date model list with IDs and capabilities, see [AWS Bedr
 
 **Important:** Use the *model ID* (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`) when configuring Roo Code, not the model name.
 
-**Recommended for Roo Code:**
-- **Claude Sonnet models** - Best balance for most coding tasks (default: `anthropic.claude-sonnet-4-5-20250929-v1:0`)
-- **Amazon Nova models** - Better for AWS-integrated workflows
-- **Meta Llama models** - Good for open-source requirements
-
-**Note:** Model availability varies by AWS region. Request access to specific models through the Bedrock console before use.
-
 ---
 
 ## Configuration in Roo Code
```
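The *model ID vs. model name* distinction above can be sanity-checked mechanically. A minimal sketch (not part of Roo Code or the AWS SDK; the regex is an illustrative assumption, not AWS's official ID grammar):

```python
import re

# Illustrative check: Bedrock expects a model ID such as
# "anthropic.claude-sonnet-4-5-20250929-v1:0", not a display name like
# "Claude Sonnet 4.5". Rough pattern: vendor prefix, dot, hyphenated slug,
# "-v<N>" version, and a ":<revision>" suffix. This pattern is an assumption
# for demonstration only.
MODEL_ID_RE = re.compile(r"^[a-z0-9-]+\.[A-Za-z0-9.-]+-v\d+:\d+$")

def looks_like_model_id(value: str) -> bool:
    """Return True if the string resembles a Bedrock model ID (not a name)."""
    return bool(MODEL_ID_RE.match(value))

print(looks_like_model_id("anthropic.claude-sonnet-4-5-20250929-v1:0"))  # a model ID
print(looks_like_model_id("Claude Sonnet 4.5"))  # a display name, rejected
```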

docs/providers/cerebras.md

Lines changed: 1 addition & 6 deletions
```diff
@@ -45,12 +45,7 @@ Cerebras AI specializes in extremely fast inference speeds (up to 2600 tokens/se
 
 Roo Code automatically fetches all available models from Cerebras AI's API.
 
-For the complete, up-to-date model list and pricing, see [Cerebras Cloud](https://cloud.cerebras. ai?utm_source=roocode).
-
-**Key advantages:**
-- Ultra-fast inference (up to 2600 tokens/second)
-- Free tier available with rate limits
-- Context windows: 64K-128K tokens
+For the complete, up-to-date model list and pricing, see [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode).
 
 ---
 
```
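The URL fix above (a stray space splitting `cloud.cerebras.ai`) is exactly the kind of breakage a quick docs lint can catch before review. A minimal sketch, assuming the docs are plain Markdown; `broken_urls` is a hypothetical helper, not part of this repo:

```python
import re

# Find markdown links whose target contains whitespace, i.e. URLs that were
# accidentally split (e.g. "https://cloud.cerebras. ai"), the kind of
# breakage this commit fixes in docs/providers/cerebras.md.
LINK_RE = re.compile(r"\[[^\]]+\]\(([^)]+)\)")

def broken_urls(markdown: str) -> list[str]:
    """Return link targets that contain whitespace and are therefore broken."""
    return [url for url in LINK_RE.findall(markdown) if re.search(r"\s", url)]

before = "see [Cerebras Cloud](https://cloud.cerebras. ai?utm_source=roocode)."
after = "see [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode)."
print(broken_urls(before))  # the split URL is flagged
print(broken_urls(after))   # the corrected URL passes
```

Running such a check over `docs/providers/*.md` in CI would make this class of review comment unnecessary.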

docs/providers/chutes.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -33,8 +33,6 @@ Roo Code automatically fetches all available models from Chutes AI's API.
 
 For the complete, up-to-date model list, see [Chutes AI's platform](https://chutes.ai/) or your account dashboard.
 
-**Key advantage:** Free API access to multiple LLMs for experimentation and development.
-
 ---
 
 ## Configuration in Roo Code
```

docs/providers/claude-code.md

Lines changed: 0 additions & 21 deletions
```diff
@@ -42,21 +42,6 @@ If this environment variable is set on your system, the `claude` tool may use it
 
 ---
 
-## Key Features
-- **Direct CLI Access**: Uses Anthropic's official Claude CLI tool for model interactions.
-- **Advanced Reasoning**: Full support for Claude's thinking mode and reasoning capabilities.
-- **Cost Transparency**: Shows exact usage costs as reported by the CLI.
-- **Flexible Configuration**: Works with your existing Claude CLI setup.
-
----
-
-## Why Use This Provider
-
-- **No API Keys**: Uses your existing Claude CLI authentication.
-- **Cost Benefits**: Leverage CLI subscription rates and transparent cost reporting.
-- **Latest Features**: Access new Claude capabilities as they're released in the CLI.
-- **Advanced Reasoning**: Full support for Claude's thinking modes.
-
 ## How it Works
 
 The Claude Code provider works by:
@@ -100,12 +85,6 @@ The Claude Code provider supports all Claude models available through the offici
 
 Model availability depends on your Claude CLI subscription and plan. See [Anthropic's CLI documentation](https://docs.anthropic.com/en/docs/claude-code/setup) for details.
 
-**Recommended:**
-- **Sonnet models** - Best balance for most coding tasks (latest recommended)
-- **Opus models** - Better for complex reasoning
-- **Haiku models** - Faster responses when speed matters
-
-
 ---
 
 ## Output Token Limits
```

docs/providers/deepinfra.md

Lines changed: 1 addition & 42 deletions
```diff
@@ -37,52 +37,11 @@ Roo Code automatically fetches all available models from DeepInfra's API.
 
 For the complete, up-to-date model catalog, see [DeepInfra's models page](https://deepinfra.com/models).
 
-**Recommended for Roo Code:**
-- **Qwen Coder models** - Best for programming tasks with large context windows (default: `Qwen/Qwen3-Coder-480B-A35B-Instruct-Turbo`)
-- **Vision-capable models** - Better when you need image understanding
-- **Reasoning models** - Best for complex problem-solving tasks
-
-**Key features:** Prompt caching support, low latency with global edge locations, competitive pricing.
-
 ---
 
 ## Configuration in Roo Code
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "DeepInfra" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your DeepInfra API key into the "DeepInfra API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-   - Models will auto-populate after entering a valid API key
-   - Click "Refresh Models" to update the list
-
----
-
-## Advanced Features
-
-### Prompt Caching
-
-DeepInfra supports prompt caching for eligible models, which:
-- Reduces costs for repeated contexts
-- Improves response times for similar queries
-- Automatically manages cache based on task IDs
-
-### Vision Support
-
-Models with vision capabilities can:
-- Process images alongside text
-- Understand visual content for coding tasks
-- Analyze screenshots and diagrams
-
-### Custom Base URL
-
-For enterprise deployments, you can configure a custom base URL in the advanced settings.
-
----
-
-## Tips and Notes
-
-* **Performance:** DeepInfra offers low latency with automatic load balancing across global locations.
-* **Cost Efficiency:** Competitive pricing with prompt caching to reduce costs for repeated contexts.
-* **Model Variety:** Access to the latest open-source models including specialized coding models.
-* **Context Windows:** Models support context windows up to 256K tokens for large codebases.
-* **Pricing:** Pay-per-use model with no minimums. Check [deepinfra.com](https://deepinfra.com/) for current pricing.
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
```

docs/providers/deepseek.md

Lines changed: 0 additions & 4 deletions
```diff
@@ -36,10 +36,6 @@ Roo Code supports all models available through the DeepSeek API.
 
 For the complete, up-to-date model list, see [DeepSeek's API documentation](https://api-docs.deepseek.com/quick_start/pricing).
 
-**Recommended for Roo Code:**
-- **`deepseek-chat`** - Best for general coding tasks
-- **`deepseek-reasoner`** - Better for complex reasoning and problem-solving
-
 ---
 
 ## Configuration in Roo Code
```

docs/providers/doubao.md

Lines changed: 0 additions & 10 deletions
```diff
@@ -36,16 +36,6 @@ Roo Code supports all Doubao models available through ByteDance's Volcano Engine
 
 For the complete, up-to-date model list, see [Volcano Engine's AI model service](https://www.volcengine.com/).
 
-**Model features:**
-- 128K context window
-- Image input support
-- Prompt caching with 80% discount on cached reads
-
-**Recommended:**
-- General purpose: Standard models for everyday tasks
-- Thinking models: Better for enhanced reasoning
-- Flash models: Faster for speed-optimized workflows
-
 ---
 
 ## Configuration in Roo Code
```

docs/providers/featherless.md

Lines changed: 1 addition & 24 deletions
```diff
@@ -38,34 +38,11 @@ Roo Code automatically fetches all available models from Featherless AI's API.
 
 For the complete, up-to-date model list, see [Featherless AI](https://featherless.ai).
 
-**All models are currently FREE** with no usage costs.
-
-**Recommended for Roo Code:**
-- **DeepSeek R1 models** - Best for complex reasoning with `<think>` tag support (default)
-- **Qwen3 Coder** - Better for specialized code generation tasks
-- **Kimi K2** - Good for balanced instruction-following
-
-**Note:** Most models have ~32K context window and 4K max output. No image support or prompt caching available.
-
 ---
 
 ## Configuration in Roo Code
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "Featherless AI" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your Featherless API key into the "Featherless API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-
----
-
-## Tips and Notes
-
-* **Free Tier:** All models are currently free with no usage costs, making Featherless ideal for experimentation and development.
-* **Model Selection:** Choose models based on your needs:
-    - **DeepSeek R1 models:** Best for complex reasoning and problem-solving tasks
-    - **DeepSeek V3:** General-purpose model for various tasks
-    - **Qwen3 Coder:** Optimized for code generation and programming tasks
-    - **Kimi K2:** Balanced instruction-following model
-    - **GPT-OSS:** Large general-purpose model
-* **OpenAI Compatibility:** Featherless uses an OpenAI-compatible API format for easy integration.
-* **Limitations:** No image support or prompt caching available on any model.
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
```

docs/providers/fireworks.md

Lines changed: 2 additions & 29 deletions
```diff
@@ -44,44 +44,17 @@ Roo Code supports all models available through Fireworks AI's platform.
 
 For the complete, up-to-date model list and specifications, see [Fireworks AI's models page](https://fireworks.ai/models).
 
-**Recommended for Roo Code:**
-- **Kimi K2** - Best for general-purpose coding with agentic capabilities (default)
-- **Qwen3 Coder** - Better for specialized code generation and debugging
-- **DeepSeek R1** - Best for complex reasoning and function calling tasks
-- **Qwen3 235B** - Most cost-effective for general development work
-
 ---
 
 ## Configuration in Roo Code
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "Fireworks AI" from the "API Provider" dropdown.
 3. **Enter API Key:** Paste your Fireworks AI API key into the "Fireworks AI API Key" field.
-4. **Model Selection:** The default model (Kimi K2) is automatically selected. You can change it from the model dropdown if needed.
-
----
-
-## Model Selection Guide
-
-Choose models based on your needs:
-
-| Model | Best For | Context | Price |
-|-------|----------|---------|-------|
-| **Kimi K2** | General tasks, balanced performance | 128K | Mid-range |
-| **Qwen3 235B** | Cost-effective general use | 256K | Budget-friendly |
-| **Qwen3 Coder** | Code generation and debugging | 256K | Mid-range |
-| **DeepSeek R1** | Complex reasoning, function calling | 160K | Premium |
-| **DeepSeek V3** | Strong general performance | 128K | Balanced |
+4. **Select Model:** Choose your desired model from the "Model" dropdown.
 
 ---
 
 ## Tips and Notes
 
-* **Cost-Effective:** Fireworks AI offers significantly lower pricing than proprietary models while maintaining competitive performance.
-* **Large Context Windows:** Most models support 128K-256K tokens, suitable for processing large documents and maintaining extended conversations.
-* **OpenAI Compatibility:** The provider uses an OpenAI-compatible API format with streaming support and usage tracking.
-* **Rate Limits:** Fireworks AI has usage-based rate limits. Monitor your usage in the dashboard and consider upgrading your plan if needed.
-* **Text-Only:** All models are text-only without image support or prompt caching capabilities.
-* **Default Temperature:** Uses 0.5 temperature by default for balanced creativity and consistency.
-* **API Keys:** Stored locally on your machine for security.
-* **Pricing:** See the [Fireworks AI pricing page](https://fireworks.ai/pricing) for current rates. Prices shown are per million tokens.
+* **Pricing:** See the [Fireworks AI pricing page](https://fireworks.ai/pricing) for current rates.
```

0 commit comments