
Commit 669399e

Improve references to LLM providers and routers in documentation (#71)

* Update Requesty page
* Improve references to LLM providers and routers

1 parent 67dd4a4 commit 669399e

File tree

5 files changed: +41 -7 lines changed

docs/advanced-usage/large-projects.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ The context window includes:

5. **Prioritize Recent History:** Roo Code automatically truncates older messages in the conversation history to stay within the context window. Be mindful of this, and re-include important context if needed.

- 6. **Use Prompt Caching (if available):** Some API providers like OpenRouter support "prompt caching". This caches your prompts for use in future tasks and helps reduce the cost and latency of requests.
+ 6. **Use Prompt Caching (if available):** Some API providers like Anthropic, OpenAI, OpenRouter and Requesty support "prompt caching". This caches your prompts for use in future tasks and helps reduce the cost and latency of requests.

## Example: Refactoring a Large File
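To make the prompt-caching tip above concrete, here is a minimal sketch (TypeScript, Node 18+) of asking Anthropic's Messages API to cache a large system block. The model id and `largeProjectContext` are illustrative placeholders; other providers apply caching automatically or expose it through their own options.

```typescript
// Minimal sketch: requesting provider-side caching of a large system block
// via Anthropic's Messages API. Model id and context value are placeholders.
const largeProjectContext = "...thousands of tokens of project docs and code...";

async function askWithCachedContext(question: string): Promise<void> {
  const response = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "", // key from your provider account
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      system: [
        {
          type: "text",
          text: largeProjectContext,
          // Marks this block as cacheable so repeated requests can reuse it,
          // reducing cost and latency on subsequent calls.
          cache_control: { type: "ephemeral" },
        },
      ],
      messages: [{ role: "user", content: question }],
    }),
  });
  console.log(await response.json());
}

askWithCachedContext("Summarize the open TODOs in this project.").catch(console.error);
```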

docs/faq.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ Roo Code can help with a variety of coding tasks, including:

### Is Roo Code free to use?

- The Roo Code extension itself is free and open-source. However, Roo Code relies on external API providers (like [Anthropic](providers/anthropic), [OpenAI](providers/openai), [OpenRouter](providers/openrouter), etc.) for its AI capabilities. These providers typically charge for API usage based on the number of tokens processed. You will need to create an account and obtain an API key from your chosen provider. See [Setting Up Your First AI Provider](getting-started/connecting-api-provider) for details.
+ The Roo Code extension itself is free and open-source. However, Roo Code relies on external API providers (like [Anthropic](providers/anthropic), [OpenAI](providers/openai), [OpenRouter](providers/openrouter), [Requesty](providers/requesty), etc.) for its AI capabilities. These providers typically charge for API usage based on the number of tokens processed. You will need to create an account and obtain an API key from your chosen provider. See [Setting Up Your First AI Provider](getting-started/connecting-api-provider) for details.

### What are the risks of using Roo Code?
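As a rough illustration of the token-based billing mentioned above, the sketch below estimates the cost of a single request from per-million-token rates. The rates are made-up placeholder values; check your chosen provider's pricing page for real numbers.

```typescript
// Rough sketch of how token-based API billing adds up.
// The rates below are placeholder values, not real provider prices.
const INPUT_PRICE_PER_MILLION = 3.0;   // USD per 1M input tokens (example value)
const OUTPUT_PRICE_PER_MILLION = 15.0; // USD per 1M output tokens (example value)

function estimateRequestCost(inputTokens: number, outputTokens: number): number {
  const inputCost = (inputTokens / 1_000_000) * INPUT_PRICE_PER_MILLION;
  const outputCost = (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION;
  return inputCost + outputCost;
}

// e.g. a prompt of ~20k tokens with a ~1k-token reply:
console.log(estimateRequestCost(20_000, 1_000).toFixed(4)); // "0.0750"
```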

docs/getting-started/connecting-api-provider.md

Lines changed: 22 additions & 2 deletions
@@ -14,18 +14,38 @@ Choose one of these options and follow the instructions below to obtain an API k

## Getting Your API Key

- ### Option 1: OpenRouter
+ ### Option 1: Using an LLM router
+
+ LLM routers allow you to use multiple models via a single provider, simplifying cost and API key management.
+
+ #### OpenRouter

1. **Go to the OpenRouter website:** [https://openrouter.ai/](https://openrouter.ai/)
2. **Sign in** with your Google or GitHub account.
3. **Get an API Key:** Go to the [keys page](https://openrouter.ai/keys) and create a key. Copy the key.

- ### Option 2: Anthropic
+ #### Requesty
+
+ 1. **Go to the Requesty website:** [https://requesty.ai/](https://requesty.ai/)
+ 2. **Sign in** with your Google or email account.
+ 3. **Get an API Key:** Go to the [API management page](https://app.requesty.ai/manage-api) and create a key. **Important:** Copy the key immediately, as you won't be able to see it again.
+
+ ### Option 2: Using an LLM provider directly
+
+ If you prefer to use a specific LLM provider directly, that is always an option.
+
+ #### Anthropic

1. **Go to the Anthropic Console:** [https://console.anthropic.com/](https://console.anthropic.com/)
2. **Sign up** for an account or log in.
3. **Create an API Key:** Go to the API keys page (you may need to navigate through the dashboard) and create a new key. **Important:** Copy the key immediately, as you won't be able to see it again.
+ #### OpenAI
+
+ 1. **Go to the OpenAI Platform:** [https://platform.openai.com/](https://platform.openai.com/)
+ 2. **Sign up** for an account or log in.
+ 3. **Create an API Key:** Go to the API keys page (you may need to navigate through the dashboard) and create a new key. **Important:** Copy the key immediately, as you won't be able to see it again.
+

## Configuring Roo Code in VS Code

1. **Open the Roo Code Sidebar:** Click the Roo Code icon (<Codicon name="rocket" />) in the VS Code Activity Bar. You should see the welcome screen.
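Whichever option you pick above, you can sanity-check a freshly created key with a single request before configuring Roo Code. A minimal sketch against OpenRouter's OpenAI-compatible chat completions endpoint (the model id is only an example; direct providers such as OpenAI accept the same request shape at their own base URL):

```typescript
// Minimal sketch: verifying an OpenRouter API key with a single chat request.
// The same OpenAI-style payload works against https://api.openai.com/v1 with
// an OpenAI key; the model id below is only an example.
async function checkApiKey(): Promise<void> {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY ?? ""}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "anthropic/claude-3.5-sonnet",
      messages: [{ role: "user", content: "Reply with OK if you can read this." }],
    }),
  });

  if (!response.ok) {
    // 401 usually means the key was copied incorrectly or has been revoked.
    throw new Error(`Request failed: ${response.status} ${await response.text()}`);
  }
  const data = await response.json();
  console.log(data.choices?.[0]?.message?.content);
}

checkApiKey().catch(console.error);
```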

docs/providers/anthropic.md

Lines changed: 1 addition & 1 deletion
@@ -39,4 +39,4 @@ See [Anthropic's Model Documentation](https://docs.anthropic.com/en/docs/about-c

* **Prompt Caching:** Claude 3 models support [prompt caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching), which can significantly reduce costs and latency for repeated prompts.
* **Context Window:** Claude models have large context windows (200,000 tokens), allowing you to include a significant amount of code and context in your prompts.
* **Pricing:** Refer to the [Anthropic Pricing](https://www.anthropic.com/pricing) page for the latest pricing information.
- * **Rate Limits:** Anthropic has strict rate limits based on [usage tiers](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). If you're repeatedly hitting rate limits, consider contacting Anthropic sales or accessing Claude through a different provider like [OpenRouter](/providers/openrouter).
+ * **Rate Limits:** Anthropic has strict rate limits based on [usage tiers](https://docs.anthropic.com/en/api/rate-limits#requirements-to-advance-tier). If you're repeatedly hitting rate limits, consider contacting Anthropic sales or accessing Claude through a different provider like [OpenRouter](/providers/openrouter) or [Requesty](/providers/requesty).
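Alongside switching providers, a common way to cope with per-tier rate limits is to back off and retry on HTTP 429 responses. A minimal sketch of that pattern (the retry budget and delays are arbitrary illustrative choices, not Anthropic recommendations):

```typescript
// Minimal sketch: retrying a request when it is rate limited (HTTP 429).
// Retry count and backoff values are arbitrary illustrative choices.
async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429 || attempt >= maxRetries) {
      return response;
    }
    // Respect the server's Retry-After header when present, otherwise back off exponentially.
    const retryAfterSeconds = Number(response.headers.get("retry-after")) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}

// Usage: const res = await fetchWithRetry("https://api.anthropic.com/v1/messages", { method: "POST", /* ... */ });
```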

docs/providers/requesty.md

Lines changed: 16 additions & 2 deletions
@@ -4,14 +4,14 @@ sidebar_label: Requesty

# Using Requesty With Roo Code

- Roo Code supports accessing models through the [Requesty](https://www.requesty.ai/) AI platform. Requesty provides a unified API for interacting with various large language models (LLMs), including those from Anthropic and OpenAI, and offers features for testing, deploying, and monitoring LLM applications. It's designed to simplify the process of integrating AI into applications.
+ Roo Code supports accessing models through the [Requesty](https://www.requesty.ai/) AI platform. Requesty provides an easy and optimized API for interacting with 150+ large language models (LLMs).

**Website:** [https://www.requesty.ai/](https://www.requesty.ai/)

## Getting an API Key

1. **Sign Up/Sign In:** Go to the [Requesty website](https://www.requesty.ai/) and create an account or sign in.
- 2. **Get API Key:** You can get an API key from the [API Management ](https://app.requesty.ai/manage-api) section of your Requesty dashboard.
+ 2. **Get API Key:** You can get an API key from the [API Management](https://app.requesty.ai/manage-api) section of your Requesty dashboard.

## Supported Models

@@ -23,3 +23,17 @@ Requesty provides access to a wide range of models. Roo Code will automatically

2. **Select Provider:** Choose "Requesty" from the "API Provider" dropdown.
3. **Enter API Key:** Paste your Requesty API key into the "Requesty API Key" field.
4. **Select Model:** Choose your desired model from the "Model" dropdown.
+
+ ## Tips and Notes
+
+ - **Optimizations**: Requesty offers a range of in-flight cost optimizations to lower your costs.
+ - **Unified and simplified billing**: Unrestricted access to all providers and models, automatic balance top-ups, and more via a single [API key](https://app.requesty.ai/manage-api).
+ - **Cost tracking**: Track cost per model, coding language, changed file, and more via the [Cost dashboard](https://app.requesty.ai/cost-management) or the [Requesty VS Code extension](https://marketplace.visualstudio.com/items?itemName=Requesty.requesty).
+ - **Stats and logs**: See your [coding stats dashboard](https://app.requesty.ai/usage-stats) or go through your [LLM interaction logs](https://app.requesty.ai/logs).
+ - **Fallback policies**: Keep your LLM working for you with fallback policies when providers are down.
+ - **Prompt caching**: Some providers support prompt caching. [Search models with caching](https://app.requesty.ai/router/list).
+
+ ## Relevant resources
+
+ - [Requesty YouTube channel](https://www.youtube.com/@requestyAI)
+ - [Requesty Discord](https://requesty.ai/discord)
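For completeness, a rough sketch of calling the Requesty router directly with the key from the API Management page. The base URL and model id here are assumptions made for illustration; confirm the exact endpoint and model names in Requesty's own documentation before relying on them.

```typescript
// Rough sketch: one request through the Requesty router using an OpenAI-style payload.
// The base URL and model id are assumptions for illustration; check Requesty's docs
// for the exact values before using this.
const REQUESTY_BASE_URL = "https://router.requesty.ai/v1"; // assumed endpoint

async function callViaRequesty(prompt: string): Promise<void> {
  const response = await fetch(`${REQUESTY_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.REQUESTY_API_KEY ?? ""}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "anthropic/claude-3-5-sonnet", // example model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  console.log(await response.json());
}

callViaRequesty("Hello from the Roo Code docs.").catch(console.error);
```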

0 commit comments
