Commit 5f46e2d

Merge branch 'RooVetGit:main' into main
2 parents 7d091f4 + 77debe0 commit 5f46e2d

File tree

14 files changed, +316 -17 lines changed


docs/features/mcp/overview.md

Lines changed: 2 additions & 0 deletions
@@ -18,3 +18,5 @@ This documentation is organized into several sections:

  * [**STDIO & SSE Transports**](/features/mcp/server-transports) - Detailed comparison of local (STDIO) and remote (SSE) transport mechanisms with deployment considerations for each approach.
  * [**MCP vs API**](/features/mcp/mcp-vs-api) - Analysis of the fundamental distinction between MCP and REST APIs, explaining how they operate at different layers of abstraction for AI systems.
+ * [**Recommended MCP Servers**](/features/mcp/recommended-mcp-servers) - Curated list of tested and recommended MCP servers for Roo Code, including a setup guide for Context7.
docs/features/mcp/recommended-mcp-servers.md

Lines changed: 123 additions & 0 deletions

@@ -0,0 +1,123 @@
---
title: Recommended MCP Servers
sidebar_label: Recommended MCP Servers
---

# Recommended MCP Servers

While Roo Code can connect to any Model Context Protocol (MCP) server that follows the specification, the community has already built several high-quality servers that work out of the box. This page curates the servers we **actively recommend** and provides step-by-step setup instructions so you can get productive in minutes.

> We'll keep this list up to date. If you maintain a server you'd like us to consider, please open a pull request.

---

## Context7

`Context7` is our first-choice general-purpose MCP server. It ships a collection of highly requested tools, installs with a single command, and has excellent support across every major editor that speaks MCP.

### Why we recommend Context7

* **One-command install** – everything is bundled; there is no local build step.
* **Cross-platform** – runs on macOS, Windows, Linux, or inside Docker.
* **Actively maintained** – frequent updates from the Upstash team.
* **Rich toolset** – database access, web search, text utilities, and more.
* **Open source** – released under the MIT license.

---

## Installing Context7 in Roo Code

There are two common ways to register the server:

1. **Global configuration** – available in every workspace.
2. **Project-level configuration** – checked into version control alongside your code.

We'll cover both below.

### 1. Global configuration

1. Open the Roo Code **MCP settings** panel by clicking the <Codicon name="server" /> icon.
2. Click **Edit Global MCP**.
3. Paste the JSON below inside the `mcpServers` object and save.

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

**Windows (cmd.exe) variant** – on Windows you may need to invoke `npx` through `cmd.exe`:

```json
{
  "mcpServers": {
    "context7": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

<img src="/img/recommended-mcp-servers/context7-global-setup-fixed.png" alt="Adding Context7 to the global MCP settings" width="600" />
### 2. Project-level configuration

If you prefer to commit the configuration to your repository, create a file called `.roo/mcp.json` at the project root and add the same snippet:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

**Windows (cmd.exe) variant**

```json
{
  "mcpServers": {
    "context7": {
      "type": "stdio",
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

<img src="/img/recommended-mcp-servers/context7-project-setup-fixed.png" alt="Adding Context7 to a project-level MCP file" width="600" />

> When both global and project files define a server with the same name, **the project configuration wins**.
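The precedence note above can be pictured as a small dictionary merge. The sketch below is illustrative only — it is not Roo Code's actual implementation — but it shows the observable behavior: project-level entries override global ones with the same server name.

```python
import json

# Hypothetical sketch of MCP config precedence; not Roo Code's actual code.
GLOBAL_CFG = """{"mcpServers": {"context7": {"command": "npx",
  "args": ["-y", "@upstash/context7-mcp@latest"]}}}"""

PROJECT_CFG = """{"mcpServers": {"context7": {"command": "cmd",
  "args": ["/c", "npx", "-y", "@upstash/context7-mcp@latest"]}}}"""

def merge_mcp_servers(global_json: str, project_json: str) -> dict:
    """Project-level servers override global ones with the same name."""
    merged = json.loads(global_json)["mcpServers"].copy()
    merged.update(json.loads(project_json)["mcpServers"])  # project wins
    return merged

servers = merge_mcp_servers(GLOBAL_CFG, PROJECT_CFG)
print(servers["context7"]["command"])  # prints "cmd" – the project entry wins
```

Here the project file's `cmd` wrapper wins over the global `npx` entry, which is what you want when a repository needs Windows-specific launch settings.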
---

## Verifying the installation

1. Make sure **Enable MCP Servers** is turned on in the MCP settings panel.
2. You should now see **Context7** listed. Click the <Codicon name="activate" /> toggle to start it if it isn't already running.
3. Roo Code will prompt you the first time a Context7 tool is invoked. Approve the request to continue.

<img src="/img/recommended-mcp-servers/context7-running-fixed.png" alt="Context7 running in Roo Code" width="400" />

---

## Next steps

* Browse the list of tools shipped with Context7 in the server pane.
* Configure **Always allow** for the tools you use most to streamline your workflow.
* Want to expose your own APIs? Check out the [MCP server creation guide](/features/mcp/using-mcp-in-roo#enabling-or-disabling-mcp-server-creation).

Looking for other servers? Watch this page – we'll add more recommendations soon!

docs/providers/litellm.md

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
---
sidebar_label: LiteLLM
---

# Using LiteLLM With Roo Code

LiteLLM is a versatile tool that provides a unified interface to over 100 Large Language Models (LLMs) through an OpenAI-compatible API. It lets you run a local server that proxies requests to various model providers or serves local models, all accessible through a consistent API endpoint.

**Website:** [https://litellm.ai/](https://litellm.ai/) (main project) and [https://docs.litellm.ai/](https://docs.litellm.ai/) (documentation)

## Key Benefits

* **Unified API:** Access a wide range of LLMs (from OpenAI, Anthropic, Cohere, HuggingFace, etc.) through a single, OpenAI-compatible API.
* **Local Deployment:** Run your own LiteLLM server locally, giving you more control over model access and potentially reducing latency.
* **Simplified Configuration:** Manage credentials and model configurations in one place (your LiteLLM server) and let Roo Code connect to it.
* **Cost Management:** LiteLLM offers features for tracking costs across different models and providers.

## Setting Up Your LiteLLM Server

To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.

1. **Installation:** Follow the official [LiteLLM installation guide](https://docs.litellm.ai/docs/proxy_server) to install LiteLLM and its dependencies.
2. **Configuration:** Configure your LiteLLM server with the models you want to use. This typically involves setting API keys for the underlying providers (e.g., OpenAI, Anthropic) in your LiteLLM server's configuration.
3. **Start the Server:** Run your LiteLLM server. By default, it starts on `http://localhost:4000`.
    * You can also configure an API key for the LiteLLM server itself for added security.

Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on server setup, model configuration, and advanced features.
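For step 2, a minimal LiteLLM `config.yaml` might look like the fragment below. This is an illustrative sketch only — the authoritative schema lives in the LiteLLM docs, and the model name and environment variable here are assumptions:

```yaml
model_list:
  - model_name: claude-3-7-sonnet              # the name clients will request
    litellm_params:
      model: anthropic/claude-3-7-sonnet-20250219
      api_key: os.environ/ANTHROPIC_API_KEY    # read from the environment
```

You would then start the proxy with something like `litellm --config config.yaml`, which serves the OpenAI-compatible API on port 4000 by default.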
## Configuration in Roo Code

Once your LiteLLM server is running:

1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
2. **Select Provider:** Choose "LiteLLM" from the "API Provider" dropdown.
3. **Enter Base URL:**
    * Input the URL of your LiteLLM server.
    * Defaults to `http://localhost:4000` if left blank.
4. **Enter API Key (Optional):**
    * If you've configured an API key for your LiteLLM server, enter it here.
    * If your LiteLLM server doesn't require an API key, Roo Code will use a default dummy key (`"dummy-key"`), which should work fine.
5. **Select Model:**
    * Roo Code will attempt to fetch the list of available models from your LiteLLM server by querying the `${baseUrl}/v1/model/info` endpoint.
    * The models displayed in the dropdown are sourced from this endpoint.
    * If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.

<img src="/img/litellm/litellm.png" alt="Roo Code LiteLLM Provider Settings" width="600" />

## How Roo Code Fetches and Interprets Model Information

When you configure the LiteLLM provider, Roo Code interacts with your LiteLLM server to get details about the available models:

* **Model Discovery:** Roo Code makes a GET request to `${baseUrl}/v1/model/info` on your LiteLLM server. If an API key is provided in Roo Code's settings, it's included in the `Authorization: Bearer ${apiKey}` header.
* **Model Properties:** For each model reported by your LiteLLM server, Roo Code extracts and interprets the following:
    * `model_name`: The identifier for the model.
    * `maxTokens`: Maximum output tokens. Defaults to `8192` if not specified by LiteLLM.
    * `contextWindow`: Maximum context tokens. Defaults to `200000` if not specified by LiteLLM.
    * `supportsImages`: Determined from `model_info.supports_vision` provided by LiteLLM.
    * `supportsPromptCache`: Determined from `model_info.supports_prompt_caching` provided by LiteLLM.
    * `inputPrice` / `outputPrice`: Calculated from `model_info.input_cost_per_token` and `model_info.output_cost_per_token` from LiteLLM.
    * `supportsComputerUse`: Set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.5-sonnet`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).

Roo Code uses default values for some of these properties if they are not explicitly provided by your LiteLLM server's `/model/info` endpoint for a given model. The defaults are:
* `maxTokens`: 8192
* `contextWindow`: 200,000
* `supportsImages`: `true`
* `supportsComputerUse`: `true` (for the default model ID)
* `supportsPromptCache`: `true`
* `inputPrice`: 3.0 (USD per million tokens)
* `outputPrice`: 15.0 (USD per million tokens)
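The fetch-and-interpret flow above can be sketched in a few lines of Python — a simplified illustration, not Roo Code's actual source. The `max_tokens`/`max_input_tokens` keys in the sample payload are assumptions about LiteLLM's `model_info` shape; the cost and capability fields follow the bullets above.

```python
# Simplified sketch of interpreting /v1/model/info data.
# Not Roo Code's actual implementation; field names follow the text above,
# and max_tokens/max_input_tokens are assumed model_info keys.
SAMPLE_RESPONSE = {
    "data": [
        {
            "model_name": "claude-3-7-sonnet",
            "litellm_params": {"model": "anthropic/claude-3-7-sonnet-20250219"},
            "model_info": {
                "supports_vision": True,
                "supports_prompt_caching": True,
                "input_cost_per_token": 0.000003,   # USD per token
                "output_cost_per_token": 0.000015,
            },
        }
    ]
}

def interpret(entry: dict) -> dict:
    """Map one /model/info entry to client-side model properties."""
    info = entry.get("model_info", {})
    return {
        "id": entry["model_name"],
        "maxTokens": info.get("max_tokens", 8192),          # default 8192
        "contextWindow": info.get("max_input_tokens", 200_000),
        "supportsImages": info.get("supports_vision", False),
        "supportsPromptCache": info.get("supports_prompt_caching", False),
        # Convert per-token cost to USD per million tokens for display.
        "inputPrice": info.get("input_cost_per_token", 0) * 1_000_000,
        "outputPrice": info.get("output_cost_per_token", 0) * 1_000_000,
    }

model = interpret(SAMPLE_RESPONSE["data"][0])
print(model["inputPrice"], model["outputPrice"])  # approximately 3.0 and 15.0
```

Because the sample entry omits `max_tokens` and `max_input_tokens`, the sketch falls back to the same defaults listed above (8192 and 200,000).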
## Tips and Notes

* **LiteLLM Server is Key:** The primary configuration for models, API keys for downstream providers (like OpenAI, Anthropic), and other advanced features are managed on your LiteLLM server. Roo Code acts as a client to this server.
* **Model Availability:** The models available in Roo Code's "Model" dropdown depend entirely on what your LiteLLM server exposes through its `/v1/model/info` endpoint.
* **Network Accessibility:** Ensure your LiteLLM server is running and accessible from the machine where VS Code and Roo Code are running (e.g., check firewall rules if not on `localhost`).
* **Troubleshooting:** If models aren't appearing or requests fail:
    * Verify your LiteLLM server is running and configured correctly.
    * Check the LiteLLM server logs for errors.
    * Ensure the Base URL in Roo Code settings matches your LiteLLM server's address.
    * Confirm any API key required by your LiteLLM server is correctly entered in Roo Code.
* **Computer Use Models:** The `supportsComputerUse` flag in Roo Code is primarily relevant for certain Anthropic models known to perform well with tool-use and function-calling tasks. If you are routing other models through LiteLLM, this flag might not be set automatically unless the underlying model ID matches the specific Anthropic ones Roo Code recognizes.

By leveraging LiteLLM, you can significantly expand the range of models accessible to Roo Code while centralizing their management.

docs/tips-and-tricks.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ A collection of quick tips to help you get the most out of Roo Code.

  - Drag Roo Code to the [Secondary Sidebar](https://code.visualstudio.com/api/ux-guidelines/sidebars#secondary-sidebar) so you can see the Explorer, Search, Source Control, etc.
  <img src="/img/right-column-roo.gif" alt="Put Roo on the Right Column" width="900" />
  - Once you have Roo Code in a separate sidebar from the file explorer, you can drag files from the explorer into the chat window (and even multiple at once). Just make sure to hold down the shift key after you start dragging the files.
- - If you're not using [MCP](/features/mcp/overview), turn it off in the <Codicon name="notebook" /> Prompts tab to significantly cut down the size of the system prompt.
+ - If you're not using [MCP](/features/mcp/overview), turn it off in the <Codicon name="server" /> MCP Servers tab to significantly cut down the size of the system prompt.
  - To keep your [custom modes](/features/custom-modes) on track, limit the types of files that they're allowed to edit.
  - If you hit the dreaded `input length and max tokens exceed context limit` error, you can recover by deleting a message, rolling back to a previous checkpoint, or switching over to a model with a long context window like Gemini for a message.
  - In general, be thoughtful about your `Max Tokens` setting for thinking models. Every token you allocate to that takes away from space available to store conversation history. Consider only using high `Max Tokens` / `Max Thinking Tokens` settings with modes like Architect and Debug, and keeping Code mode at 16k max tokens or less.

docs/update-notes/index.md

Lines changed: 4 additions & 1 deletion
@@ -4,7 +4,10 @@ This section contains notes about recent updates to Roo Code, listed by version

  ## Version 3.16

- * [3.16](/update-notes/v3.16) (2025-05-06)
+ * [3.16.3](/update-notes/v3.16.3) (2025-05-08)
+ * [3.16.2](/update-notes/v3.16.2) (2025-05-07)
+ * [3.16.1](/update-notes/v3.16.1) (2025-05-07)
+ * [3.16](/update-notes/v3.16) (2025-05-08)

  ## Version 3.15
docs/update-notes/v3.16.1.md

Lines changed: 29 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,29 @@
1+
# Roo Code 3.16.1 Release Notes (2025-05-07)
2+
3+
This release introduces LiteLLM provider support for more AI backend options, improved stability by detecting and preventing tool execution loops, Dutch localization, enhanced telemetry by including the editor name, a UI migration to Tailwind CSS for better consistency (temporarily reverted in v3.16.3), a fix for responsive footer buttons, updated evaluation defaults, and the latest dependency versions for improved security and performance.
4+
5+
## New Provider: LiteLLM Integration
6+
We've introduced support for the [LiteLLM provider](/providers/litellm), simplifying access to a wide array of language models. This new integration offers:
7+
* **Automatic Model Discovery**: Roo Code automatically fetches and lists available models from your LiteLLM server. This means users no longer need to manually configure each LiteLLM model within Roo Code, streamlining setup and making it easier to switch between models served by LiteLLM.
8+
* **Simplified Access to 100+ LLMs**: Leverage LiteLLM's ability to provide a unified OpenAI-compatible API for various underlying models.
9+
10+
<img src="/img/litellm/litellm.png" alt="Roo Code LiteLLM Provider Settings" width="600" />
11+
12+
This new provider significantly improves the ease of using diverse models through LiteLLM. For more details on setting up LiteLLM, see the [LiteLLM provider documentation](/providers/litellm).
13+
14+
## Tool Loop Detection
15+
We've implemented a mechanism to detect and prevent tool execution loops, enhancing stability and user experience:
16+
* **Prevents Infinite Loops**: The system now identifies when a tool might be caught in a repetitive cycle and intelligently intervenes by prompting for user input.
17+
* **Improved Stability**: Reduces the risk of the application becoming unresponsive or stuck due to unintentional tool looping.
18+
19+
This ensures a smoother, more reliable, and frustration-free interaction with the extension's tools.
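Conceptually, this kind of safeguard amounts to noticing when the same tool is invoked with the same arguments several times in a row. The toy sketch below illustrates the idea; it is not Roo Code's actual algorithm, and the threshold of three is an arbitrary illustrative choice.

```python
# Toy sketch of tool-loop detection: flag N identical consecutive calls.
# Not Roo Code's actual algorithm; the threshold is an illustrative choice.
from collections import deque

class LoopDetector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.recent: deque = deque(maxlen=threshold)

    def record(self, tool_name: str, params: dict) -> bool:
        """Return True if the same call has now repeated `threshold` times."""
        call = (tool_name, tuple(sorted(params.items())))
        self.recent.append(call)
        return (len(self.recent) == self.threshold
                and len(set(self.recent)) == 1)

detector = LoopDetector()
for _ in range(3):
    looping = detector.record("read_file", {"path": "src/app.ts"})
print(looping)  # True after three identical calls
```

A real implementation would then pause and ask the user how to proceed instead of letting the cycle continue.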
## QOL Improvements
* **Dutch Localization Added**: Added Dutch language support, allowing Dutch-speaking users to use the extension in their native language. (thanks Githubguy132010!)
* **Tailwind CSS Migration**: Migrated the UI to Tailwind CSS for a more polished and cohesive interface. (Note: this was reverted in v3.16.3.)
* **Responsive Footer Buttons in About Section**: Fixed the layout of footer buttons in the About section, ensuring they wrap correctly on narrow screens. (thanks ecmasx!)

## Misc Improvements
* **Editor Name in Telemetry**: Added the editor name to telemetry data, helping us understand which editors are most used and enabling more targeted improvements for different environments.
* **Improved Evaluation Defaults and Setup**: Updated evaluation defaults and improved the setup process, making the evaluation environment easier and more reliable to configure.
* **Updated Dependencies**: Updated dependencies to their latest versions for improved security and performance.

docs/update-notes/v3.16.2.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@

# Roo Code 3.16.2 Release Notes (2025-05-07)

This release includes clearer XML tool use formatting instructions for easier understanding and improved error handling for a more robust experience.

## Tool Use Improvements
* **Clarified XML Tool Formatting Instructions**: Documentation and prompts now provide clearer examples of how to format XML tool use, preventing `<tool_name>` and other tool use errors.
  * This fix is largely targeted at issues faced with Gemini 2.5 when using tools.
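As an illustration of why formatting matters, a tool call is only usable if it parses as well-formed XML. The sketch below uses Roo Code's documented `read_file`/`path` tag style for the example strings, but the well-formedness check itself is a generic sketch, not Roo Code's actual parser.

```python
# Sketch: checking that a tool-use block is well-formed XML.
# The read_file/path tags follow Roo Code's documented tool-call style;
# the validation itself is generic, not Roo Code's actual parser.
import xml.etree.ElementTree as ET

GOOD = "<read_file><path>src/app.ts</path></read_file>"
BAD = "<read_file><path>src/app.ts</read_file>"   # unclosed <path>

def is_well_formed(block: str) -> bool:
    try:
        ET.fromstring(block)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(GOOD), is_well_formed(BAD))  # True False
```

A model that emits the second form (or a literal `<tool_name>` placeholder) produces a call that cannot be parsed, which is exactly the class of error the clarified instructions target.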
## Misc Improvements
* **Improved Error Handling for Streaming**: Fixed an issue where the app could get stuck waiting for a response. The app now recovers gracefully from errors during streaming, reducing the likelihood of unresponsive behavior and improving reliability. (thanks monkeyDluffy6017!)

docs/update-notes/v3.16.3.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@

# Roo Code 3.16.3 Release Notes (2025-05-08)

This release temporarily reverts the Tailwind CSS migration to restore UI stability and adds Elixir file extension support to the language parser for better code analysis.

## Misc Improvements
* **Revert Tailwind Migration**: Restored the previous user interface by reverting the Tailwind CSS migration, returning the UI to a familiar and stable state and resolving the layout and style issues the migration introduced.
* **Add Elixir File Support in Language Parser**: Added support for Elixir (`.ex`, `.exs`) files in the language parser, expanding language support and enabling better code analysis for Elixir projects. (thanks pfitz!)
