
Commit f2ee3d4

Move LiteLLM documentation to proper provider file
- Enhanced docs/providers/litellm.md with comprehensive setup instructions
- Added step-by-step installation and configuration examples
- Included multiple provider configurations (Anthropic, OpenAI, Azure)
- Added two configuration options (LiteLLM provider vs OpenAI Compatible)
- Removed LiteLLM content from docs/getting-started/connecting-api-provider.md
- Maintains clean separation of concerns in documentation structure
1 parent aee8a21 commit f2ee3d4

2 files changed (+63, -55 lines)


docs/getting-started/connecting-api-provider.md

Lines changed: 0 additions & 48 deletions
@@ -29,49 +29,6 @@ LLM routers let you access multiple AI models with one API key, simplifying cost
 
 *OpenRouter dashboard with "Create key" button. Name your key and copy it after creation.*
 
-#### LiteLLM
-
-[LiteLLM](https://litellm.ai/) is an open-source LLM gateway that provides access to 100+ AI models through a unified OpenAI-compatible API. Set up a self-hosted proxy server to route requests to multiple providers through a single endpoint.
-
-1. Install LiteLLM: `pip install 'litellm[proxy]'`
-
-2. Create a configuration file (`config.yaml`) to define your models:
-   ```yaml
-   model_list:
-     # Configure multiple Anthropic models
-     - model_name: claude-3-7-sonnet
-       litellm_params:
-         model: anthropic/claude-3-7-sonnet-20250219
-         api_key: os.environ/ANTHROPIC_API_KEY
-
-     # Configure OpenAI models
-     - model_name: gpt-4o
-       litellm_params:
-         model: openai/gpt-4o
-         api_key: os.environ/OPENAI_API_KEY
-
-     # Configure Azure OpenAI
-     - model_name: azure-gpt-4
-       litellm_params:
-         model: azure/my-deployment-name
-         api_base: https://your-resource.openai.azure.com/
-         api_version: "2023-05-15"
-         api_key: os.environ/AZURE_API_KEY
-   ```
-
-3. Start the LiteLLM proxy server:
-   ```bash
-   # Using configuration file (recommended)
-   litellm --config config.yaml
-
-   # Or quick start with a single model
-   export ANTHROPIC_API_KEY=your-anthropic-key
-   litellm --model claude-3-7-sonnet-20250219
-   ```
-
-4. The proxy will run at `http://0.0.0.0:4000` by default
-
-
 #### Requesty
 
 1. Go to [requesty.ai](https://requesty.ai/)
@@ -121,11 +78,6 @@ Once you have your API key:
 4. Select your model:
    - For **OpenRouter**: select `anthropic/claude-3.7-sonnet` ([model details](https://openrouter.ai/anthropic/claude-3.7-sonnet))
    - For **Anthropic**: select `claude-3-7-sonnet-20250219` ([model details](https://www.anthropic.com/pricing#anthropic-api))
-   - For **LiteLLM**:
-     - Set the API provider to "OpenAI Compatible"
-     - Enter your proxy URL (e.g., `http://localhost:4000`)
-     - Use any string as the API key (e.g., "sk-1234")
-     - Select the model name you configured in your `config.yaml`
 
 :::info Model Selection Advice
 We strongly recommend **Claude 3.7 Sonnet** for the best experience—it generally "just works" out of the box. Roo Code has been extensively optimized for this model's capabilities and instruction-following behavior.

docs/providers/litellm.md

Lines changed: 63 additions & 7 deletions
@@ -23,18 +23,63 @@ LiteLLM is a versatile tool that provides a unified interface to over 100 Large
 
 To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 
-1. **Installation:** Follow the official [LiteLLM installation guide](https://docs.litellm.ai/docs/proxy_server) to install LiteLLM and its dependencies.
-2. **Configuration:** Configure your LiteLLM server with the models you want to use. This typically involves setting API keys for the underlying providers (e.g., OpenAI, Anthropic) in your LiteLLM server's configuration.
-3. **Start the Server:** Run your LiteLLM server. By default, it usually starts on `http://localhost:4000`.
-    * You can also configure an API key for your LiteLLM server itself for added security.
-
-Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on server setup, model configuration, and advanced features.
+### Installation
+
+1. Install LiteLLM with proxy support:
+   ```bash
+   pip install 'litellm[proxy]'
+   ```
+
+### Configuration
+
+2. Create a configuration file (`config.yaml`) to define your models and providers:
+   ```yaml
+   model_list:
+     # Configure Anthropic models
+     - model_name: claude-3-7-sonnet
+       litellm_params:
+         model: anthropic/claude-3-7-sonnet-20250219
+         api_key: os.environ/ANTHROPIC_API_KEY
+
+     # Configure OpenAI models
+     - model_name: gpt-4o
+       litellm_params:
+         model: openai/gpt-4o
+         api_key: os.environ/OPENAI_API_KEY
+
+     # Configure Azure OpenAI
+     - model_name: azure-gpt-4
+       litellm_params:
+         model: azure/my-deployment-name
+         api_base: https://your-resource.openai.azure.com/
+         api_version: "2023-05-15"
+         api_key: os.environ/AZURE_API_KEY
+   ```
+
+### Starting the Server
+
+3. Start the LiteLLM proxy server:
+   ```bash
+   # Using configuration file (recommended)
+   litellm --config config.yaml
+
+   # Or quick start with a single model
+   export ANTHROPIC_API_KEY=your-anthropic-key
+   litellm --model claude-3-7-sonnet-20250219
+   ```
+
+4. The proxy will run at `http://0.0.0.0:4000` by default (accessible as `http://localhost:4000`).
+    * You can also configure an API key for your LiteLLM server itself for added security.
+
+Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on advanced server configuration and features.
 
 ---
 
 ## Configuration in Roo Code
 
-Once your LiteLLM server is running:
+Once your LiteLLM server is running, you have two options for configuring it in Roo Code:
+
+### Option 1: Using the LiteLLM Provider (Recommended)
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "LiteLLM" from the "API Provider" dropdown.
@@ -50,6 +95,16 @@ Once your LiteLLM server is running:
 * Use the refresh button to update the model list if you've added new models to your LiteLLM server.
 * If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.
 
+### Option 2: Using OpenAI Compatible Provider
+
+Alternatively, you can configure LiteLLM using the "OpenAI Compatible" provider:
+
+1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
+2. **Select Provider:** Choose "OpenAI Compatible" from the "API Provider" dropdown.
+3. **Enter Base URL:** Input your LiteLLM proxy URL (e.g., `http://localhost:4000`).
+4. **Enter API Key:** Use any string as the API key (e.g., `"sk-1234"`) since LiteLLM handles the actual provider authentication.
+5. **Select Model:** Choose the model name you configured in your `config.yaml` file.
+
 <img src="/img/litellm/litellm.png" alt="Roo Code LiteLLM Provider Settings" width="600" />
 
 ---
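To sanity-check the Option 2 setup described in these added lines, a minimal sketch follows, assuming the example `config.yaml` above is loaded and the proxy listens on `http://localhost:4000`; the placeholder key `sk-1234` is accepted because LiteLLM, not the client, authenticates with the downstream provider:

```bash
# Illustrative smoke test against a local LiteLLM proxy (assumptions: example
# config.yaml from above, proxy on localhost:4000, no server key configured).
# The "model" value must match a model_name entry from config.yaml.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "claude-3-7-sonnet",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

If this request succeeds, the same base URL, key, and model name should work when entered in Roo Code's "OpenAI Compatible" settings.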
@@ -82,6 +137,7 @@ Roo Code uses default values for some of these properties if they are not explic
 ## Tips and Notes
 
 * **LiteLLM Server is Key:** The primary configuration for models, API keys for downstream providers (like OpenAI, Anthropic), and other advanced features are managed on your LiteLLM server. Roo Code acts as a client to this server.
+* **Configuration Options:** You can use either the dedicated "LiteLLM" provider (recommended) for automatic model discovery, or the "OpenAI Compatible" provider for simple manual configuration.
 * **Model Availability:** The models available in Roo Code's "Model" dropdown depend entirely on what your LiteLLM server exposes through its `/v1/model/info` endpoint.
 * **Network Accessibility:** Ensure your LiteLLM server is running and accessible from the machine where VS Code and Roo Code are running (e.g., check firewall rules if not on `localhost`).
 * **Troubleshooting:** If models aren't appearing or requests fail:
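For the model-availability and troubleshooting notes above, a quick way to see exactly what the proxy exposes is to query the `/v1/model/info` endpoint the docs reference; a minimal sketch, assuming the proxy runs on `http://localhost:4000` (add an `Authorization: Bearer <your-server-key>` header if you configured a server key):

```bash
# Lists the models the LiteLLM proxy exposes, with their provider metadata.
# Roo Code's "Model" dropdown is populated from this same /v1/model/info data.
curl http://localhost:4000/v1/model/info
```

If a model you expect is missing from this response, correct its `config.yaml` entry on the proxy side before retrying in Roo Code.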
