LiteLLM is a versatile tool that provides a unified interface to over 100 Large Language Models (LLMs).
To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
### Installation

1. Install LiteLLM with proxy support:

```bash
pip install 'litellm[proxy]'
```
### Configuration

2. Create a configuration file (`config.yaml`) to define your models and providers:
```yaml
model_list:
  # Configure Anthropic models
  - model_name: claude-3-7-sonnet
    litellm_params:
      model: anthropic/claude-3-7-sonnet-20250219
      api_key: os.environ/ANTHROPIC_API_KEY

  # Configure OpenAI models
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  # Configure Azure OpenAI
  - model_name: azure-gpt-4
    litellm_params:
      model: azure/my-deployment-name
      api_base: https://your-resource.openai.azure.com/
      api_version: "2023-05-15"
      api_key: os.environ/AZURE_API_KEY
```
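The `api_key: os.environ/...` entries tell LiteLLM to read each key from the server's environment at startup, so export those variables before launching the proxy. A minimal sketch with placeholder values:

```bash
# Placeholders; substitute your real provider keys.
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export AZURE_API_KEY="..."
```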
### Starting the Server

3. Start the LiteLLM proxy server:
```bash
# Using configuration file (recommended)
litellm --config config.yaml

# Or quick start with a single model
export ANTHROPIC_API_KEY=your-anthropic-key
litellm --model claude-3-7-sonnet-20250219
```
4. The proxy will run at `http://0.0.0.0:4000` by default (accessible as `http://localhost:4000`).
   * You can also configure an API key for your LiteLLM server itself for added security.
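Once the server is up, a quick sanity check from the same machine can confirm that it is serving models. A minimal sketch, assuming the default port; the second variant applies only if you configured a key for the proxy itself (e.g., via LiteLLM's master key setting):

```bash
# List the models the proxy exposes via its OpenAI-compatible API.
curl http://localhost:4000/v1/models

# If the proxy is protected by its own API key, pass it as a bearer token.
curl -H "Authorization: Bearer sk-1234" http://localhost:4000/v1/models
```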
Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on advanced server configuration and features.
---

## Configuration in Roo Code
Once your LiteLLM server is running, you have two options for configuring it in Roo Code:

### Option 1: Using the LiteLLM Provider (Recommended)
1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
2. **Select Provider:** Choose "LiteLLM" from the "API Provider" dropdown.
* Use the refresh button to update the model list if you've added new models to your LiteLLM server.
* If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.
### Option 2: Using OpenAI Compatible Provider

Alternatively, you can configure LiteLLM using the "OpenAI Compatible" provider:
1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
2. **Select Provider:** Choose "OpenAI Compatible" from the "API Provider" dropdown.
3. **Enter Base URL:** Input your LiteLLM proxy URL (e.g., `http://localhost:4000`).
4. **Enter API Key:** Use any string as the API key (e.g., `"sk-1234"`) since LiteLLM handles the actual provider authentication.
5. **Select Model:** Choose the model name you configured in your `config.yaml` file.
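This works because the proxy speaks the OpenAI API, so any OpenAI-compatible client can issue the same kind of request Roo Code sends. An illustrative sketch, assuming the `claude-3-7-sonnet` model name from the `config.yaml` above and a local proxy:

```bash
# Send an OpenAI-style chat completion request to the LiteLLM proxy.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "claude-3-7-sonnet",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
```

If this returns a completion, the "OpenAI Compatible" provider configured with the same base URL, key, and model name should work as well.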
## Tips and Notes

* **LiteLLM Server is Key:** The primary configuration for models, API keys for downstream providers (like OpenAI, Anthropic), and other advanced features are managed on your LiteLLM server. Roo Code acts as a client to this server.
* **Configuration Options:** You can use either the dedicated "LiteLLM" provider (recommended) for automatic model discovery, or the "OpenAI Compatible" provider for simple manual configuration.
* **Model Availability:** The models available in Roo Code's "Model" dropdown depend entirely on what your LiteLLM server exposes through its `/v1/model/info` endpoint.
* **Network Accessibility:** Ensure your LiteLLM server is running and accessible from the machine where VS Code and Roo Code are running (e.g., check firewall rules if not on `localhost`).
* **Troubleshooting:** If models aren't appearing or requests fail, start by querying the proxy directly (see the sketch below).
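A first diagnostic pass, as a sketch (assumes a local proxy on port 4000; `sk-1234` stands in for whatever key, if any, your proxy expects):

```bash
# Confirm the proxy is reachable and see exactly which models it reports;
# Roo Code's model dropdown is populated from this endpoint.
curl -H "Authorization: Bearer sk-1234" http://localhost:4000/v1/model/info
```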