
Commit bf5d4b3

Add LiteLLM self-hosted proxy documentation (#246)
Co-authored-by: hannesrudolph <[email protected]>
1 parent f25c6f3 commit bf5d4b3


docs/providers/litellm.md

Lines changed: 63 additions & 7 deletions
@@ -23,18 +23,63 @@ LiteLLM is a versatile tool that provides a unified interface to over 100 Large
 
 To use LiteLLM with Roo Code, you first need to set up and run a LiteLLM server.
 
-1. **Installation:** Follow the official [LiteLLM installation guide](https://docs.litellm.ai/docs/proxy_server) to install LiteLLM and its dependencies.
-2. **Configuration:** Configure your LiteLLM server with the models you want to use. This typically involves setting API keys for the underlying providers (e.g., OpenAI, Anthropic) in your LiteLLM server's configuration.
-3. **Start the Server:** Run your LiteLLM server. By default, it usually starts on `http://localhost:4000`.
-    * You can also configure an API key for your LiteLLM server itself for added security.
-
-Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on server setup, model configuration, and advanced features.
+### Installation
+
+1. Install LiteLLM with proxy support:
+```bash
+pip install 'litellm[proxy]'
+```
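
A quick way to confirm the install worked, assuming `pip` placed the `litellm` entry point on your `PATH`:

```bash
# Print the installed LiteLLM version; an error here means the CLI is not on PATH
litellm --version
```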
+
+### Configuration
+
+2. Create a configuration file (`config.yaml`) to define your models and providers:
+```yaml
+model_list:
+  # Configure Anthropic models
+  - model_name: claude-3-7-sonnet
+    litellm_params:
+      model: anthropic/claude-3-7-sonnet-20250219
+      api_key: os.environ/ANTHROPIC_API_KEY
+
+  # Configure OpenAI models
+  - model_name: gpt-4o
+    litellm_params:
+      model: openai/gpt-4o
+      api_key: os.environ/OPENAI_API_KEY
+
+  # Configure Azure OpenAI
+  - model_name: azure-gpt-4
+    litellm_params:
+      model: azure/my-deployment-name
+      api_base: https://your-resource.openai.azure.com/
+      api_version: "2023-05-15"
+      api_key: os.environ/AZURE_API_KEY
+```
+
+### Starting the Server
+
+3. Start the LiteLLM proxy server:
+```bash
+# Using configuration file (recommended)
+litellm --config config.yaml
+
+# Or quick start with a single model
+export ANTHROPIC_API_KEY=your-anthropic-key
+litellm --model claude-3-7-sonnet-20250219
+```
+
+4. The proxy will run at `http://0.0.0.0:4000` by default (accessible as `http://localhost:4000`).
+    * You can also configure an API key for your LiteLLM server itself for added security.
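
For that last point, LiteLLM can require a proxy-level key via `general_settings` in the same `config.yaml`; a minimal sketch (the key value is a placeholder):

```yaml
# Require clients (including Roo Code) to present this key as a Bearer token
general_settings:
  master_key: sk-1234  # placeholder; use a strong secret in practice
```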
+
+Refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions on advanced server configuration and features.
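
Before pointing Roo Code at the proxy, it can help to confirm it answers OpenAI-style requests. A minimal smoke test, assuming the example `config.yaml` above and a server on `localhost:4000` (the `Authorization` header only matters if you configured a proxy key):

```bash
# Route a test chat completion through the proxy;
# "claude-3-7-sonnet" is the model_name from the example config.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "claude-3-7-sonnet",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```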
 
 ---
 
 ## Configuration in Roo Code
 
-Once your LiteLLM server is running:
+Once your LiteLLM server is running, you have two options for configuring it in Roo Code:
+
+### Option 1: Using the LiteLLM Provider (Recommended)
 
 1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
 2. **Select Provider:** Choose "LiteLLM" from the "API Provider" dropdown.
@@ -50,6 +95,16 @@ Once your LiteLLM server is running:
     * Use the refresh button to update the model list if you've added new models to your LiteLLM server.
     * If no model is selected, Roo Code defaults to `anthropic/claude-3-7-sonnet-20250219` (this is `litellmDefaultModelId`). Ensure this model (or your desired default) is configured and available on your LiteLLM server.
 
+### Option 2: Using the OpenAI Compatible Provider
+
+Alternatively, you can configure LiteLLM using the "OpenAI Compatible" provider:
+
+1. **Open Roo Code Settings:** Click the gear icon (<Codicon name="gear" />) in the Roo Code panel.
+2. **Select Provider:** Choose "OpenAI Compatible" from the "API Provider" dropdown.
+3. **Enter Base URL:** Input your LiteLLM proxy URL (e.g., `http://localhost:4000`).
+4. **Enter API Key:** Use any string as the API key (e.g., `"sk-1234"`), since LiteLLM handles the actual provider authentication.
+5. **Select Model:** Choose the model name you configured in your `config.yaml` file.
+
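Since this option speaks the standard OpenAI wire format, you can sanity-check the base URL and key outside Roo Code with any OpenAI-style call; for example, listing the models the proxy exposes (a sketch, assuming the server set up above):

```bash
# List available models via the OpenAI-compatible endpoint;
# any placeholder key works unless a proxy master key is set.
curl http://localhost:4000/v1/models \
  -H "Authorization: Bearer sk-1234"
```
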
 <img src="/img/litellm/litellm.png" alt="Roo Code LiteLLM Provider Settings" width="600" />
 
 ---
@@ -82,6 +137,7 @@ Roo Code uses default values for some of these properties if they are not explic
 ## Tips and Notes
 
 * **LiteLLM Server is Key:** The primary configuration for models, API keys for downstream providers (like OpenAI and Anthropic), and other advanced features is managed on your LiteLLM server. Roo Code acts as a client to this server.
+* **Configuration Options:** You can use either the dedicated "LiteLLM" provider (recommended) for automatic model discovery, or the "OpenAI Compatible" provider for simple manual configuration.
 * **Model Availability:** The models available in Roo Code's "Model" dropdown depend entirely on what your LiteLLM server exposes through its `/v1/model/info` endpoint (see the check after this list).
 * **Network Accessibility:** Ensure your LiteLLM server is running and accessible from the machine where VS Code and Roo Code are running (e.g., check firewall rules if not on `localhost`).
 * **Troubleshooting:** If models aren't appearing or requests fail:
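
A first diagnostic for both issues is to query that endpoint directly; a sketch, assuming the proxy from the setup above (include your proxy key if you configured one):

```bash
# Show the models the proxy exposes, with their metadata
curl http://localhost:4000/v1/model/info \
  -H "Authorization: Bearer sk-1234"
```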
