
Commit 9a974c1

Add Docs for LiteLLM
1 parent 6074595 commit 9a974c1

File tree

1 file changed: +65 -0 lines changed


docs/docs/ai/llm.mdx

Lines changed: 65 additions & 0 deletions
@@ -121,3 +121,68 @@ cocoindex.LlmSpec(

You can find the full list of models supported by Anthropic [here](https://docs.anthropic.com/en/docs/about-claude/models/all-models).

### LiteLLM

To use the LiteLLM API, you need to set the environment variable `LITELLM_API_KEY`.
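The config below references the key as `os.environ/LITELLM_API_KEY`, so the variable has to be present in the environment of the process that launches the proxy. A minimal sketch for checking that it is set (just one way you might verify it; nothing CocoIndex-specific):

```python
import os

# config.yml below points LiteLLM at this variable via `os.environ/LITELLM_API_KEY`,
# so it must be set in the shell/environment that starts the proxy.
if not os.environ.get("LITELLM_API_KEY"):
    raise RuntimeError("LITELLM_API_KEY is not set")
```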

#### 1. Install LiteLLM Proxy

```bash
pip install 'litellm[proxy]'
```

#### 2. Create a `config.yml` for LiteLLM

**Example for OpenAI:**

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: openai/*
      api_key: os.environ/LITELLM_API_KEY
```

**Example for DeepSeek:**

First, pull the DeepSeek model with Ollama:

```bash
ollama pull deepseek-r1
```

Then run it if it's not already running:

```bash
ollama run deepseek-r1
```

Then, use this in your `config.yml`:

```yaml
model_list:
  - model_name: "deepseek-r1"
    litellm_params:
      model: "ollama_chat/deepseek-r1"
      api_base: "http://localhost:11434"
```

#### 3. Run LiteLLM Proxy

```bash
litellm --config config.yml
```
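Once the proxy is running, it serves an OpenAI-compatible API on port 4000 by default (the same address used in the spec below). A quick sanity check before wiring it into CocoIndex, sketched with the `openai` Python client and the `deepseek-r1` model name from the config above; the client package and the placeholder API key are assumptions, not requirements of CocoIndex:

```python
from openai import OpenAI

# Point the OpenAI client at the local LiteLLM proxy (default port 4000).
# The api_key is a placeholder; it only matters if the proxy enforces a master key.
client = OpenAI(base_url="http://127.0.0.1:4000", api_key="placeholder")

response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Reply with one word: ready"}],
)
print(response.choices[0].message.content)
```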

#### 4. A Spec for LiteLLM will look like this:

<Tabs>
<TabItem value="python" label="Python" default>

```python
cocoindex.LlmSpec(
    api_type=cocoindex.LlmApiType.LITELLM,
    model="deepseek-r1",
    address="http://127.0.0.1:4000",  # default URL of LiteLLM
)
```

</TabItem>
</Tabs>
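To show where the spec fits, here is a hedged sketch of using it in an LLM-backed extraction step; it assumes `cocoindex.functions.ExtractByLlm` accepts `llm_spec`, `output_type`, and `instruction`, and the `ModuleSummary` dataclass is a placeholder output type, not part of the LiteLLM setup above:

```python
import dataclasses
import cocoindex

@dataclasses.dataclass
class ModuleSummary:
    """Placeholder output type for the extraction step."""
    title: str
    summary: str

# An extraction function backed by the LiteLLM spec above; inside a flow it
# would be applied to a text field with `.transform(...)`.
extract_summary = cocoindex.functions.ExtractByLlm(
    llm_spec=cocoindex.LlmSpec(
        api_type=cocoindex.LlmApiType.LITELLM,
        model="deepseek-r1",
        address="http://127.0.0.1:4000",
    ),
    output_type=ModuleSummary,
    instruction="Summarize the text and extract a short title.",
)
```
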
You can find the full list of models supported by LiteLLM [here](https://docs.litellm.ai/docs/providers).
