@@ -233,35 +233,35 @@ pip install 'litellm[proxy]'

#### 2. Create a `config.yml` for LiteLLM

-**Example for OpenAI:**
+**Example for DeepSeek:**
+
+Use this in your `config.yml`:
+
```yaml
model_list:
-  - model_name: "*"
+  - model_name: deepseek-chat
    litellm_params:
-      model: openai/*
-      api_key: os.environ/LITELLM_API_KEY
+      model: deepseek/deepseek-chat
+      api_key: os.environ/DEEPSEEK_API_KEY
```

-**Example for DeepSeek:**
+You need to set the environment variable `DEEPSEEK_API_KEY` to your DeepSeek API key.

-First, pull the DeepSeek model with Ollama:
-```bash
-ollama pull deepseek-r1
-```
-Then run it if it's not running:
-```bash
-ollama run deepseek-r1
-```
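+For example, in a POSIX shell (the value below is a placeholder, not a real key):
+
+```bash
+# Placeholder value; substitute the API key from your DeepSeek account
+export DEEPSEEK_API_KEY="..."
+```
+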
+**Example for Groq:**
+
+Use this in your `config.yml`:

-Then, use this in your `config.yml`:
```yaml
model_list:
-  - model_name: "deepseek-r1"
+  - model_name: groq-llama-3.3-70b-versatile
    litellm_params:
-      model: "ollama_chat/deepseek-r1"
-      api_base: "http://localhost:11434"
+      model: groq/llama-3.3-70b-versatile
+      api_key: "os.environ/GROQ_API_KEY"
```

+You need to set the environment variable `GROQ_API_KEY` to your Groq API key.
+
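+Similarly, a minimal sketch for the Groq key (placeholder value again):
+
+```bash
+# Placeholder value; substitute the API key from your Groq account
+export GROQ_API_KEY="..."
+```
+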
265265# ### 3. Run LiteLLM Proxy
266266
267267` ` ` bash
@@ -276,7 +276,7 @@ litellm --config config.yml
```python
cocoindex.LlmSpec(
    api_type=cocoindex.LlmApiType.LITE_LLM,
-    model="deepseek-r1",
+    model="deepseek-chat",
    address="http://127.0.0.1:4000", # default url of LiteLLM
)
```
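+
+Once the proxy is running, a quick sanity check is to call its OpenAI-compatible endpoint directly; a minimal sketch, assuming the default port 4000 and the `deepseek-chat` model name configured above:
+
+```bash
+# Send a test chat completion through the LiteLLM proxy
+curl http://127.0.0.1:4000/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "deepseek-chat",
+    "messages": [{"role": "user", "content": "Say hello"}]
+  }'
+```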