Commit af6b5b4

Update LiteLLM configuration for hosted_vllm provider (#1060)
Even though vLLM exposes an OpenAI-compatible endpoint, LiteLLM only routes to it correctly when the provider is set to hosted_vllm and the model name is prefixed with hosted_vllm/.
1 parent 391d5b4 commit af6b5b4

File tree

1 file changed: +2 -2 lines changed


docs/source/use-litellm-as-backend.mdx

Lines changed: 2 additions & 2 deletions
@@ -63,8 +63,8 @@ vllm serve HuggingFaceH4/zephyr-7b-beta --host 0.0.0.0 --port 8000
 2. Configure LiteLLM to use the local server:
 ```yaml
 model_parameters:
-  provider: "openai"
-  model_name: "HuggingFaceH4/zephyr-7b-beta"
+  provider: "hosted_vllm"
+  model_name: "hosted_vllm/HuggingFaceH4/zephyr-7b-beta"
   base_url: "http://localhost:8000/v1"
   api_key: ""
 ```
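As a quick sanity check (not part of the commit), here is a minimal sketch of calling the same local vLLM server through LiteLLM's Python SDK. The model identifier, base URL, and empty API key mirror the YAML above; the prompt text is just a placeholder.

```python
# Minimal sketch: query a local vLLM server via LiteLLM's hosted_vllm provider.
# Assumes `vllm serve HuggingFaceH4/zephyr-7b-beta --host 0.0.0.0 --port 8000`
# is running and reachable at http://localhost:8000/v1.
import litellm

response = litellm.completion(
    # The hosted_vllm/ prefix tells LiteLLM to use its hosted_vllm provider
    # instead of the generic openai provider.
    model="hosted_vllm/HuggingFaceH4/zephyr-7b-beta",
    api_base="http://localhost:8000/v1",
    api_key="",  # vLLM's OpenAI-compatible server needs no key by default
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Without the hosted_vllm/ prefix, LiteLLM would route the request through its generic OpenAI provider, which is exactly what this commit corrects in the docs.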
