
Commit 65e0909

[doc] add download model tips (#16389)

Signed-off-by: reidliu41 <[email protected]>
Co-authored-by: reidliu41 <[email protected]>

1 parent c70cf0f commit 65e0909

File tree

1 file changed: +29 -0 lines changed


docs/source/models/supported_models.md

Lines changed: 29 additions & 0 deletions
@@ -160,6 +160,35 @@ If vLLM successfully returns text (for generative models) or hidden states (for
Otherwise, please refer to [Adding a New Model](#new-model) for instructions on how to implement your model in vLLM.
Alternatively, you can [open an issue on GitHub](https://github.com/vllm-project/vllm/issues/new/choose) to request vLLM support.

#### Using a proxy

Here are some tips for loading/downloading models from Hugging Face through a proxy:

- Set the proxy globally for your session (or set it in your shell profile file):

  ```shell
  export http_proxy=http://your.proxy.server:port
  export https_proxy=http://your.proxy.server:port
  ```

- Set the proxy for just the current command:

  ```shell
  https_proxy=http://your.proxy.server:port huggingface-cli download <model_name>

  # or use the vllm command directly
  https_proxy=http://your.proxy.server:port vllm serve <model_name> --disable-log-requests
  ```

- Set the proxy in the Python interpreter; the variables must be set before the first download is triggered (see the sketch after this list):

  ```python
  import os

  os.environ['http_proxy'] = 'http://your.proxy.server:port'
  os.environ['https_proxy'] = 'http://your.proxy.server:port'
  ```
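Because `os.environ` only affects the current process, the proxy variables have to be in place before vLLM issues its first request. A minimal end-to-end sketch, with `facebook/opt-125m` standing in for whatever model you want to download:

```python
import os

# Configure the proxy before any download is triggered in this process.
os.environ['http_proxy'] = 'http://your.proxy.server:port'
os.environ['https_proxy'] = 'http://your.proxy.server:port'

from vllm import LLM

# The weights are fetched from Hugging Face through the proxy set above.
llm = LLM(model="facebook/opt-125m")  # placeholder model name
print(llm.generate("Hello, my name is")[0].outputs[0].text)
```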
### ModelScope

To use models from [ModelScope](https://www.modelscope.cn) instead of Hugging Face Hub, set an environment variable:
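```shell
# Tells vLLM to resolve and download models from ModelScope instead of Hugging Face Hub.
export VLLM_USE_MODELSCOPE=True
```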
