README.md: 3 additions & 3 deletions
@@ -267,9 +267,9 @@ The [Hugging Face](https://huggingface.co) platform hosts a [number of LLMs](htt
 You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from Hugging Face by using this CLI argument: `-hf <user>/<model>[:quant]`
-LLAMA.CPP has supported a environment variable `HF_ENDPOINT`, you can set this to change the downloading url:
-- By default, HF_ENDPOINT=https://huggingface.co/
-- To use ModelScope, you can change to HF_ENDPOINT=https://www.modelscope.cn/
+llama.cpp supports the environment variable `MODEL_ENDPOINT`; set it to change the download endpoint:
+- By default, MODEL_ENDPOINT=https://huggingface.co/
+- To use ModelScope, change to MODEL_ENDPOINT=https://www.modelscope.cn/
 After downloading a model, use the CLI tools to run it locally - see below.
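The renamed variable above can be set in the shell before invoking any of the CLI tools. A minimal sketch of the workflow, assuming the `llama-cli` binary name and treating the repo reference as a placeholder:

```shell
# Override the model download endpoint; the default is https://huggingface.co/.
export MODEL_ENDPOINT=https://www.modelscope.cn/
echo "model downloads will resolve against: $MODEL_ENDPOINT"

# Then fetch and run a model by repo reference (placeholder, not a real repo):
#   llama-cli -hf <user>/<model>[:quant]
```

Unsetting `MODEL_ENDPOINT` (or leaving it empty) falls back to the Hugging Face default.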