Commit 4c2abb3

committed
fix readme
1 parent f3d0220 commit 4c2abb3

File tree

1 file changed: 2 additions & 4 deletions

README.md

Lines changed: 2 additions & 4 deletions
@@ -265,11 +265,9 @@ The [Hugging Face](https://huggingface.co) platform hosts a [number of LLMs](htt
 - [Trending](https://huggingface.co/models?library=gguf&sort=trending)
 - [LLaMA](https://huggingface.co/models?sort=trending&search=llama+gguf)
 
-You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from Hugging Face by using this CLI argument: `-hf <user>/<model>[:quant]`
+You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from [Hugging Face](https://huggingface.co/) or other model hosting sites, such as [ModelScope](https://modelscope.cn/), by using this CLI argument: `-hf <user>/<model>[:quant]`.
 
-LLAMA.CPP supports an environment variable `MODEL_ENDPOINT`, use this to change the downloading endpoint:
-- By default, MODEL_ENDPOINT=https://huggingface.co/
-- To use ModelScope, change to MODEL_ENDPOINT=https://www.modelscope.cn/
+By default, the CLI downloads from Hugging Face; you can switch to other endpoints with the environment variable `MODEL_ENDPOINT`. For example, to download model checkpoints from ModelScope instead, set `MODEL_ENDPOINT=https://www.modelscope.cn/`.
 
 After downloading a model, use the CLI tools to run it locally - see below.
 
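For reference, a minimal sketch of the usage the updated README text describes, assuming the `llama-cli` binary built from this repo; `<user>/<model>[:quant]` is the placeholder from the diff above, not a real repo name:

```sh
# Download and run a GGUF model from the default endpoint (Hugging Face);
# replace <user>/<model>[:quant] with a real GGUF repo and optional quant tag.
llama-cli -hf <user>/<model>[:quant]

# Switch the download endpoint to ModelScope via MODEL_ENDPOINT,
# as described in the new README text.
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf <user>/<model>[:quant]
```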