README.md: 5 additions & 1 deletion

@@ -16,6 +16,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 ## Hot topics
 
+- A new binary `llama-mtmd-cli` is introduced to replace `llava-cli`, `minicpmv-cli` and `gemma3-cli` (https://github.com/ggml-org/llama.cpp/pull/13012); `libllava` will be deprecated
 - **How to use [MTLResidencySet](https://developer.apple.com/documentation/metal/mtlresidencyset?language=objc) to keep the GPU memory active?** https://github.com/ggml-org/llama.cpp/pull/11427
 - **VS Code extension for FIM completions:** https://github.com/ggml-org/llama.vscode
 - Universal [tool call support](./docs/function-calling.md) in `llama-server` https://github.com/ggml-org/llama.cpp/pull/9639

@@ -97,6 +98,7 @@ Instructions for adding support for new models: [HOWTO-add-model.md](docs/develo
-You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from Hugging Face by using this CLI argument: `-hf <user>/<model>[:quant]`
+You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from [Hugging Face](https://huggingface.co/) or other model hosting sites, such as [ModelScope](https://modelscope.cn/), by using this CLI argument: `-hf <user>/<model>[:quant]`.
+
+By default, the CLI downloads from Hugging Face; you can switch to other options with the environment variable `MODEL_ENDPOINT`. For example, to download model checkpoints from ModelScope or other model-sharing communities, set `MODEL_ENDPOINT=https://www.modelscope.cn/`.
 
 After downloading a model, use the CLI tools to run it locally - see below.
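As an illustration of the download behavior this hunk describes, here is a minimal usage sketch assuming the `llama-cli` binary; `<user>/<model>[:quant]` is the placeholder from the README itself, not a real repository:

```sh
# Download (if needed) and run a llama.cpp-compatible GGUF model
# directly from Hugging Face, the default endpoint:
llama-cli -hf <user>/<model>[:quant]

# Switch the download endpoint to ModelScope via MODEL_ENDPOINT:
MODEL_ENDPOINT=https://www.modelscope.cn/ llama-cli -hf <user>/<model>[:quant]
```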
SECURITY.md: 2 additions & 1 deletion

@@ -40,7 +40,8 @@ To protect sensitive data from potential leaks or unauthorized access, it is cru
 ### Untrusted environments or networks
 
 If you can't run your models in a secure and isolated environment or if it must be exposed to an untrusted network, make sure to take the following security precautions:
 
-* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value
+* Do not use the RPC backend, [rpc-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc) and [llama-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/server) functionality (see https://github.com/ggml-org/llama.cpp/pull/13061).
+* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value.
 * Encrypt your data if sending it over the network.
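To make the hash-checking precaution concrete, a minimal sketch using standard coreutils; the file name and digest here are placeholders, not values from this PR:

```sh
# Compute the SHA-256 digest of the downloaded weights and compare it
# against the value published by the model provider:
sha256sum model.gguf

# Or verify non-interactively (note the two spaces between digest and name):
echo "<known-good-sha256>  model.gguf" | sha256sum --check -
```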