README.md (1 addition, 0 deletions)
@@ -16,6 +16,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ## Hot topics
 
+- A new binary `llama-mtmd-cli` is introduced to replace `llava-cli`, `minicpmv-cli` and `gemma3-cli` https://github.com/ggml-org/llama.cpp/pull/13012, `libllava` will be deprecated
 - **How to use [MTLResidencySet](https://developer.apple.com/documentation/metal/mtlresidencyset?language=objc) to keep the GPU memory active?** https://github.com/ggml-org/llama.cpp/pull/11427
 - **VS Code extension for FIM completions:** https://github.com/ggml-org/llama.vscode
 - Universal [tool call support](./docs/function-calling.md) in `llama-server` https://github.com/ggml-org/llama.cpp/pull/9639
SECURITY.md (2 additions, 1 deletion)
@@ -40,7 +40,8 @@ To protect sensitive data from potential leaks or unauthorized access, it is cru
 ### Untrusted environments or networks
 
 If you can't run your models in a secure and isolated environment or if it must be exposed to an untrusted network, make sure to take the following security precautions:
-* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value
+* Do not use the RPC backend, [rpc-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc) and [llama-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/server) functionality (see https://github.com/ggml-org/llama.cpp/pull/13061).
+* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value.
 * Encrypt your data if sending it over the network.
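The "confirm the hash" precaution above can be automated before a downloaded file is ever loaded. Below is a minimal Python sketch, not part of llama.cpp, that computes a file's SHA-256 and compares it against a known-good value; the model path and expected hash are placeholders supplied by the user.

```python
# Minimal sketch (not part of llama.cpp): verify that a downloaded artifact,
# e.g. pre-trained model weights, matches a known-good SHA-256 value.
import hashlib
import sys

def sha256_hex(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so large model files do not need to fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # usage: python verify_hash.py <model.gguf> <expected-sha256>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_hex(path)
    if actual != expected:
        sys.exit(f"hash mismatch for {path}: got {actual}")
    print(f"hash OK: {actual}")
```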
"Hugging Face model repository; quant is optional, case-insensitive, default to Q4_K_M, or falls back to the first file in the repo if Q4_K_M doesn't exist.\n"
+"mmproj is also downloaded automatically if available. to disable, add --no-mmproj\n"
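The help text above describes how the quantization tag is resolved when a model is pulled from a Hugging Face repository. The following Python sketch is a hypothetical illustration of that selection rule, not the actual llama.cpp implementation: the tag is optional and case-insensitive, `Q4_K_M` is the default, and if no file matches the tag the first file in the repo is used. The function name and the `repo_files` parameter are assumptions made for the example.

```python
# Hypothetical illustration of the quant-selection rule described in the help
# text above; not the actual llama.cpp code. `repo_files` stands in for the
# list of GGUF file names published in the Hugging Face repository.
def pick_gguf_file(repo_files: list[str], quant: str | None = None) -> str:
    tag = (quant or "Q4_K_M").lower()   # default tag, matched case-insensitively
    for name in repo_files:
        if tag in name.lower():
            return name                 # first file whose name contains the tag
    return repo_files[0]                # fall back to the first file in the repo

# No explicit quant and no Q4_K_M file: falls back to the first entry.
print(pick_gguf_file(["model-Q8_0.gguf", "model-Q2_K.gguf"]))    # model-Q8_0.gguf
print(pick_gguf_file(["model-Q8_0.gguf", "model-Q4_K_M.gguf"]))  # model-Q4_K_M.gguf
```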