From 7061e6667ddaf375e4f11b4dde0576093fc579e2 Mon Sep 17 00:00:00 2001
From: Vaibhavs10
Date: Thu, 22 May 2025 16:09:08 +0200
Subject: [PATCH] add winget install on llama.cpp

---
 docs/hub/gguf-llamacpp.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/hub/gguf-llamacpp.md b/docs/hub/gguf-llamacpp.md
index 1c041d8ce..4555a2e27 100644
--- a/docs/hub/gguf-llamacpp.md
+++ b/docs/hub/gguf-llamacpp.md
@@ -7,12 +7,18 @@ Llama.cpp allows you to download and run inference on a GGUF simply by providing
 
 You can install llama.cpp through brew (works on Mac and Linux), or you can build it from source. There are also pre-built binaries and Docker images that you can [check in the official documentation](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage).
 
-### Option 1: Install with brew
+### Option 1: Install with brew / winget
 
 ```bash
 brew install llama.cpp
 ```
+Or, on Windows, via winget:
+
+```bash
+winget install llama.cpp
+```
+
 ### Option 2: build from source
 
 Step 1: Clone llama.cpp from GitHub.
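
After either install path, a quick sanity check confirms the binaries landed on your PATH and can pull a GGUF straight from the Hugging Face Hub. A minimal sketch: the model repo below is only an illustrative placeholder, and the `-hf` shorthand for `--hf-repo` assumes a reasonably recent llama.cpp build.

```bash
# Confirm the install put llama-cli on PATH and check the build info
llama-cli --version

# Download a GGUF from the Hub and run it; the repo name is a placeholder --
# any GGUF repo works (assumes a recent build with -hf support)
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF -p "Hello"
```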