Commit 275d327

Apply suggestions from code review

Authored by Vaibhavs10, osanseviero, and pcuenca
Co-authored-by: Omar Sanseviero <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>

Parent: 78f71e8

1 file changed: docs/hub/gguf-llamacpp.md (+5 −3 lines)

@@ -3,15 +3,17 @@
 > [!TIP]
 > You can now deploy any llama.cpp compatible GGUF on Hugging Face Endpoints, read more about it [here](https://huggingface.co/docs/inference-endpoints/en/others/llamacpp_container)
 
-Llama.cpp allows you to download and run inference on a GGUF simply by providing a path to the Hugging Face repo path and the file name. llama.cpp downloads the model checkpoint and automatically caches it. The location of the cache is defined by `LLAMA_CACHE` environment variable, read more about it [here](https://github.com/ggerganov/llama.cpp/pull/7826).
+Llama.cpp allows you to download and run inference on a GGUF simply by providing a path to the Hugging Face repo path and the file name. llama.cpp downloads the model checkpoint and automatically caches it. The location of the cache is defined by `LLAMA_CACHE` environment variable; read more about it [here](https://github.com/ggerganov/llama.cpp/pull/7826).
 
-Install llama.cpp through brew (works on Mac and Linux)
+You can install llama.cpp through brew (works on Mac and Linux), or you can build it from source. There are also pre-built binaries and Docker images that you can [check in the official documentation](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage).
+
+### Option 1: Install with brew
 
 ```bash
 brew install llama.cpp
 ```
 
-You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+### Option 2: build from source
 
 Step 1: Clone llama.cpp from GitHub.
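As a companion to the paragraph edited in this commit, here is a minimal sketch of what the download-and-cache workflow looks like on the command line. The repo and file names below are placeholders (not real models), and the final command is only printed rather than executed, so nothing is downloaded; the `--hf-repo`/`--hf-file` flags and the `LLAMA_CACHE` variable come from llama.cpp's Hugging Face download support referenced in the diff.

```shell
# Where llama.cpp caches downloaded GGUFs; LLAMA_CACHE overrides the default.
# (The default path shown here is an assumption based on common XDG conventions.)
LLAMA_CACHE="${LLAMA_CACHE:-$HOME/.cache/llama.cpp}"

# Hypothetical repo and file names, for illustration only.
HF_REPO="username/My-Model-GGUF"
HF_FILE="my-model.Q4_K_M.gguf"

# Dry run: print the invocation instead of running it.
echo "LLAMA_CACHE=$LLAMA_CACHE llama-cli --hf-repo $HF_REPO --hf-file $HF_FILE -p 'Hello'"
```

On first use, the real invocation would fetch the GGUF into the cache directory; subsequent runs reuse the cached file.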