Commit b51d3f4
feat(snippet): update llama.cpp snippet to include windows (#1476)
This PR adds Windows instructions for installing and running llama.cpp to the snippet.
llama.cpp is now available on WinGet, so installing it on Windows machines is as easy as installing it with brew.
```powershell
PS C:\Users\momo-> winget install llama.cpp
PS C:\Users\momo-> # Load and run the model:
PS C:\Users\momo-> llama-cli -hf lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF:Q4_K_M
load_backend: loaded RPC backend from C:\Users\momo-\AppData\Local\Microsoft\WinGet\Packages\ggml.llamacpp_Microsoft.Winget.Source_8wekyb3d8bbwe\ggml-rpc.dll
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat
[...]
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to the AI.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
- Not using system message. To change it, set a different value via -sys PROMPT
> Hey how are you?
I'm just a language model, I don't have feelings or emotions like humans do, but I'm functioning properly and ready to help with any questions or tasks you may have! How about you? How's your day going?
```
---------
Co-authored-by: Pedro Cuenca <[email protected]>

1 parent 91003db · commit b51d3f4
1 file changed: +5 −0 lines (new lines 124–128; diff content not shown).