examples/llava/README-gemma3.md
>
> This is highly experimental and intended for demo purposes only.

## Quick start

You can use a pre-quantized model from [ggml-org](https://huggingface.co/ggml-org)'s Hugging Face account:

```bash
# build
cmake -B build
cmake --build build --target llama-gemma3-cli

# alternatively, install via Homebrew (macOS)
brew install llama.cpp

# run it
llama-gemma3-cli -hf ggml-org/gemma-3-4b-it-GGUF
llama-gemma3-cli -hf ggml-org/gemma-3-12b-it-GGUF
llama-gemma3-cli -hf ggml-org/gemma-3-27b-it-GGUF

# note: the 1B model does not support vision
```
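If you already have the GGUF files on disk, you can presumably point the CLI at them directly instead of downloading via `-hf`. This is a sketch: the `-m`/`--mmproj` flag names follow common llama.cpp CLI conventions, and the file names below are placeholders, not files shipped by this repo.

```shell
# Hypothetical local paths -- substitute your own downloaded files.
MODEL=./gemma-3-4b-it-Q4_K_M.gguf
MMPROJ=./mmproj-gemma-3-4b-it.gguf

# Guard so the sketch degrades gracefully when the binary is not built yet.
if command -v llama-gemma3-cli >/dev/null 2>&1; then
  llama-gemma3-cli -m "$MODEL" --mmproj "$MMPROJ"
else
  echo "llama-gemma3-cli not found; build it first (see above)"
fi
```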
## How to get mmproj.gguf?
```bash