
Commit 40fb654

Apply suggestions from code review
1 parent 8db2386 commit 40fb654

1 file changed (+2, -2)

docs/multimodal.md

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ llama-server -hf ggml-org/gemma-3-4b-it-GGUF --no-mmproj-offload
 
 ## Pre-quantized models
 
-These are ready-to-use models, most of them come with `Q4_K_M` quantization by default. They can be found at the Hugging Face page of the ggml-org: https://huggingface.co/collections/ggml-org/gguf-vision-models-68244e01ff1f39e5bebeeedc
+These are ready-to-use models, most of them come with `Q4_K_M` quantization by default. They can be found at the Hugging Face page of the ggml-org: https://huggingface.co/collections/ggml-org/multimodal-ggufs-68244e01ff1f39e5bebeeedc
 
 Replace `(tool_name)` with the name of the binary you want to use, for example `llama-mtmd-cli` or `llama-server`.
 
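For illustration only (not part of this commit): substituting a concrete binary for `(tool_name)`, with the Gemma model already shown in the hunk context above, the server command would look like:

```
llama-server -hf ggml-org/gemma-3-4b-it-GGUF
```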
@@ -83,7 +83,7 @@ NOTE: some models may require large context window, for example: `-c 8192`
 (tool_name) -hf ggml-org/Llama-4-Scout-17B-16E-Instruct-GGUF
 
 # Moondream2 20250414 version
-(tool_name) -hf Hahasb/moondream2-20250414-GGUF
+(tool_name) -hf ggml-org/moondream2-20250414-GGUF
 
 ```
 
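Likewise, an illustrative sketch (not part of the diff) of running the corrected Moondream2 line with `llama-mtmd-cli` substituted for `(tool_name)`; the `-c 8192` flag is optional and only shown because the hunk's NOTE mentions that some models need a larger context window:

```
llama-mtmd-cli -hf ggml-org/moondream2-20250414-GGUF -c 8192
```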