
Commit 952a47f

mtmd : support MiniCPM-V 4.0 (#14983)
* support minicpm-v 4
* add md
* support MiniCPM-o 4.0
* add default location
* temp rm MiniCPM-o 4.0
* fix code
* fix "minicpmv_projector" default path
1 parent 36e5fe7 commit 952a47f

File tree

8 files changed: +145 -15 lines


docs/multimodal/minicpmo2.6.md

Lines changed: 2 additions & 2 deletions
@@ -29,8 +29,8 @@ cmake --build build --config Release
 Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) by us)
 
 ```bash
-python ./tools/mtmd/minicpmv-surgery.py -m ../MiniCPM-o-2_6
-python ./tools/mtmd/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-o-2_6 --minicpmv-projector ../MiniCPM-o-2_6/minicpmv.projector --output-dir ../MiniCPM-o-2_6/ --image-mean 0.5 0.5 0.5 --image-std 0.5 0.5 0.5 --minicpmv_version 4
+python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-o-2_6
+python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-o-2_6 --minicpmv-projector ../MiniCPM-o-2_6/minicpmv.projector --output-dir ../MiniCPM-o-2_6/ --minicpmv_version 4
 python ./convert_hf_to_gguf.py ../MiniCPM-o-2_6/model
 
 # quantize int4 version

docs/multimodal/minicpmo4.0.md

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+## MiniCPM-o 4
+
+### Prepare models and code
+
+Download [MiniCPM-o-4](https://huggingface.co/openbmb/MiniCPM-o-4) PyTorch model from huggingface to "MiniCPM-o-4" folder.
+
+
+### Build llama.cpp
+Readme modification time: 20250206
+
+If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)
+
+Clone llama.cpp:
+```bash
+git clone https://github.com/ggerganov/llama.cpp
+cd llama.cpp
+```
+
+Build llama.cpp using `CMake`:
+```bash
+cmake -B build
+cmake --build build --config Release
+```
+
+
+### Usage of MiniCPM-o 4
+
+Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-4-gguf) by us)
+
+```bash
+python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-o-4
+python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-o-4 --minicpmv-projector ../MiniCPM-o-4/minicpmv.projector --output-dir ../MiniCPM-o-4/ --minicpmv_version 6
+python ./convert_hf_to_gguf.py ../MiniCPM-o-4/model
+
+# quantize int4 version
+./build/bin/llama-quantize ../MiniCPM-o-4/model/ggml-model-f16.gguf ../MiniCPM-o-4/model/ggml-model-Q4_K_M.gguf Q4_K_M
+```
+
+
+Inference on Linux or Mac
+```bash
+# run in single-turn mode
+./build/bin/llama-mtmd-cli -m ../MiniCPM-o-4/model/ggml-model-f16.gguf --mmproj ../MiniCPM-o-4/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -p "What is in the image?"
+
+# run in conversation mode
+./build/bin/llama-mtmd-cli -m ../MiniCPM-o-4/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-o-4/mmproj-model-f16.gguf
+```

docs/multimodal/minicpmv2.5.md

Lines changed: 2 additions & 2 deletions
@@ -28,8 +28,8 @@ cmake --build build --config Release
 Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) by us)
 
 ```bash
-python ./tools/mtmd/minicpmv-surgery.py -m ../MiniCPM-Llama3-V-2_5
-python ./tools/mtmd/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-Llama3-V-2_5 --minicpmv-projector ../MiniCPM-Llama3-V-2_5/minicpmv.projector --output-dir ../MiniCPM-Llama3-V-2_5/ --image-mean 0.5 0.5 0.5 --image-std 0.5 0.5 0.5 --minicpmv_version 2
+python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-Llama3-V-2_5
+python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-Llama3-V-2_5 --minicpmv-projector ../MiniCPM-Llama3-V-2_5/minicpmv.projector --output-dir ../MiniCPM-Llama3-V-2_5/ --minicpmv_version 2
 python ./convert_hf_to_gguf.py ../MiniCPM-Llama3-V-2_5/model
 
 # quantize int4 version

docs/multimodal/minicpmv2.6.md

Lines changed: 2 additions & 2 deletions
@@ -28,8 +28,8 @@ cmake --build build --config Release
 Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) by us)
 
 ```bash
-python ./tools/mtmd/minicpmv-surgery.py -m ../MiniCPM-V-2_6
-python ./tools/mtmd/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-V-2_6 --minicpmv-projector ../MiniCPM-V-2_6/minicpmv.projector --output-dir ../MiniCPM-V-2_6/ --image-mean 0.5 0.5 0.5 --image-std 0.5 0.5 0.5 --minicpmv_version 3
+python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-V-2_6
+python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-V-2_6 --minicpmv-projector ../MiniCPM-V-2_6/minicpmv.projector --output-dir ../MiniCPM-V-2_6/ --minicpmv_version 3
 python ./convert_hf_to_gguf.py ../MiniCPM-V-2_6/model
 
 # quantize int4 version

docs/multimodal/minicpmv4.0.md

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+## MiniCPM-V 4
+
+### Prepare models and code
+
+Download [MiniCPM-V-4](https://huggingface.co/openbmb/MiniCPM-V-4) PyTorch model from huggingface to "MiniCPM-V-4" folder.
+
+
+### Build llama.cpp
+Readme modification time: 20250206
+
+If there are differences in usage, please refer to the official build [documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)
+
+Clone llama.cpp:
+```bash
+git clone https://github.com/ggerganov/llama.cpp
+cd llama.cpp
+```
+
+Build llama.cpp using `CMake`:
+```bash
+cmake -B build
+cmake --build build --config Release
+```
+
+
+### Usage of MiniCPM-V 4
+
+Convert PyTorch model to gguf files (You can also download the converted [gguf](https://huggingface.co/openbmb/MiniCPM-V-4-gguf) by us)
+
+```bash
+python ./tools/mtmd/legacy-models/minicpmv-surgery.py -m ../MiniCPM-V-4
+python ./tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-V-4 --minicpmv-projector ../MiniCPM-V-4/minicpmv.projector --output-dir ../MiniCPM-V-4/ --minicpmv_version 5
+python ./convert_hf_to_gguf.py ../MiniCPM-V-4/model
+
+# quantize int4 version
+./build/bin/llama-quantize ../MiniCPM-V-4/model/ggml-model-f16.gguf ../MiniCPM-V-4/model/ggml-model-Q4_K_M.gguf Q4_K_M
+```
+
+
+Inference on Linux or Mac
+```bash
+# run in single-turn mode
+./build/bin/llama-mtmd-cli -m ../MiniCPM-V-4/model/ggml-model-f16.gguf --mmproj ../MiniCPM-V-4/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -p "What is in the image?"
+
+# run in conversation mode
+./build/bin/llama-mtmd-cli -m ../MiniCPM-V-4/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-V-4/mmproj-model-f16.gguf
+```

tools/mtmd/clip.cpp

Lines changed: 18 additions & 0 deletions
@@ -868,10 +868,16 @@ struct clip_graph {
         int n_head = n_embd/d_head;
         int num_query = 96;
         if (ctx->model.hparams.minicpmv_version == 2) {
+            // MiniCPM-V 2.5
             num_query = 96;
         } else if (ctx->model.hparams.minicpmv_version == 3) {
+            // MiniCPM-V 2.6
             num_query = 64;
         } else if (ctx->model.hparams.minicpmv_version == 4) {
+            // MiniCPM-o 2.6
+            num_query = 64;
+        } else if (ctx->model.hparams.minicpmv_version == 5) {
+            // MiniCPM-V 4.0
             num_query = 64;
         }
@@ -3551,10 +3557,16 @@ int clip_n_output_tokens(const struct clip_ctx * ctx, struct clip_image_f32 * im
         case PROJECTOR_TYPE_MINICPMV:
             {
                 if (params.minicpmv_version == 2) {
+                    // MiniCPM-V 2.5
                     n_patches_sq = 96;
                 } else if (params.minicpmv_version == 3) {
+                    // MiniCPM-V 2.6
                     n_patches_sq = 64;
                 } else if (params.minicpmv_version == 4) {
+                    // MiniCPM-o 2.6
+                    n_patches_sq = 64;
+                } else if (params.minicpmv_version == 5) {
+                    // MiniCPM-V 4.0
                     n_patches_sq = 64;
                 } else {
                     GGML_ABORT("Unknown minicpmv version");
@@ -4103,11 +4115,17 @@ int clip_n_mmproj_embd(const struct clip_ctx * ctx) {
             return ctx->model.mm_3_b->ne[0];
         case PROJECTOR_TYPE_MINICPMV:
             if (hparams.minicpmv_version == 2) {
+                // MiniCPM-V 2.5
                 return 4096;
             } else if (hparams.minicpmv_version == 3) {
+                // MiniCPM-V 2.6
                 return 3584;
            } else if (hparams.minicpmv_version == 4) {
+                // MiniCPM-o 2.6
                 return 3584;
+            } else if (hparams.minicpmv_version == 5) {
+                // MiniCPM-V 4.0
+                return 2560;
             }
             GGML_ABORT("Unknown minicpmv version");
         case PROJECTOR_TYPE_GLM_EDGE:
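
All three clip.cpp hunks extend the same per-version dispatch. For reference, the version-to-constant mapping they encode after this commit can be restated as a small table; the sketch below is illustrative only (the shipped code is the C++ if/else chains above, and the `MINICPMV_HPARAMS` name is made up here), with values taken directly from the hunks:

```python
# Illustrative restatement of the clip.cpp dispatch after this commit (not shipped code).
# num_query: resampler query/output-token count; n_mmproj_embd: value returned by
# clip_n_mmproj_embd() for the projector embedding size.
MINICPMV_HPARAMS = {
    2: {"model": "MiniCPM-V 2.5", "num_query": 96, "n_mmproj_embd": 4096},
    3: {"model": "MiniCPM-V 2.6", "num_query": 64, "n_mmproj_embd": 3584},
    4: {"model": "MiniCPM-o 2.6", "num_query": 64, "n_mmproj_embd": 3584},
    5: {"model": "MiniCPM-V 4.0", "num_query": 64, "n_mmproj_embd": 2560},  # new in this commit
}
```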

tools/mtmd/legacy-models/minicpmv-convert-image-encoder-to-gguf.py

Lines changed: 26 additions & 8 deletions
@@ -497,11 +497,11 @@ def bytes_to_unicode():
 ap.add_argument("-o", "--output-dir", help="Directory to save GGUF files. Default is the original model directory", default=None)
 # Example --image_mean 0.48145466 0.4578275 0.40821073 --image_std 0.26862954 0.26130258 0.27577711
 # Example --image_mean 0.5 0.5 0.5 --image_std 0.5 0.5 0.5
-default_image_mean = [0.48145466, 0.4578275, 0.40821073]
-default_image_std = [0.26862954, 0.26130258, 0.27577711]
+default_image_mean = [0.5, 0.5, 0.5]
+default_image_std = [0.5, 0.5, 0.5]
 ap.add_argument('--image-mean', type=float, nargs='+', help='Mean of the images for normalization (overrides processor) ', default=None)
 ap.add_argument('--image-std', type=float, nargs='+', help='Standard deviation of the images for normalization (overrides processor)', default=None)
-ap.add_argument('--minicpmv_version', type=int, help='minicpmv_version: MiniCPM-V-2 use 1; MiniCPM-V-2.5 use 2; MiniCPM-V-2.6 use 3; MiniCPM-o-2.6 use 4', default=2)
+ap.add_argument('--minicpmv_version', type=int, help='minicpmv_version: MiniCPM-V-2 use 1; MiniCPM-V-2.5 use 2; MiniCPM-V-2.6 use 3; MiniCPM-o-2.6 use 4; MiniCPM-V 4.0 use 5; MiniCPM-o-4.0 use 6', default=2)
 
 # with proper
 args = ap.parse_args()
@@ -517,6 +517,17 @@ def bytes_to_unicode():
 # output in the same directory as the model if output_dir is None
 dir_model = args.model_dir
 
+# If minicpmv_projector is not specified but the default path exists, use the default path
+if args.minicpmv_projector is None:
+    default_projector_path = os.path.join(dir_model, "minicpmv.projector")
+    if os.path.isfile(default_projector_path):
+        args.minicpmv_projector = default_projector_path
+        print(f"Found default projector file: {default_projector_path}")
+
+# If output_dir is not specified, use model_dir as the default value
+if args.output_dir is None:
+    args.output_dir = dir_model
+
 if args.clip_model_is_vision or not os.path.exists(dir_model + "/vocab.json") or args.clip_model_is_openclip:
     vocab = None
     tokens = None
@@ -546,18 +557,21 @@ def bytes_to_unicode():
 minicpmv_version = args.minicpmv_version
 emb_dim = 4096
 block_count = 26
-if minicpmv_version == 1:
+if minicpmv_version == 1: # MiniCPM-V 2.0
     emb_dim = 2304
     block_count = 26
-elif minicpmv_version == 2:
+elif minicpmv_version == 2: # MiniCPM-V 2.5
     emb_dim = 4096
     block_count = 27
-elif minicpmv_version == 3:
+elif minicpmv_version == 3: # MiniCPM-V 2.6
     emb_dim = 3584
     block_count = 27
-elif minicpmv_version == 4:
+elif minicpmv_version == 4: # MiniCPM-o 2.6
     emb_dim = 3584
     block_count = 27
+elif minicpmv_version == 5: # MiniCPM-V 4.0
+    emb_dim = 2560
+    block_count = 27
 
 default_vision_config = {
     "hidden_size": 1152,
@@ -577,6 +591,10 @@ def bytes_to_unicode():
 elif minicpmv_version == 4:
     vision_config = SiglipVisionConfig(**default_vision_config)
     model = SiglipVisionTransformer(vision_config)
+elif minicpmv_version == 5:
+    default_vision_config["model_type"] = "siglip_vision_model"
+    vision_config = SiglipVisionConfig(**default_vision_config)
+    model = SiglipVisionTransformer(vision_config)
 
 processor = None
 # if model.attn_pool is not None:
@@ -603,7 +621,7 @@ def bytes_to_unicode():
 else:
     fname_middle = ""
 
-output_dir = args.output_dir if args.output_dir is not None else dir_model
+output_dir = args.output_dir
 os.makedirs(output_dir, exist_ok=True)
 output_prefix = os.path.basename(output_dir).replace("ggml_", "")
 fname_out = os.path.join(output_dir, f"{fname_middle}model-{ftype_str[ftype]}.gguf")
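
The practical effect of the new default-path logic is that `--minicpmv-projector` and `--output-dir` can now be omitted whenever the model directory already contains the `minicpmv.projector` file produced by `minicpmv-surgery.py`. A minimal sketch of the resolution order, mirroring the hunks above (the `resolve_paths` helper is hypothetical, for illustration only; the script itself mutates `args` in place):

```python
import os
from typing import Optional, Tuple

def resolve_paths(model_dir: str,
                  projector: Optional[str],
                  output_dir: Optional[str]) -> Tuple[Optional[str], str]:
    # Fall back to <model_dir>/minicpmv.projector when no projector path is given.
    if projector is None:
        candidate = os.path.join(model_dir, "minicpmv.projector")
        if os.path.isfile(candidate):
            projector = candidate
    # Fall back to the model directory when no output directory is given.
    if output_dir is None:
        output_dir = model_dir
    return projector, output_dir

# e.g. after minicpmv-surgery.py has been run on ../MiniCPM-V-4:
# resolve_paths("../MiniCPM-V-4", None, None)
# -> ("../MiniCPM-V-4/minicpmv.projector", "../MiniCPM-V-4")
```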

tools/mtmd/mtmd.cpp

Lines changed: 1 addition & 1 deletion
@@ -207,7 +207,7 @@ struct mtmd_context {
         tok_row_end_trail = false; // no trailing end-of-row token
         ov_img_first = true;
 
-    } else if (minicpmv_version == 3 || minicpmv_version == 4) {
+    } else if (minicpmv_version == 3 || minicpmv_version == 4 || minicpmv_version == 5) {
         // minicpmv 2.6 format:
         // <image> (overview) </image><slice> (slice) </slice><slice> (slice) </slice>\n ...
         slice_tmpl = MTMD_SLICE_TMPL_MINICPMV_2_6;
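
This is the only prompt-formatting change needed: version 5 simply joins versions 3 and 4 on the MiniCPM-V 2.6 slice template. Restated as a predicate (illustrative only, not the shipped C++; the function and constant names below are made up for this note):

```python
# Versions that share the MiniCPM-V 2.6 slice template after this commit:
# <image> (overview) </image><slice> (slice) </slice><slice> (slice) </slice>\n ...
MINICPMV_26_SLICE_VERSIONS = {3, 4, 5}

def uses_minicpmv_26_slicing(minicpmv_version: int) -> bool:
    return minicpmv_version in MINICPMV_26_SLICE_VERSIONS
```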
