Added updated check for multi modal projector and vision projector in… #15515
Modifying convert_hf_to_gguf.py
Fixes a conversion failure for Gemma 3 12B and 27B models when converting from safetensors to GGUF: some checkpoints name the projector tensors 'model.multi_modal_projector.*' instead of 'multi_modal_projector.*', which the tensor-name mapping does not recognize.
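The failure mode and fix can be sketched as a small prefix-normalization step before tensor-name mapping. This is an illustrative sketch, not the actual patch in the PR; the function name `strip_model_prefix` is hypothetical.

```python
# Hedged sketch: normalize a tensor name so checkpoints that nest the
# multi-modal projector under a leading 'model.' map like the rest.
# strip_model_prefix is an illustrative helper, not code from the PR.
def strip_model_prefix(name: str) -> str:
    """Drop the leading 'model.' from multi-modal projector tensor names."""
    prefix = "model.multi_modal_projector."
    if name.startswith(prefix):
        return name[len("model."):]
    return name

# The tensor from the traceback below would normalize to a mappable name:
print(strip_model_prefix("model.multi_modal_projector.mm_input_projection_weight"))
# -> multi_modal_projector.mm_input_projection_weight
```

Ordinary tensor names (e.g. `blk.4.attn_v.weight`) pass through unchanged, so the check only affects the projector tensors that currently raise `ValueError`.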
INFO:hf-to-gguf:blk.3.attn_v.weight, torch.bfloat16 --> F16, shape = {3840, 2048}
INFO:hf-to-gguf:blk.4.ffn_gate.weight, torch.bfloat16 --> F16, shape = {3840, 15360}
INFO:hf-to-gguf:blk.4.ffn_up.weight, torch.bfloat16 --> F16, shape = {3840, 15360}
INFO:hf-to-gguf:blk.4.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {256}
INFO:hf-to-gguf:blk.4.attn_k.weight, torch.bfloat16 --> F16, shape = {3840, 2048}
INFO:hf-to-gguf:blk.4.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 3840}
INFO:hf-to-gguf:blk.4.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {256}
INFO:hf-to-gguf:blk.4.attn_q.weight, torch.bfloat16 --> F16, shape = {3840, 4096}
INFO:hf-to-gguf:blk.4.attn_v.weight, torch.bfloat16 --> F16, shape = {3840, 2048}
Traceback (most recent call last):
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 8817, in <module>
main()
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 8811, in main
model_instance.write()
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 431, in write
self.prepare_tensors()
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 298, in prepare_tensors
for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 5054, in modify_tensors
return [(self.map_tensor_name(name), data_torch)]
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/llama.cpp/convert_hf_to_gguf.py", line 257, in map_tensor_name
raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.multi_modal_projector.mm_input_projection_weight'