Tianyue-Zhao

In #10573, support was added for HF variants of the GLM-4-9B model.
However, the edits to the conversion script broke conversion for non-HF checkpoints.
This PR fixes the conversion script so that it works with both.

Current master branch:

~/llama.cpp# python convert_hf_to_gguf.py ../glm-4-9b
INFO:hf-to-gguf:Loading model: glm-4-9b
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
...
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
Traceback (most recent call last):
  File "/root/llama.cpp/convert_hf_to_gguf.py", line 5666, in <module>
    main()
  File "/root/llama.cpp/convert_hf_to_gguf.py", line 5660, in main
    model_instance.write()
  File "/root/llama.cpp/convert_hf_to_gguf.py", line 459, in write
    self.prepare_metadata(vocab_only=False)
  File "/root/llama.cpp/convert_hf_to_gguf.py", line 449, in prepare_metadata
    self.set_gguf_parameters()
  File "/root/llama.cpp/convert_hf_to_gguf.py", line 5077, in set_gguf_parameters
    self.gguf_writer.add_block_count(self.hparams.get("num_layers", self.hparams["num_hidden_layers"]))
KeyError: 'num_hidden_layers'

After the fix:
Confirmed that both `python convert_hf_to_gguf.py ../glm-4-9b` and `python convert_hf_to_gguf.py ../glm-4-9b-hf` work without issue.
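The `KeyError` above arises because the non-HF GLM-4 config stores the layer count under `num_layers`, while the HF variant uses `num_hidden_layers`, and the code only fell back from one to the other in a single direction. A minimal sketch of the multi-key fallback lookup that resolves this, assuming those two key names (taken from the traceback; the helper name here is hypothetical and not necessarily the exact code in the PR):

```python
# Hypothetical sketch of a hyperparameter lookup that tolerates both the
# non-HF ("num_layers") and HF ("num_hidden_layers") GLM-4 config key names.
def find_hparam(hparams: dict, keys: list, optional: bool = False):
    """Return the value of the first key in `keys` present in `hparams`."""
    for key in keys:
        if key in hparams:
            return hparams[key]
    if optional:
        return None
    raise KeyError(f"none of {keys} found in hparams")

# Non-HF checkpoint config: only "num_layers" is present.
n_layers_non_hf = find_hparam({"num_layers": 40}, ["num_layers", "num_hidden_layers"])
# HF checkpoint config: only "num_hidden_layers" is present.
n_layers_hf = find_hparam({"num_hidden_layers": 40}, ["num_layers", "num_hidden_layers"])
```

With a lookup like this, `add_block_count` can be fed the same call for both checkpoint layouts instead of `hparams.get("num_layers", hparams["num_hidden_layers"])`, which raises `KeyError` when neither lookup order matches the config at hand.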

@github-actions github-actions bot added the python python script changes label Apr 17, 2025
@Tianyue-Zhao Tianyue-Zhao requested a review from ngxson April 20, 2025 04:44