Name and Version
$ llama-cli --version
version: 4942 (fbdfefe7)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
No response
Command line
Problem description & steps to reproduce
The following errors are raised while trying to convert Mistral-Small-3.1-24B-Instruct-2503 with b4942:
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 1605, in set_vocab
self._set_vocab_sentencepiece()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 798, in _set_vocab_sentencepiece
tokens, scores, toktypes = self._create_vocab_sentencepiece()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 815, in _create_vocab_sentencepiece
raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: ../models/Mistral-Small-3.1-24B-Instruct-2503/tokenizer.model
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 1608, in set_vocab
self._set_vocab_llama_hf()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 893, in _set_vocab_llama_hf
vocab = gguf.LlamaHfVocab(self.dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/workspace/llama.cpp/gguf-py/gguf/vocab.py", line 389, in __init__
raise TypeError('Llama 3 must be converted with BpeVocab')
TypeError: Llama 3 must be converted with BpeVocab
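For context, the traceback reflects the converter's vocab fallback order: it first looks for a SentencePiece tokenizer.model, then tries the Llama HF vocab (rejected here with "Llama 3 must be converted with BpeVocab"), and finally falls back to a BPE/GPT-2 vocab loaded through transformers. A minimal sketch of that order (pick_vocab_source is just an illustrative helper, not the actual convert_hf_to_gguf.py code, and it skips the LlamaHfVocab step):

from pathlib import Path
from transformers import AutoTokenizer  # assumes transformers is installed

def pick_vocab_source(model_dir: str) -> str:
    # Report which vocab path a converter like convert_hf_to_gguf.py would end up on.
    d = Path(model_dir)
    if (d / "tokenizer.model").is_file():
        return "sentencepiece (tokenizer.model)"
    # No tokenizer.model: the converter falls through to the BPE path, which loads
    # tokenizer.json through transformers/tokenizers -- the step that fails below.
    AutoTokenizer.from_pretrained(d)  # raises if tokenizer.json cannot be parsed
    return "bpe (tokenizer.json via AutoTokenizer)"

print(pick_vocab_source("../models/Mistral-Small-3.1-24B-Instruct-2503"))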
The full log is below:
(facet) sam@sam-pc:~/workspace/llama.cpp$ llama-cli --version
version: 4942 (fbdfefe7)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
(facet) sam@sam-pc:~/workspace/llama.cpp$ python convert_hf_to_gguf.py ../models/Mistral-Small-3.1-24B-Instruct-2503
INFO:hf-to-gguf:Loading model: Mistral-Small-3.1-24B-Instruct-2503
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00010.safetensors'
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {5120, 131072}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.0.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.0.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.0.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.0.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.0.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.0.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.0.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.1.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.1.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.1.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.1.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.1.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.1.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.1.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.1.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.1.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.2.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.2.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.2.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.2.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.2.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00002-of-00010.safetensors'
INFO:hf-to-gguf:blk.2.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.2.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.2.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.2.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.3.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.3.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.3.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.3.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.3.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.3.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.3.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.3.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.3.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.4.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.4.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.4.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.4.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.4.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.4.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.4.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.4.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.4.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.5.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.5.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.5.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.5.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.5.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.5.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.5.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.5.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.5.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.6.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.6.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.6.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.6.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.6.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.6.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00003-of-00010.safetensors'
INFO:hf-to-gguf:blk.10.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.10.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.10.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.10.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.10.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.10.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.10.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.10.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.10.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.11.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.11.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.11.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.11.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.6.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.6.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.6.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.7.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.7.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.7.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.7.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.7.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.7.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.7.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.7.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.7.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.8.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.8.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.8.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.8.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.8.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.8.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.8.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.8.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.8.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.9.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.9.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.9.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.9.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.9.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.9.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.9.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.9.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.9.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00004-of-00010.safetensors'
INFO:hf-to-gguf:blk.11.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.11.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.11.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.11.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.11.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.12.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.12.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.12.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.12.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.12.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.12.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.12.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.12.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.12.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.13.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.13.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.13.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.13.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.13.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.13.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.13.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.13.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.13.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.14.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.14.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.14.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.14.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.14.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.14.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.14.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.14.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.14.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.15.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.15.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.15.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.15.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.15.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00005-of-00010.safetensors'
INFO:hf-to-gguf:blk.15.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.15.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.15.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.15.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.16.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.16.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.16.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.16.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.16.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.16.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.16.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.16.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.16.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.17.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.17.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.17.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.17.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.17.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.17.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.17.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.17.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.17.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.18.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.18.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.18.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.18.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.18.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.18.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.18.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.18.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.18.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.19.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.19.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.19.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.19.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.19.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.19.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00006-of-00010.safetensors'
INFO:hf-to-gguf:blk.19.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.19.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.19.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.20.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.20.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.20.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.20.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.20.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.20.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.20.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.20.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.20.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.21.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.21.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.21.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.21.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.21.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.21.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.21.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.21.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.21.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.22.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.22.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.22.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.22.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.22.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.22.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.22.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.22.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.22.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.23.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.23.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.23.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.23.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.23.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.23.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.23.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.23.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.23.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.24.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.24.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.24.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.24.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00007-of-00010.safetensors'
INFO:hf-to-gguf:blk.24.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.24.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.24.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.24.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.24.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.25.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.25.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.25.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.25.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.25.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.25.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.25.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.25.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.25.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.26.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.26.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.26.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.26.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.26.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.26.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.26.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.26.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.26.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.27.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.27.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.27.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.27.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.27.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.27.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.27.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.27.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.27.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.28.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.28.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.28.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.28.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.28.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00008-of-00010.safetensors'
INFO:hf-to-gguf:blk.28.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.28.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.28.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.28.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.29.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.29.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.29.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.29.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.29.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.29.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.29.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.29.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.29.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.30.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.30.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.30.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.30.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.30.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.30.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.30.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.30.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.30.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.31.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.31.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.31.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.31.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.31.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.31.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.31.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.31.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.31.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.32.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.32.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.32.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.32.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.32.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.32.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00009-of-00010.safetensors'
INFO:hf-to-gguf:blk.32.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.32.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.32.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.33.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.33.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.33.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.33.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.33.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.33.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.33.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.33.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.33.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.34.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.34.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.34.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.34.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.34.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.34.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.34.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.34.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.34.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.35.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.35.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.35.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.35.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.35.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.35.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.35.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.35.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.35.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.36.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.36.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.36.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.36.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.36.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.36.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.36.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.36.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.36.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.37.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.37.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.37.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.37.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:gguf: loading model part 'model-00010-of-00010.safetensors'
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {5120, 131072}
INFO:hf-to-gguf:blk.37.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.37.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.37.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.37.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.37.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.38.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.38.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.38.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.38.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.38.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.38.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.38.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.38.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.38.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.39.attn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.39.ffn_down.weight, torch.bfloat16 --> F16, shape = {32768, 5120}
INFO:hf-to-gguf:blk.39.ffn_gate.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.39.ffn_up.weight, torch.bfloat16 --> F16, shape = {5120, 32768}
INFO:hf-to-gguf:blk.39.ffn_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.39.attn_k.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:blk.39.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 5120}
INFO:hf-to-gguf:blk.39.attn_q.weight, torch.bfloat16 --> F16, shape = {5120, 4096}
INFO:hf-to-gguf:blk.39.attn_v.weight, torch.bfloat16 --> F16, shape = {5120, 1024}
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {5120}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 131072
INFO:hf-to-gguf:gguf: embedding length = 5120
INFO:hf-to-gguf:gguf: feed forward length = 32768
INFO:hf-to-gguf:gguf: head count = 32
INFO:hf-to-gguf:gguf: key-value head count = 8
INFO:hf-to-gguf:gguf: rope theta = 1000000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 1605, in set_vocab
self._set_vocab_sentencepiece()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 798, in _set_vocab_sentencepiece
tokens, scores, toktypes = self._create_vocab_sentencepiece()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 815, in _create_vocab_sentencepiece
raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: ../models/Mistral-Small-3.1-24B-Instruct-2503/tokenizer.model
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 1608, in set_vocab
self._set_vocab_llama_hf()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 893, in _set_vocab_llama_hf
vocab = gguf.LlamaHfVocab(self.dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/workspace/llama.cpp/gguf-py/gguf/vocab.py", line 389, in __init__
raise TypeError('Llama 3 must be converted with BpeVocab')
TypeError: Llama 3 must be converted with BpeVocab
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 5378, in <module>
main()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 5372, in main
model_instance.write()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 441, in write
self.prepare_metadata(vocab_only=False)
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 434, in prepare_metadata
self.set_vocab()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 1611, in set_vocab
self._set_vocab_gpt2()
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 734, in _set_vocab_gpt2
tokens, toktypes, tokpre = self.get_vocab_base()
^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/workspace/llama.cpp/convert_hf_to_gguf.py", line 523, in get_vocab_base
tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/miniconda3/envs/facet/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 897, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/miniconda3/envs/facet/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2271, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/miniconda3/envs/facet/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2505, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sam/miniconda3/envs/facet/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 157, in __init__
super().__init__(
File "/home/sam/miniconda3/envs/facet/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 115, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: data did not match any variant of untagged enum ModelWrapper at line 1217944 column 3
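The final "data did not match any variant of untagged enum ModelWrapper" error is raised by the Rust tokenizers library while parsing tokenizer.json, and it usually means the installed tokenizers version is too old to understand the format used by this model's tokenizer.json. A quick diagnostic (assuming the tokenizers package is installed) is to load the file directly and print the library version:

# Quick check: can the installed tokenizers library parse this model's tokenizer.json?
import tokenizers
from tokenizers import Tokenizer

print("tokenizers version:", tokenizers.__version__)
try:
    Tokenizer.from_file("../models/Mistral-Small-3.1-24B-Instruct-2503/tokenizer.json")
    print("tokenizer.json parsed OK")
except Exception as e:
    print("tokenizer.json failed to parse:", e)  # same ModelWrapper error as in the log above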
First Bad Commit
No response