Error converting types when using LLama V2 #35

@pereconteur

Description

During conversion, it seems the tokenizer and the weights do not match. Here is my error using python3.10 with vigogne-2-7b-chat:

Loading model file models/vigogne-2-7b-chat_path/consolidated.00.pth
params = Params(n_vocab=-1, n_embd=4096, n_layer=32, n_ctx=2048, n_ff=11008, n_head=32, n_head_kv=32, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=None, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('models/vigogne-2-7b-chat_path'))
Loading vocab file 'models/tokenizer.model', type 'spm'
tok_embeddings.weight                            -> token_embd.weight                        | F16    | [32000, 4096]
layers.0.attention.wq.weight                     -> blk.0.attn_q.weight                      | F16    | [4096, 4096]
layers.0.attention.wk.weight                     -> blk.0.attn_k.weight                      | F16    | [4096, 4096]
[...]
output.weight                                    -> output.weight                            | F16    | [32000, 4096]
Writing models/vigogne-2-7b-chat_path/ggml-model-f16.gguf, format 1
Traceback (most recent call last):
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 1204, in <module>
    main()
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 1199, in main
    OutputFile.write_all(outfile, ftype, params, model, vocab, special_vocab, concurrency = args.concurrency, endianess=endianess)
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 909, in write_all
    check_vocab_size(params, vocab)
  File "/Users/pereconteur/Desktop/IA/vigogne/scripts/../../llama.cpp/convert.py", line 796, in check_vocab_size
    raise Exception(msg)
Exception: Vocab size mismatch (model has -1, but models/tokenizer.model has 32000).
Traceback (most recent call last):
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 269, in <module>
    check_requirements()
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 263, in check_requirements
    goForVigogneInstallation(model_vig, poids_B, type_vig)
  File "/Users/pereconteur/Desktop/IA/Vigogne_auto_installer/vigogne_V2_auto_install_Mac_Linux.py", line 205, in goForVigogneInstallation
    subprocess.run(commandConvert, check=True)
  File "/Users/pereconteur/miniconda3/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['python3.10', '../../llama.cpp/convert.py', 'models/vigogne-2-7b-chat_path']' returned non-zero exit status 1.
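For context, the `check_vocab_size` frame in the first traceback compares the vocab size read from the model's `params.json` (Meta's LLaMA-2 releases ship `"vocab_size": -1` there) against the size reported by `tokenizer.model`, which is 32000. A minimal sketch of that comparison (simplified and with illustrative names, not the exact `convert.py` code):

```python
# Simplified sketch of the vocab-size check that raises here.
# Names and structure are illustrative, not copied from llama.cpp's convert.py.

def check_vocab_size(model_n_vocab: int, tokenizer_n_vocab: int) -> None:
    # With LLaMA-2's stock params.json, the model side reports -1
    # ("vocab_size": -1) while tokenizer.model reports 32000 -> mismatch.
    if model_n_vocab != tokenizer_n_vocab:
        raise Exception(
            f"Vocab size mismatch (model has {model_n_vocab}, "
            f"but tokenizer.model has {tokenizer_n_vocab})."
        )

try:
    check_vocab_size(-1, 32000)
except Exception as e:
    print(e)  # Vocab size mismatch (model has -1, but tokenizer.model has 32000).
```

This reproduces the shape of the failure: the converter never gets past the vocab check, so the outer `subprocess.run(..., check=True)` in the installer script sees exit status 1 and raises `CalledProcessError`.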
