Eval bug: Cannot convert nomic-embed-code to gguf #13242

@rudiservo

Description

Name and Version

root@ff22031b6dce:/app# ./llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
load_backend: loaded CUDA backend from /app/libggml-cuda.so
load_backend: loaded CPU backend from /app/libggml-cpu-sandybridge.so
version: 5237 (e1e8e09)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux

GGML backends

CUDA

Hardware

FX-8350, GTX1070

Models

nomic-embed-code

Problem description & steps to reproduce

Trying to convert nomic-embed-code to GGUF in order to quantize and test it, but `convert_hf_to_gguf.py` rejects the model because its `Qwen2Model` architecture is not supported (see log below).
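A workaround that has been used for similar "architecture not supported" reports is to rename the architecture in the model's `config.json` so the converter picks an already-registered class (`Qwen2ForCausalLM` instead of the bare `Qwen2Model` encoder). This is only a sketch, not an upstream-endorsed fix, and output quality for an embedding model converted this way is not guaranteed; `patch_architecture` is a hypothetical helper name.

```python
import json
from pathlib import Path

def patch_architecture(config_path: Path) -> bool:
    """Rename the unsupported Qwen2Model architecture to Qwen2ForCausalLM
    so convert_hf_to_gguf.py selects a registered converter class.
    Returns True if the file was modified."""
    config = json.loads(config_path.read_text())
    if config.get("architectures") == ["Qwen2Model"]:
        config["architectures"] = ["Qwen2ForCausalLM"]
        config_path.write_text(json.dumps(config, indent=2))
        return True
    return False

# Example (path assumed from the log output above):
# patch_architecture(Path("/models/nomic-embed-code/config.json"))
```

After patching, rerun `./convert_hf_to_gguf.py /models/nomic-embed-code/` and verify the embeddings against the original model before quantizing.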

First Bad Commit

No response

Relevant log output

root@ff22031b6dce:/app# ./convert_hf_to_gguf.py /models/nomic-embed-code/ 
INFO:hf-to-gguf:Loading model: nomic-embed-code
INFO:hf-to-gguf:Model architecture: Qwen2Model
ERROR:hf-to-gguf:Model Qwen2Model is not supported


Labels

enhancement (New feature or request)
