Name and Version
version: 5215 (5f5e39e)
built with MSVC 19.43.34808.0 for x64
I've tested the CPU, Vulkan, and SYCL backends; llama-bench either crashes outright or prints only the following two lines and then exits (an exit-code capture sketch follows the table header below):
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
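A minimal PowerShell sketch for distinguishing a clean exit from a crash in the "prints two lines and exits" case; the model path is a placeholder and the $LASTEXITCODE check is plain PowerShell, not anything llama-bench-specific:

.\llama-bench.exe -m C:\LLM\model.gguf
# Print the process exit code to see whether it ended normally or aborted
Write-Host "llama-bench exited with code $LASTEXITCODE"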
Operating systems
Windows
GGML backends
CPU
Hardware
Ryzen 7900X + Intel A770
Models
I tried several models that are currently working with llama-server.
Problem description & steps to reproduce
.\llama-bench.exe -m model
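For reference, a slightly fuller invocation sketch, assuming the usual llama-bench flags (-o for output format, -v for verbose logging) are available in this build; the model path is a placeholder:

.\llama-bench.exe -m C:\LLM\model.gguf -o md -v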
First Bad Commit
No response
Relevant log output
PS C:\Users\ANON\repos\AI_Grotto\llama.cpp\Windows\CPU\AVX512> .\llama-bench.exe -m C:\LLM\google-gemma-3-12b-it-qat-q4_0-gguf-small\gemma-3-12b-it-q4_0_s.gguf
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
PS C:\Users\ANON\repos\AI_Grotto\llama.cpp\Windows\CPU\AVX512>