
Eval bug: Running LiquidAI/LFM2-2.6B-GGUF in parallel triggers GGML_ASSERT regarding ubatch.seq_id in KV Cache #16278

@Blackskyliner

Description

Name and Version

% llama-cli --version
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.005 sec
ggml_metal_device_init: GPU name: Apple M4 Max
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal4 (5002)
ggml_metal_device_init: simdgroup reduction = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory = true
ggml_metal_device_init: has bfloat = true
ggml_metal_device_init: use residency sets = true
ggml_metal_device_init: use shared buffers = true
ggml_metal_device_init: recommendedMaxWorkingSetSize = 115448.73 MB
version: 6550 (3ecb2f6)
built with Apple clang version 17.0.0 (clang-1700.0.13.3) for arm64-apple-darwin24.4.0

Operating systems

Mac

GGML backends

Metal

Hardware

MacStudio M4 Max CPU:16C/GPU:40C/RAM:128GB

Models

LiquidAI/LFM2-2.6B-GGUF

Problem description & steps to reproduce

  • Run llama-server -hf LiquidAI/LFM2-2.6B-GGUF -c 0 -np 10 --flash-attn on --mlock --no-mmap
  • Try to benchmark with LLMApiBenchmark: ./llmapibenchmark_darwin_arm64 -u http://127.0.0.1:8080/v1 -c 1,5,10 -t 4096
  • Alternative with curl (a minimal Python version of the same parallel load is sketched below):
     for _ in 1 2 3 4 5; do curl http://127.0.0.1:8080/v1/chat/completions \
         -H "Content-Type: application/json" \
         -d '{ "messages": [{"role": "user", "content": "Write me something about frogs."}], "temperature": 0.1 }' & done ; wait
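A minimal Python sketch of the same parallel reproduction, assuming the server from the first step is reachable at http://127.0.0.1:8080; the script and its constants are illustrative (not part of the original report) and use only the standard library:

#!/usr/bin/env python3
# Illustrative repro helper: fire several chat completion requests at
# llama-server concurrently so that multiple slots are active at once.
import concurrent.futures
import json
import urllib.request

URL = "http://127.0.0.1:8080/v1/chat/completions"   # assumed server address
N_PARALLEL = 5                                      # mirrors the curl loop above

def one_request(i: int) -> int:
    payload = {
        "messages": [{"role": "user", "content": "Write me something about frogs."}],
        "temperature": 0.1,
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Blocks until the server answers; if the server hits the GGML_ASSERT and
    # aborts, this raises a URLError instead of returning a status code.
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=N_PARALLEL) as pool:
        for status in pool.map(one_request, range(N_PARALLEL)):
            print("HTTP", status)

Running this against llama-server started with -np 10 produces the same concurrent slot activity as the curl loop above.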

First Bad Commit

No response

Relevant log output

% llama-server -hf LiquidAI/LFM2-2.6B-GGUF --host 0.0.0.0 -c 0 -np 10 --flash-attn on --mlock --no-mmap
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.005 sec
ggml_metal_device_init: GPU name:   Apple M4 Max
ggml_metal_device_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal4  (5002)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 115448.73 MB
common_download_file_single_online: using cached file: /Users/benutzer/Library/Caches/llama.cpp/LiquidAI_LFM2-2.6B-GGUF_LFM2-2.6B-Q4_K_M.gguf
build: 6550 (3ecb2f67) with Apple clang version 17.0.0 (clang-1700.0.13.3) for arm64-apple-darwin24.4.0
system info: n_threads = 12, n_threads_batch = 12, total_threads = 16

system_info: n_threads = 12 (n_threads_batch = 12) / 16 | Metal : EMBED_LIBRARY = 1 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | ACCELERATE = 1 | REPACK = 1 |

main: binding port with default address family
main: HTTP server is listening, hostname: 0.0.0.0, port: 8080, http threads: 15
main: loading model
srv    load_model: loading model '/Users/benutzer/Library/Caches/llama.cpp/LiquidAI_LFM2-2.6B-GGUF_LFM2-2.6B-Q4_K_M.gguf'
llama_model_load_from_file_impl: using device Metal (Apple M4 Max) (unknown id) - 110100 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 266 tensors from /Users/benutzer/Library/Caches/llama.cpp/LiquidAI_LFM2-2.6B-GGUF_LFM2-2.6B-Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = lfm2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Bd63F69Daf9378770Be9430369E1D68F3Fe34282
llama_model_loader: - kv   3:                         general.size_label str              = 2.6B
llama_model_loader: - kv   4:                            general.license str              = other
llama_model_loader: - kv   5:                       general.license.name str              = lfm1.0
llama_model_loader: - kv   6:                       general.license.link str              = LICENSE
llama_model_loader: - kv   7:                               general.tags arr[str,4]       = ["liquid", "lfm2", "edge", "text-gene...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "ar", "zh", "fr", "de", "ja", ...
llama_model_loader: - kv   9:                           lfm2.block_count u32              = 30
llama_model_loader: - kv  10:                        lfm2.context_length u32              = 128000
llama_model_loader: - kv  11:                      lfm2.embedding_length u32              = 2048
llama_model_loader: - kv  12:                   lfm2.feed_forward_length u32              = 10752
llama_model_loader: - kv  13:                  lfm2.attention.head_count u32              = 32
llama_model_loader: - kv  14:               lfm2.attention.head_count_kv arr[i32,30]      = [0, 0, 8, 0, 0, 8, 0, 0, 0, 8, 0, 0, ...
llama_model_loader: - kv  15:                        lfm2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  16:      lfm2.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                            lfm2.vocab_size u32              = 65536
llama_model_loader: - kv  18:                     lfm2.shortconv.l_cache u32              = 3
llama_model_loader: - kv  19:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  20:                         tokenizer.ggml.pre str              = lfm2
llama_model_loader: - kv  21:                      tokenizer.ggml.tokens arr[str,65536]   = ["<|pad|>", "<|startoftext|>", "<|end...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,65536]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  23:                      tokenizer.ggml.merges arr[str,63683]   = ["Ċ Ċ", "Ċ ĊĊ", "ĊĊ Ċ", "Ċ �...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 7
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  28:               tokenizer.ggml.add_sep_token bool             = false
llama_model_loader: - kv  29:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  30:                    tokenizer.chat_template str              = {{- bos_token -}}{%- set system_promp...
llama_model_loader: - kv  31:               general.quantization_version u32              = 2
llama_model_loader: - kv  32:                          general.file_type u32              = 15
llama_model_loader: - type  f32:   99 tensors
llama_model_loader: - type q4_K:  148 tensors
llama_model_loader: - type q6_K:   19 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 1.45 GiB (4.86 BPW)
load: printing all EOG tokens:
load:   - 2 ('<|endoftext|>')
load:   - 7 ('<|im_end|>')
load: special tokens cache size = 507
load: token to piece cache size = 0.3756 MB
print_info: arch             = lfm2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 128000
print_info: n_embd           = 2048
print_info: n_layer          = 30
print_info: n_head           = 32
print_info: n_head_kv        = [0, 0, 8, 0, 0, 8, 0, 0, 0, 8, 0, 0, 0, 8, 0, 0, 0, 8, 0, 0, 0, 8, 0, 0, 8, 0, 0, 8, 0, 0]
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = [0, 0, 4, 0, 0, 4, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 0, 4, 0, 0, 4, 0, 0, 4, 0, 0]
print_info: n_embd_k_gqa     = [0, 0, 512, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 512, 0, 0, 512, 0, 0]
print_info: n_embd_v_gqa     = [0, 0, 512, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 0, 512, 0, 0, 512, 0, 0, 512, 0, 0]
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 10752
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 128000
print_info: rope_finetuned   = unknown
print_info: model type       = 1.2B
print_info: model params     = 2.57 B
print_info: general.name     = Bd63F69Daf9378770Be9430369E1D68F3Fe34282
print_info: vocab type       = BPE
print_info: n_vocab          = 65536
print_info: n_merges         = 63683
print_info: BOS token        = 1 '<|startoftext|>'
print_info: EOS token        = 7 '<|im_end|>'
print_info: EOT token        = 2 '<|endoftext|>'
print_info: PAD token        = 0 '<|pad|>'
print_info: LF token         = 708 'Ċ'
print_info: EOG token        = 2 '<|endoftext|>'
print_info: EOG token        = 7 '<|im_end|>'
print_info: max token length = 30
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 30 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 31/31 layers to GPU
load_tensors:          CPU model buffer size =   105.01 MiB
load_tensors:        Metal model buffer size =  1488.94 MiB
.........................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 10
llama_context: n_ctx         = 128000
llama_context: n_ctx_per_seq = 12800
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = enabled
llama_context: kv_unified    = false
llama_context: freq_base     = 1000000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (12800) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M4 Max
ggml_metal_init: picking default device: Apple M4 Max
ggml_metal_init: use bfloat         = true
ggml_metal_init: use fusion         = true
ggml_metal_init: use concurrency    = true
ggml_metal_init: use graph optimize = true
llama_context:        CPU  output buffer size =     2.50 MiB
llama_kv_cache:      Metal KV buffer size = 20000.00 MiB
llama_kv_cache: size = 20000.00 MiB (128000 cells,   8 layers, 10/10 seqs), K (f16): 10000.00 MiB, V (f16): 10000.00 MiB
llama_memory_recurrent:      Metal RS buffer size =     3.44 MiB
llama_memory_recurrent: size =    3.44 MiB (    10 cells,  30 layers, 10 seqs), R (f32):    3.44 MiB, S (f32):    0.00 MiB
llama_context:      Metal compute buffer size =   477.04 MiB
llama_context:        CPU compute buffer size =   316.58 MiB
llama_context: graph nodes  = 1015
llama_context: graph splits = 5
common_init_from_params: added <|endoftext|> logit bias = -inf
common_init_from_params: added <|im_end|> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 128000
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv          init: initializing slots, n_slots = 10
slot         init: id  0 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  1 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  2 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  3 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  4 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  5 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  6 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  7 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  8 | task -1 | new slot n_ctx_slot = 12800
slot         init: id  9 | task -1 | new slot n_ctx_slot = 12800
srv          init: Enable thinking? 0
main: model loaded
main: chat template, chat_template: {{- bos_token -}}{%- set system_prompt = "" -%}{%- set ns = namespace(system_prompt="") -%}{%- if messages[0]["role"] == "system" -%} {%- set ns.system_prompt = messages[0]["content"] -%} {%- set messages = messages[1:] -%}{%- endif -%}{%- if tools -%} {%- set ns.system_prompt = ns.system_prompt + ("
" if ns.system_prompt else "") + "List of tools: <|tool_list_start|>[" -%} {%- for tool in tools -%} {%- if tool is not string -%} {%- set tool = tool | tojson -%} {%- endif -%} {%- set ns.system_prompt = ns.system_prompt + tool -%} {%- if not loop.last -%} {%- set ns.system_prompt = ns.system_prompt + ", " -%} {%- endif -%} {%- endfor -%} {%- set ns.system_prompt = ns.system_prompt + "]<|tool_list_end|>" -%}{%- endif -%}{%- if ns.system_prompt -%} {{- "<|im_start|>system
" + ns.system_prompt + "<|im_end|>
" -}}{%- endif -%}{%- for message in messages -%} {{- "<|im_start|>" + message["role"] + "
" -}} {%- set content = message["content"] -%} {%- if content is not string -%} {%- set content = content | tojson -%} {%- endif -%} {%- if message["role"] == "tool" -%} {%- set content = "<|tool_response_start|>" + content + "<|tool_response_end|>" -%} {%- endif -%} {{- content + "<|im_end|>
" -}}{%- endfor -%}{%- if add_generation_prompt -%} {{- "<|im_start|>assistant
" -}}{%- endif -%}, example_format: '<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
'
main: server is listening on http://0.0.0.0:8080 - starting the main loop
srv  update_slots: all slots are idle
srv  log_server_r: request: GET /v1/models 192.168.178.109 200
srv  params_from_: Chat format: Content-only
slot get_availabl: id  9 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  9 | task 0 | processing task
slot update_slots: id  9 | task 0 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  9 | task 0 | kv cache rm [0, end)
slot update_slots: id  9 | task 0 | prompt processing progress, n_past = 32, n_tokens = 32, progress = 1.000000
slot update_slots: id  9 | task 0 | prompt done, n_past = 32, n_tokens = 32
slot      release: id  9 | task 0 | stop processing: n_past = 35, truncated = 0
slot print_timing: id  9 | task 0 |
prompt eval time =     205.61 ms /    32 tokens (    6.43 ms per token,   155.64 tokens per second)
       eval time =      26.12 ms /     4 tokens (    6.53 ms per token,   153.12 tokens per second)
      total time =     231.73 ms /    36 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.178.109 200
srv  log_server_r: request: GET / 192.168.178.109 200
srv  log_server_r: request: GET / 192.168.178.109 200
srv  log_server_r: request: GET / 192.168.178.109 200
srv  log_server_r: request: GET / 192.168.178.109 200
srv  log_server_r: request: GET / 192.168.178.109 200
srv  params_from_: Chat format: Content-only
slot get_availabl: id  9 | task 0 | selected slot by lcs similarity, lcs_len = 32, similarity = 0.914 (> 0.100 thold)
slot launch_slot_: id  9 | task 5 | processing task
slot update_slots: id  9 | task 5 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  9 | task 5 | n_past = 32, cache_tokens.size() = 35, seq_id = 9, pos_min = 34, n_swa = 0
slot update_slots: id  9 | task 5 | forcing full prompt re-processing due to lack of cache data (likely due to SWA, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
slot update_slots: id  9 | task 5 | kv cache rm [0, end)
slot update_slots: id  9 | task 5 | prompt processing progress, n_past = 32, n_tokens = 32, progress = 1.000000
slot update_slots: id  9 | task 5 | prompt done, n_past = 32, n_tokens = 32
slot      release: id  9 | task 5 | stop processing: n_past = 4127, truncated = 0
slot print_timing: id  9 | task 5 |
prompt eval time =      32.33 ms /    32 tokens (    1.01 ms per token,   989.88 tokens per second)
       eval time =   34059.48 ms /  4096 tokens (    8.32 ms per token,   120.26 tokens per second)
      total time =   34091.81 ms /  4128 tokens
srv  update_slots: all slots are idle
srv  log_server_r: request: POST /v1/chat/completions 192.168.178.109 200
srv  params_from_: Chat format: Content-only
slot get_availabl: id  8 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  8 | task 4102 | processing task
slot update_slots: id  8 | task 4102 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
srv  params_from_: Chat format: Content-only
srv  params_from_: Chat format: Content-only
slot update_slots: id  8 | task 4102 | kv cache rm [0, end)
slot update_slots: id  8 | task 4102 | prompt processing progress, n_past = 32, n_tokens = 32, progress = 1.000000
slot update_slots: id  8 | task 4102 | prompt done, n_past = 32, n_tokens = 32
srv  params_from_: Chat format: Content-only
srv  params_from_: Chat format: Content-only
slot get_availabl: id  7 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  7 | task 4104 | processing task
slot get_availabl: id  6 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  6 | task 4105 | processing task
slot get_availabl: id  5 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  5 | task 4106 | processing task
slot get_availabl: id  4 | task -1 | selected slot by LRU, t_last = -1
slot launch_slot_: id  4 | task 4107 | processing task
slot update_slots: id  4 | task 4107 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  4 | task 4107 | kv cache rm [0, end)
slot update_slots: id  4 | task 4107 | prompt processing progress, n_past = 32, n_tokens = 33, progress = 1.000000
slot update_slots: id  4 | task 4107 | prompt done, n_past = 32, n_tokens = 33
slot update_slots: id  5 | task 4106 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  5 | task 4106 | kv cache rm [0, end)
slot update_slots: id  5 | task 4106 | prompt processing progress, n_past = 32, n_tokens = 65, progress = 1.000000
slot update_slots: id  5 | task 4106 | prompt done, n_past = 32, n_tokens = 65
slot update_slots: id  6 | task 4105 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  6 | task 4105 | kv cache rm [0, end)
slot update_slots: id  6 | task 4105 | prompt processing progress, n_past = 32, n_tokens = 97, progress = 1.000000
slot update_slots: id  6 | task 4105 | prompt done, n_past = 32, n_tokens = 97
slot update_slots: id  7 | task 4104 | new prompt, n_ctx_slot = 12800, n_keep = 0, n_prompt_tokens = 32
slot update_slots: id  7 | task 4104 | kv cache rm [0, end)
slot update_slots: id  7 | task 4104 | prompt processing progress, n_past = 32, n_tokens = 129, progress = 1.000000
slot update_slots: id  7 | task 4104 | prompt done, n_past = 32, n_tokens = 129
/private/tmp/llama.cpp-20250922-5331-txc1qw/src/llama-kv-cache.cpp:756: GGML_ASSERT(ubatch.seq_id [s*n_tokens][0] == seq_id) failed
(lldb) process attach --pid 45101
error: attach failed: this is a non-interactive debug session, cannot get permission to debug processes.
zsh: abort      llama-server -hf LiquidAI/LFM2-2.6B-GGUF --host 0.0.0.0 -c 0 -np 10  on
