Crash and a poorly working workaround for unsupported AMD Radeon cards #1095

@Efenstor

Description

My AMD Radeon RX 5600 XT (gfx1010) is not on the supported list, and the overrides do not help. I can still switch to the Vulkan backend and get decent acceleration, but there is still a problem.

The problem:

  • Setting just OLLAMA_VULKAN to 1 in the Instance settings, as recommended, causes a silent crash when trying to start a chat.

The workaround:

  • Set OLLAMA_VULKAN to 1 and ROCR_VISIBLE_DEVICES to 0 (see the sketch below).

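For reference, the same overrides can be applied when launching the bundled ollama binary by hand. A minimal Python sketch, assuming the binary path and port shown in the session log below (this is not Alpaca's actual launch code):

    import os
    import subprocess

    # Launch the bundled ollama binary with the workaround's overrides applied.
    # Binary path and port are taken from the session log below; adjust as needed.
    env = dict(os.environ)
    env["OLLAMA_VULKAN"] = "1"          # force the Vulkan backend
    env["ROCR_VISIBLE_DEVICES"] = "0"   # pin the ROCm-visible device to GPU 0
    env["OLLAMA_HOST"] = "http://127.0.0.1:11435"

    subprocess.run(
        ["/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama", "serve"],
        env=env,
        check=True,
    )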
The caveat:

  • It still crashes every second time I try to start a chat. The crash is silent, with no useful debug info in the console.

Console output from a session that ended with a crash:

INFO    [main.py | main] Alpaca version: 9.0.1
INFO    [ollama_instances.py | start] Starting Alpaca's Ollama instance...
INFO    [ollama_instances.py | start] Started Alpaca's Ollama instance
Couldn't find '/home/olaf/.ollama/id_ed25519'. Generating new private key.
Your new public key is: 

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIZCnzz0OOUJw3d8ODhmRP+qxgeS5gybSvlOqP5n/5Jj

time=2026-01-30T14:52:43.068+07:00 level=INFO source=routes.go:1631 msg="server config" env="map[CUDA_VISIBLE_DEVICES:0 GGML_VK_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:0 HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11435 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/extra/ollama OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://127.0.0.1:11435 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false OLLAMA_VULKAN:true ROCR_VISIBLE_DEVICES:0 http_proxy: https_proxy: no_proxy:]"
time=2026-01-30T14:52:43.068+07:00 level=INFO source=images.go:473 msg="total blobs: 5"
time=2026-01-30T14:52:43.069+07:00 level=INFO source=images.go:480 msg="total unused blobs removed: 0"
time=2026-01-30T14:52:43.069+07:00 level=INFO source=routes.go:1684 msg="Listening on 127.0.0.1:11435 (version 0.15.2)"
time=2026-01-30T14:52:43.069+07:00 level=INFO source=runner.go:67 msg="discovering available GPUs..."
time=2026-01-30T14:52:43.069+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" CUDA_VISIBLE_DEVICES=0
time=2026-01-30T14:52:43.069+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" HIP_VISIBLE_DEVICES=0
time=2026-01-30T14:52:43.069+07:00 level=WARN source=runner.go:485 msg="user overrode visible devices" ROCR_VISIBLE_DEVICES=0
time=2026-01-30T14:52:43.069+07:00 level=WARN source=runner.go:489 msg="if GPUs are not correctly discovered, unset and try again"
time=2026-01-30T14:52:43.070+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 45441"
INFO    [ollama_instances.py | start] Ollama version is 0.15.2
time=2026-01-30T14:52:43.097+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 40579"
time=2026-01-30T14:52:43.118+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 37799"
time=2026-01-30T14:52:43.173+07:00 level=INFO source=types.go:42 msg="inference compute" id=00000000-0a00-0000-0000-000000000000 filter_id="" library=Vulkan compute=0.0 name=Vulkan0 description="AMD Radeon RX 5600 XT (RADV NAVI10)" libdirs=ollama,vulkan driver=0.0 pci_id=0000:0a:00.0 type=discrete total="6.0 GiB" available="1.2 GiB"
time=2026-01-30T14:52:43.174+07:00 level=INFO source=routes.go:1725 msg="entering low vram mode" "total vram"="6.0 GiB" threshold="20.0 GiB"
[GIN] 2026/01/30 - 14:52:43 | 200 |     390.848µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2026/01/30 - 14:52:43 | 200 |  201.448636ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2026/01/30 - 14:52:50 | 200 |     272.188µs |       127.0.0.1 | GET      "/api/tags"
time=2026-01-30T14:52:50.301+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --ollama-engine --port 39251"
[GIN] 2026/01/30 - 14:52:50 | 200 |  223.677535ms |       127.0.0.1 | POST     "/api/show"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /media/extra/ollama/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 1
print_info: no_alloc         = 0
print_info: model type       = ?B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
llama_model_load: vocab only - skipping tensors
time=2026-01-30T14:52:50.783+07:00 level=INFO source=server.go:429 msg="starting runner" cmd="/home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/bin/ollama runner --model /media/extra/ollama/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --port 35949"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=sched.go:452 msg="system memory" total="31.3 GiB" free="22.6 GiB" free_swap="29.8 GiB"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=sched.go:459 msg="gpu memory" id=00000000-0a00-0000-0000-000000000000 library=Vulkan available="764.1 MiB" free="1.2 GiB" minimum="457.0 MiB" overhead="0 B"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=server.go:496 msg="loading model" "model layers"=33 requested=-1
time=2026-01-30T14:52:50.784+07:00 level=INFO source=device.go:245 msg="model weights" device=CPU size="4.3 GiB"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=device.go:256 msg="kv cache" device=CPU size="2.0 GiB"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=device.go:262 msg="compute graph" device=Vulkan0 size="1.1 GiB"
time=2026-01-30T14:52:50.784+07:00 level=INFO source=device.go:272 msg="total memory" size="7.4 GiB"
time=2026-01-30T14:52:50.794+07:00 level=INFO source=runner.go:965 msg="starting go runner"
load_backend: loaded CPU backend from /home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/lib/ollama/libggml-cpu-haswell.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 5600 XT (RADV NAVI10) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none
load_backend: loaded Vulkan backend from /home/olaf/.var/app/com.jeffser.Alpaca/data/ollama_installation/lib/ollama/vulkan/libggml-vulkan.so
time=2026-01-30T14:52:50.822+07:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)
time=2026-01-30T14:52:50.823+07:00 level=INFO source=runner.go:1001 msg="Server listening on 127.0.0.1:35949"
time=2026-01-30T14:52:50.827+07:00 level=INFO source=runner.go:895 msg=load request="{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Auto KvSize:16384 KvCacheType: NumThreads:8 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}"
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280389120 total: 6425673728
time=2026-01-30T14:52:50.828+07:00 level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-30T14:52:50.829+07:00 level=INFO source=server.go:1381 msg="waiting for server to become available" status="llm server loading model"
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280389120 total: 6425673728
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280389120 total: 6425673728
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280389120 total: 6425673728
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon RX 5600 XT (RADV NAVI10)) (0000:0a:00.0) - 1221 MiB free
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /media/extra/ollama/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Meta-Llama-3.1
llama_model_loader: - kv   5:                         general.size_label str              = 8B
llama_model_loader: - kv   6:                            general.license str              = llama3.1
llama_model_loader: - kv   7:                               general.tags arr[str,6]       = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8:                          general.languages arr[str,8]       = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 131072
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  25:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  26:                tokenizer.ggml.eos_token_id u32              = 128009
llama_model_loader: - kv  27:                    tokenizer.chat_template str              = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 4.58 GiB (4.89 BPW) 
load: printing all EOG tokens:
load:   - 128001 ('<|end_of_text|>')
load:   - 128008 ('<|eom_id|>')
load:   - 128009 ('<|eot_id|>')
load: special tokens cache size = 256
load: token to piece cache size = 0.7999 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: no_alloc         = 0
print_info: n_ctx_train      = 131072
print_info: n_embd           = 4096
print_info: n_embd_inp       = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 8
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 4
print_info: n_embd_k_gqa     = 1024
print_info: n_embd_v_gqa     = 1024
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 14336
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: n_expert_groups  = 0
print_info: n_group_used     = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 500000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 131072
print_info: rope_yarn_log_mul= 0.0000
print_info: rope_finetuned   = unknown
print_info: model type       = 8B
print_info: model params     = 8.03 B
print_info: general.name     = Meta Llama 3.1 8B Instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 128256
print_info: n_merges         = 280147
print_info: BOS token        = 128000 '<|begin_of_text|>'
print_info: EOS token        = 128009 '<|eot_id|>'
print_info: EOT token        = 128009 '<|eot_id|>'
print_info: EOM token        = 128008 '<|eom_id|>'
print_info: LF token         = 198 'Ċ'
print_info: EOG token        = 128001 '<|end_of_text|>'
print_info: EOG token        = 128008 '<|eom_id|>'
print_info: EOG token        = 128009 '<|eot_id|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280372736 total: 6425673728
ggml_backend_vk_get_device_memory called: uuid 00000000-0a00-0000-0000-000000000000
ggml_backend_vk_get_device_memory called: luid 0x0000000000000000
ggml_hip_get_device_memory searching for device 0000:0a:00.0
ggml_backend_vk_get_device_memory device 0000:0a:00.0 utilizing AMD specific memory reporting free: 1280372736 total: 6425673728
load_tensors: offloading 0 repeating layers to GPU
load_tensors: offloaded 0/33 layers to GPU
load_tensors:  Vulkan_Host model buffer size =  4685.31 MiB
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 16384
llama_context: n_ctx_seq     = 16384
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = auto
llama_context: kv_unified    = false
llama_context: freq_base     = 500000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_seq (16384) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_context:        CPU  output buffer size =     0.50 MiB
llama_kv_cache:        CPU KV buffer size =  2048.00 MiB
llama_kv_cache: size = 2048.00 MiB ( 16384 cells,  32 layers,  1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context:    Vulkan0 compute buffer size =   669.48 MiB
llama_context: Vulkan_Host compute buffer size =    40.02 MiB
llama_context: graph nodes  = 999
llama_context: graph splits = 389 (with bs=512), 1 (with bs=1)
time=2026-01-30T14:52:53.087+07:00 level=INFO source=server.go:1385 msg="llama runner started in 2.30 seconds"
time=2026-01-30T14:52:53.087+07:00 level=INFO source=sched.go:526 msg="loaded runners" count=1
time=2026-01-30T14:52:53.087+07:00 level=INFO source=server.go:1347 msg="waiting for llama runner to start responding"
time=2026-01-30T14:52:53.087+07:00 level=INFO source=server.go:1385 msg="llama runner started in 2.30 seconds"

(Alpaca:2): Gtk-WARNING **: 14:52:53.722: Invalid text buffer iterator: either the iterator is uninitialized, or the characters/paintables/widgets in the buffer have been modified since the iterator was created.
You must use marks, character numbers, or line numbers to preserve a position across buffer modifications.
You can apply tags and insert marks without invalidating your iterators,
but any mutation that affects 'indexable' buffer contents (contents that can be referred to by character offset)
will invalidate all outstanding iterators
[GIN] 2026/01/30 - 14:54:22 | 200 |         1m32s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2026/01/30 - 14:54:25 | 200 |         1m35s |       127.0.0.1 | POST     "/api/chat"

(Alpaca:2): Gtk-CRITICAL **: 14:54:57.598: gtk_text_attributes_ref: assertion 'values != NULL' failed

(Alpaca:2): Pango-CRITICAL **: 14:54:57.598: pango_layout_new: assertion 'context != NULL' failed
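For what it's worth, the Gtk-WARNING above points at a stale Gtk.TextIter being reused after the buffer was modified, which would fit a silent GTK-side crash. A minimal sketch of the mark-based pattern the warning itself recommends, assuming PyGObject with Gtk 4 (the buffer contents and mark name are illustrative):

    import gi
    gi.require_version("Gtk", "4.0")
    from gi.repository import Gtk

    Gtk.init()

    # Marks survive buffer mutations; iterators are invalidated by any
    # insert or delete, which is exactly what the warning complains about.
    buffer = Gtk.TextBuffer()
    buffer.set_text("streamed response so far")

    # Remember a position with a mark instead of holding on to an iterator.
    end_mark = buffer.create_mark("stream-end", buffer.get_end_iter(), False)

    # After any later mutation, derive a fresh iterator from the mark.
    buffer.insert(buffer.get_iter_at_mark(end_mark), " ...more tokens")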
