
Commit 3762f43

add comment [no ci]
1 parent 1de69b8 commit 3762f43

File tree

1 file changed: +1 −0 lines changed


src/llama-context.cpp

Lines changed: 1 addition & 0 deletions
@@ -332,6 +332,7 @@ llama_context::llama_context(
             LLAMA_LOG_WARN("%s: layer %d is assigned to device %s but the Flash Attention tensor "
                     "is assigned to device %s (usually due to missing support)\n",
                     __func__, il, ggml_backend_dev_name(device_kv), ggml_backend_dev_name(device_fa));
+            // FIXME: fa_device_mismatch logic is wrong for --no-kv-offload, but this is broken anyways
             fa_device_mismatch = true;
             break;
         }
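
For context, the FIXME concerns a per-layer check that compares the device a layer's KV data is assigned to (device_kv) with the device its Flash Attention tensor was scheduled on (device_fa), and flags a mismatch. Below is a minimal, self-contained sketch of that kind of check, not the actual llama.cpp code: the Layer struct, the device-name strings, and the example values are hypothetical stand-ins for the real ggml backend types (ggml_backend_dev_t, ggml_backend_dev_name), reconstructed only from the lines visible in this diff.

// Minimal sketch (assumption, not llama.cpp source): per-layer device-mismatch check.
#include <cstdio>
#include <string>
#include <vector>

struct Layer {
    std::string device_kv; // device the layer's KV tensors are assigned to (hypothetical field)
    std::string device_fa; // device the Flash Attention tensor ended up on (hypothetical field)
};

int main() {
    // Example assignment: layer 1 has its FA tensor on a different device than its KV data.
    std::vector<Layer> layers = {
        {"CUDA0", "CUDA0"},
        {"CUDA1", "CPU"},
    };

    bool fa_device_mismatch = false;
    for (size_t il = 0; il < layers.size(); ++il) {
        if (layers[il].device_kv != layers[il].device_fa) {
            std::fprintf(stderr,
                "%s: layer %zu is assigned to device %s but the Flash Attention tensor "
                "is assigned to device %s (usually due to missing support)\n",
                __func__, il, layers[il].device_kv.c_str(), layers[il].device_fa.c_str());
            // The FIXME in this commit notes that with --no-kv-offload the KV device is the
            // host, so a comparison like this can report a mismatch that is not meaningful.
            fa_device_mismatch = true;
            break;
        }
    }

    if (fa_device_mismatch) {
        std::fprintf(stderr, "device mismatch detected for Flash Attention\n");
    }
    return 0;
}

In the sketch the flag is only printed; what llama.cpp actually does with fa_device_mismatch is outside the visible diff, so no behavior beyond setting the flag is assumed here.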
