Commit 9d262f4

server : remove swa_full warning (ggml-org#15399)
1 parent f0d3c74

File tree

1 file changed: +0 −5 lines changed


src/llama-context.cpp

Lines changed: 0 additions & 5 deletions
@@ -145,11 +145,6 @@ llama_context::llama_context(
                 __func__, n_ctx_per_seq, hparams.n_ctx_train);
     }
 
-    if (!params.swa_full && cparams.n_seq_max > 1 && hparams.is_swa_any()) {
-        LLAMA_LOG_WARN("%s: requested n_seq_max (%u) > 1, but swa_full is not enabled -- performance may be degraded: %s\n",
-                __func__, cparams.n_seq_max, "https://github.com/ggml-org/llama.cpp/pull/13845#issuecomment-2924800573");
-    }
-
     if (!hparams.vocab_only) {
         // GPU backends
         for (auto * dev : model.devices) {
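With the warning removed, callers that run several sequences against a sliding-window-attention model can still opt into the full-size SWA cache explicitly. Below is a minimal sketch (not part of this commit), assuming the n_seq_max and swa_full fields of llama_context_params exposed by llama.h; the model path and sequence count are placeholders.

// Sketch: create a context with multiple sequences and the full-size SWA cache.
// Field and function names follow the public llama.h API; "model.gguf" is a
// placeholder path.
#include "llama.h"

#include <cstdio>

int main() {
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (model == nullptr) {
        std::fprintf(stderr, "failed to load model\n");
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    cparams.n_seq_max = 4;    // serve several sequences in parallel
    cparams.swa_full  = true; // keep the full-size SWA cache instead of the rolling window

    llama_context * ctx = llama_init_from_model(model, cparams);
    if (ctx == nullptr) {
        std::fprintf(stderr, "failed to create context\n");
        llama_model_free(model);
        return 1;
    }

    // ... decode with llama_decode(), using distinct sequence ids ...

    llama_free(ctx);
    llama_model_free(model);
    return 0;
}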
