1 parent a72cb3b commit adea1c9
src/llama-graph.cpp
@@ -1245,7 +1245,7 @@ llm_graph_input_attn_no_cache * llm_graph_context::build_attn_inp_no_cache() const
 llm_graph_input_attn_no_cache * llm_graph_context::build_attn_inp_no_cache_iswa() const {
     // Default sliding window size - can be made configurable via cparams
     const int n_swa = 128;
-
+
     auto inp = std::make_unique<llm_graph_input_attn_no_cache>(hparams, cparams, n_swa);
 
     // note: there is no KV cache, so the number of KV values is equal to the number of tokens in the batch
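
The touched code sits in build_attn_inp_no_cache_iswa(), which sets up sliding-window attention without a KV cache, so the mask only ranges over the tokens of the current batch. As a rough illustration of that constraint (a minimal sketch under assumptions, not the llama.cpp implementation; build_swa_mask, its row-major layout, and its parameters are hypothetical), a causal mask limited to the last n_swa positions could look like this:

```cpp
// Sketch only: sliding-window causal mask when there is no KV cache,
// so the KV length equals the number of tokens in the batch (n_kv == n_tokens).
#include <cmath>
#include <cstdint>
#include <vector>

// Returns a row-major [n_tokens x n_tokens] mask: 0.0f where token i may attend
// to token j, -INFINITY where it may not. n_swa mirrors the constant in the diff.
std::vector<float> build_swa_mask(int32_t n_tokens, int32_t n_swa) {
    std::vector<float> mask(size_t(n_tokens) * n_tokens, -INFINITY);
    for (int32_t i = 0; i < n_tokens; ++i) {
        for (int32_t j = 0; j <= i; ++j) {       // causal: attend to past/self only
            if (i - j < n_swa) {                 // and only within the sliding window
                mask[size_t(i) * n_tokens + j] = 0.0f;
            }
        }
    }
    return mask;
}
```

With a KV cache the column dimension would instead cover all cached positions; here it collapses to the batch size, which is what the note in the diff points out.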