
Conversation

leok7v commented Aug 9, 2025

I believe it must be llama_memory_seq_rm(mem, 0, ...) because of this assertion in llama_kv_cache_unified::seq_rm:

GGML_ASSERT(seq_id >= 0 && (size_t) seq_id < seq_to_stream.size());

See: 745aa53#r163727848
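For reference, a minimal self-contained sketch of the failure mode (the check mirrors the GGML_ASSERT above; the rest is illustrative stand-in code, not the actual llama.cpp source):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef int32_t llama_seq_id;

/* Stand-in for the bounds check in llama_kv_cache_unified::seq_rm; assert()
   plays the role of GGML_ASSERT, and n_streams that of seq_to_stream.size(). */
static void seq_rm_check(llama_seq_id seq_id, size_t n_streams) {
    assert(seq_id >= 0 && (size_t) seq_id < n_streams);
    printf("seq_id %d accepted\n", seq_id);
}

int main(void) {
    seq_rm_check(0, 1);  /* the value this PR switches to: passes */
    seq_rm_check(-1, 1); /* the old wildcard value: fails the assert and aborts */
    return 0;
}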
Copilot AI review requested due to automatic review settings August 9, 2025 21:35

Copilot AI left a comment


Pull Request Overview

This PR fixes incorrect sequence ID arguments in llama_memory_seq_rm calls, changing them from -1 to 0 to satisfy the implementation's assertion that sequence IDs be non-negative.

  • Changes the sequence ID argument from -1 to 0 in two llama_memory_seq_rm calls
  • Addresses assertion failures in llama_kv_cache_unified::seq_rm, which requires seq_id >= 0

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File                              Description
tools/main/main.cpp               Updates the sequence ID argument in a memory sequence removal call
examples/lookahead/lookahead.cpp  Updates the sequence ID argument in a memory sequence removal call


tools/main/main.cpp:

  // remove any "future" tokens that we might have inherited from the previous session
- llama_memory_seq_rm(mem, -1, n_matching_session_tokens, -1);
+ llama_memory_seq_rm(mem, 0, n_matching_session_tokens, -1);

Copilot AI Aug 9, 2025


The fix changes the sequence ID from -1 to 0, but this may alter the intended behavior. Please verify that sequence ID 0 is the correct target sequence for removal, as this could affect which cached tokens are cleared.

Suggested change:
- llama_memory_seq_rm(mem, 0, n_matching_session_tokens, -1);
+ llama_memory_seq_rm(mem, -1, n_matching_session_tokens, -1);

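For context on this suggestion: llama.h documents negative arguments to the seq_rm family as wildcards (seq_id < 0 matches any sequence; p0 < 0 means "from position 0" and p1 < 0 means "up to infinity"). A small sketch of the two call patterns under discussion, assuming a single-sequence example where only sequence 0 is ever populated:

#include "llama.h"

/* Drop all cached tokens at positions >= n_matching_session_tokens;
   p1 == -1 extends the range to infinity. */
static void drop_future_tokens(llama_memory_t mem, llama_pos n_matching_session_tokens) {
    llama_memory_seq_rm(mem, 0, n_matching_session_tokens, -1);     /* sequence 0 only */
 /* llama_memory_seq_rm(mem, -1, n_matching_session_tokens, -1); */ /* wildcard: any sequence */
}

With a single sequence the two forms clear the same cells, so the PR's change sidesteps the assert without altering behavior in these examples; the wildcard form is what tripped the GGML_ASSERT before #15226.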
examples/lookahead/lookahead.cpp:

  // KV cache management
  // if no verification token matched, we simply remove all cells from this batch -> no fragmentation
- llama_memory_seq_rm(mem, -1, n_past, -1);
+ llama_memory_seq_rm(mem, 0, n_past, -1);

Copilot AI Aug 9, 2025


The fix changes the sequence ID from -1 to 0, but this may alter the intended behavior. Please verify that sequence ID 0 is the correct target sequence for removal, as this could affect which cached tokens are cleared in the lookahead algorithm.

Suggested change:
- llama_memory_seq_rm(mem, 0, n_past, -1);
+ llama_memory_seq_rm(mem, -1, n_past, -1);

leok7v referenced this pull request Aug 9, 2025
* llama : deprecate llama_kv_self_ API

ggml-ci

* llama : allow llama_memory_(nullptr)

ggml-ci

* memory : add flag for optional data clear in llama_memory_clear

ggml-ci
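
The commit message above names two API changes; a small usage sketch, assuming the signatures in current llama.h (the comments paraphrase the commit message, not the implementation):

#include "llama.h"

static void reset_cache(llama_memory_t mem) {
    /* "allow llama_memory_(nullptr)": the llama_memory_* entry points accept a
       NULL memory, so callers need not guard models that expose no KV cache. */
    /* "flag for optional data clear": the second argument selects whether the
       data buffers are cleared along with the cache metadata. */
    llama_memory_clear(mem, /*data=*/true);  /* clear metadata and data buffers */
    llama_memory_clear(mem, /*data=*/false); /* clear metadata only */
}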
compilade (Collaborator) commented:

@leok7v
Did #15226 fix this problem?

leok7v (Author) commented Aug 12, 2025

Yes

ggerganov (Member) commented:

Superseded by #15226

ggerganov closed this Aug 12, 2025