Commit 9ef5d08

llama : add comments about KV cache state after error

1 parent: 0638c44

File tree

1 file changed: +2 additions, -2 deletions


include/llama.h

Lines changed: 2 additions & 2 deletions
@@ -797,15 +797,15 @@ extern "C" {
     // Processes a batch of tokens with the encoder part of the encoder-decoder model.
     // Stores the encoder output internally for later use by the decoder cross-attention layers.
     //   0 - success
-    // < 0 - error
+    // < 0 - error. the KV cache state is restored to the state before this call
     LLAMA_API int32_t llama_encode(
             struct llama_context * ctx,
               struct llama_batch   batch);

     // Positive return values do not mean a fatal error, but rather a warning.
     //   0 - success
     //   1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
-    // < 0 - error
+    // < 0 - error. the KV cache state is restored to the state before this call
     LLAMA_API int32_t llama_decode(
             struct llama_context * ctx,
               struct llama_batch   batch);
