
Commit 5b4e000

llama : update docs about llama_decode [no_ci]
1 parent 5923822 commit 5b4e000

File tree

1 file changed: +6 -3 lines changed

include/llama.h

Lines changed: 6 additions & 3 deletions
@@ -943,9 +943,12 @@ extern "C" {
     // Requires KV cache.
     // For encode-decoder contexts, processes the batch using the decoder.
     // Positive return values does not mean a fatal error, but rather a warning.
-    //   0 - success
-    //   1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
-    // < 0 - error. the KV cache state is restored to the state before this call
+    //   0 - success
+    //   1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
+    //       the KV cache is restored to the state before this call
+    //   2 - aborted. the KV cache is in undefined state
+    //  -1 - invalid input batch. the KV cache is unmodified
+    // < -1 - error. the KV cache is in undefined state
     LLAMA_API int32_t llama_decode(
             struct llama_context * ctx,
             struct llama_batch batch);
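
Below is a minimal sketch, not part of this commit, of how a caller might act on the updated return codes of llama_decode. The try_decode helper is a hypothetical name, and the llama_context and llama_batch are assumed to have been created elsewhere with the usual llama.cpp setup calls.

    #include "llama.h"

    #include <stdbool.h>
    #include <stdio.h>

    // Hypothetical helper: run llama_decode once and interpret the result
    // according to the updated documentation above.
    static bool try_decode(struct llama_context * ctx, struct llama_batch batch) {
        const int32_t ret = llama_decode(ctx, batch);

        if (ret == 0) {
            return true;  // success
        }
        if (ret == 1) {
            // no KV slot for the batch; the KV cache was restored to its previous
            // state, so it is safe to retry with a smaller batch or a larger context
            fprintf(stderr, "decode: no KV slot for the batch, consider retrying\n");
            return false;
        }
        if (ret == 2) {
            // aborted (for example via an abort callback); KV cache state is undefined
            fprintf(stderr, "decode: aborted, KV cache is in an undefined state\n");
            return false;
        }
        if (ret == -1) {
            // invalid input batch; the KV cache is unmodified
            fprintf(stderr, "decode: invalid input batch\n");
            return false;
        }
        // ret < -1: error; KV cache state is undefined
        fprintf(stderr, "decode: error %d, KV cache is in an undefined state\n", ret);
        return false;
    }

The point of the finer-grained codes is that a caller can retry only the cases where the KV cache is documented as intact (1 and -1) and tear down or reset the context for the undefined-state cases (2 and < -1).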
