llama: Add generic abort to token_decode_internal #10571
Mirrored from the commit message:
Aborting a generation is required if a user wants to decode requests sequentially; otherwise the second request segfaults because the first one has not finished yet.
Fortunately, llama.cpp already has a callback for checking whether the user has aborted during token decoding. However, it is only honored by the GGML backends for CPU and Metal; other backends such as CUDA are out of luck.
Therefore, add a backend-agnostic check that runs once per batch. This lets users cancel a request without waiting for the entire prompt-processing operation to finish.
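Conceptually, the change amounts to polling the abort callback on the host between batches, so it does not depend on which backend executes the compute graph. Below is a minimal, self-contained sketch of that pattern; the names `decode_job`, `decode_with_per_batch_abort`, and the return codes are illustrative stand-ins, not the actual llama.cpp internals:

```cpp
#include <cstdint>
#include <vector>

// Mirrors the shape of ggml_abort_callback: returning true requests an abort.
using abort_callback_t = bool (*)(void * data);

// Illustrative stand-in for the state the real decode loop operates on.
struct decode_job {
    std::vector<int32_t> tokens;   // full prompt
    uint32_t             n_batch;  // tokens processed per iteration
};

// Process the prompt batch by batch, checking the abort callback on the host
// before each batch. Because the check happens outside the compute graph, it
// behaves the same regardless of the backend (CPU, Metal, CUDA, ...).
int decode_with_per_batch_abort(const decode_job & job,
                                abort_callback_t abort_cb, void * abort_data) {
    for (size_t i = 0; i < job.tokens.size(); i += job.n_batch) {
        if (abort_cb && abort_cb(abort_data)) {
            return 2; // illustrative "aborted" return code
        }
        // ... evaluate the compute graph for tokens [i, i + n_batch) here ...
    }
    return 0; // success
}
```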
An example test is decoding an 8000-token prompt with a batch size of 2048 and then aborting. In this case the abort lands sooner, since the callback is checked after every batch instead of only once all 8000 tokens have been processed.
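For reference, a hedged sketch of how a caller could wire up the abort callback through the public API. The `g_cancel_requested` flag, `should_abort` callback, and `decode_with_abort` helper are made up for illustration, and the exact return code reported for an aborted decode is an assumption that depends on the llama.cpp version in use:

```cpp
#include <atomic>
#include "llama.h"

// Flag flipped by another thread (or a signal handler) when the user cancels.
static std::atomic<bool> g_cancel_requested{false};

// Matches ggml_abort_callback: returning true asks the computation to stop.
static bool should_abort(void * /*user_data*/) {
    return g_cancel_requested.load();
}

// Illustrative helper: decode a prepared batch while honoring cancellation.
// With the per-batch check, a long prompt is abandoned after the current
// batch instead of only after the whole prompt has been processed.
static int decode_with_abort(llama_context * ctx, llama_batch batch) {
    llama_set_abort_callback(ctx, should_abort, /*abort_callback_data=*/nullptr);

    const int ret = llama_decode(ctx, batch);
    if (ret != 0) {
        // A non-zero return may indicate the decode was aborted
        // (assumption: exact codes vary across llama.cpp versions).
    }
    return ret;
}
```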
Temporarily solves #10509 and may be related to #6421