
Conversation

@danbev (Member) commented on Feb 13, 2025

This commit updates the comment in llama_kv_cache.h to reflect the renaming of llama_decode_internal to llama_decode_impl.
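
For context, the change is the kind of one-line comment fix sketched below. The surrounding comment wording is illustrative only (assumed, not quoted from llama_kv_cache.h); per the PR description, the actual change simply replaces the old function name with the new one:

```diff
 // llama_kv_cache.h -- illustrative sketch; actual comment text may differ
-// ... used by llama_decode_internal (see llama.cpp)
+// ... used by llama_decode_impl (see llama.cpp)
```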

@ggerganov merged commit 3e69319 into ggml-org:master on Feb 13, 2025 (1 check passed)
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request on Feb 13, 2025
orca-zhang pushed a commit to orca-zhang/llama.cpp that referenced this pull request on Feb 26, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request on Mar 8, 2025
@danbev deleted the kv-cache-decode-internal-comment branch on August 13, 2025
