commit 3db4cb0 (1 parent: 9822f2c)
src/llama-kv-cache.cpp
@@ -761,6 +761,7 @@ ggml_tensor * llama_kv_cache_unified::build_rope_shift(
         // @ngxson : this is a workaround
         // for M-RoPE, we want to rotate the whole vector when doing KV shift
         // a normal RoPE should work, we just need to use the correct ordering
+        // ref: https://github.com/ggml-org/llama.cpp/pull/13870
         ? LLAMA_ROPE_TYPE_NEOX
         : hparams.rope_type;
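The context the hunk truncates is the rope-type selection that the comments describe: when the model uses M-RoPE, the KV shift falls back to NEOX-style RoPE so the whole vector gets rotated. Below is a minimal standalone sketch of that selection; the enum values, the hparams struct, and the rope_type_for_kv_shift helper are illustrative stand-ins for llama.cpp internals, and the comparison against LLAMA_ROPE_TYPE_MROPE is an assumption, since the diff does not show the condition line itself.

```cpp
#include <cstdio>

// Hypothetical stand-ins for the llama.cpp rope-type constants; the real
// definitions and values live in the library headers, not in this diff.
enum llama_rope_type {
    LLAMA_ROPE_TYPE_NORM,
    LLAMA_ROPE_TYPE_NEOX,
    LLAMA_ROPE_TYPE_MROPE,
};

struct hparams_t {
    llama_rope_type rope_type;
};

// Pick the rope type to use when shifting the KV cache: for M-RoPE models,
// use NEOX-style ordering so the whole vector is rotated (the workaround the
// diff comments describe); otherwise keep the model's own rope type.
static llama_rope_type rope_type_for_kv_shift(const hparams_t & hparams) {
    return hparams.rope_type == LLAMA_ROPE_TYPE_MROPE
        ? LLAMA_ROPE_TYPE_NEOX
        : hparams.rope_type;
}

int main() {
    hparams_t hp = { LLAMA_ROPE_TYPE_MROPE };
    printf("rope type used for KV shift: %d\n", rope_type_for_kv_shift(hp));
    return 0;
}
```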