
Commit 313ca73

xinSky00 and wuhuxiao authored
Modify blend and rerope docs (#593)
CacheBlend: insert a table of contents and modify comments.
ReRoPE: revise the formatting and the `export` errors in the rerope documentation, and add web linking.

Co-authored-by: wuhuxiao <[email protected]>
1 parent 1dad118 commit 313ca73

File tree

7 files changed: +31 additions, -12 deletions

File renamed without changes.

docs/source/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -57,6 +57,7 @@ user-guide/prefix-cache/index
 user-guide/sparse-attention/index
 user-guide/pd-disaggregation/index
 user-guide/metrics/metrics
+user-guide/rerope/rerope
 :::
 
 :::{toctree}
```

docs/source/user-guide/triton-rerope/rerope.md renamed to docs/source/user-guide/rerope/rerope.md

Lines changed: 24 additions & 9 deletions
````diff
@@ -1,26 +1,34 @@
-# Rectified Rotary Position Embeddings (ReRoPE)
+# Rectified Rotary Position Embeddings
 
-Using ReRoPE, we can more effectively extend the context length of LLM without the need for fine-tuning. This is about the Triton implementation of ReRoPE and its integration into the vLLM inference framework.
+Using Rectified Rotary Position Embeddings (ReRoPE), we can more effectively extend the context length of LLM without the need for fine-tuning. This is about the Triton implementation of ReRoPE and its integration into the vLLM inference framework.
+
+<div align="center">
 
 **🚀 ReRoPE | 📄 blog [https://kexue.fm/archives/9708] [https://normxu.github.io/Rethinking-Rotary-Position-Embedding-3]**
 
+
 [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
 [![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)](https://python.org)
 
+</div>
 
 ## 🌟 What is ReRoPE?
 
+<div align="center">
+
 <img src="https://raw.githubusercontent.com/bojone/rerope/main/idea.png" width=750>
 
+</div>
+
 This approach combines direct extrapolation with position interpolation. A window size $w$ is established, where a position interval of $1$ is used within the window, and an interval of $\frac{1}{k}$ is applied outside. As $k \to \infty$, this simplifies to the form illustrated above. Under this scheme, the position encoding range never exceeds $w$ regardless of input length, potentially enabling support for arbitrarily long contexts.
 
 The attention score calculation formulas are as follows,
 
 $$
-\begin{align}
+\begin{aligned}
 score_{ij}^{1} &= (q_iR_i)(k_jR_j)^T, && i-j<w \\
 score_{ij}^{2} &= (q_iR_w)(k_j)^T, && i-j\ge w
-\end{align}
+\end{aligned}
 $$
 
 ReRoPE extends context length effectively but requires double attention—local within w and global compressed—significantly reducing throughput. Despite this overhead, it remains valuable for training-free long contexts, especially when combined with local attention windows to balance efficiency.
@@ -37,7 +45,14 @@ ReRoPE extends context length effectively but requires double attention—local
 
 ## 🏆 Results
 
-![alt text](results.png)
+<div align="center">
+
+### The Experiment Results
+![ReRoPE Results](../../_static/images/rerope_performace.png)
+
+The experiment is based on a hybrid Transformer-GAU (Gated Attention Unit) model with a size of 100M parameters. $logn$ indicates we add the scale factor $log n$ at pretraining stage; $log n^{*}$ denotes we apply the scale factor to the attention matrix only for text exceeding the max sequence length, without any pretraining; $w256$ denotes the rerope windopw $w=256$.
+
+</div>
 
 ## 🚀 Quick Start
 
@@ -46,12 +61,12 @@ ReRoPE extends context length effectively but requires double attention—local
 For installation instructions, please refer to the UCM's top-level README. Once UCM is installed, ReRoPE is naturally supported by running the following example python scripts.
 
 ```python
-export VLLM_ATTENTION_BACKEND = TRITON_ATTN_VLLM_V1
-export VLLM_USE_REROPE = true
+export VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1
+export VLLM_USE_REROPE=true
 export DATA_DIR=/home/data/kv_cache
 export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
-export REROPE_WINDOW = 32768
-export TRAINING_LENGTH = 32768
+export REROPE_WINDOW=32768
+export TRAINING_LENGTH=32768
 
 python examples/offline_inference_rerope.py
 ```
````
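The formula edit in this file is purely notational (`align` to `aligned`); the rule it describes is that relative positions stay exact inside the window $w$ and are capped at $w$ beyond it. Below is a minimal sketch of that rectification for the $k \to \infty$ case. It is not part of this commit; the function name and values are illustrative only.

```python
import torch

def rectified_relative_positions(seq_len: int, window: int) -> torch.Tensor:
    """Toy version of the ReRoPE position rule: keep the relative distance
    i - j while it is below the window, clamp it to the window otherwise,
    so the encoded position never exceeds w for any input length."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (column)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (row)
    rel = i - j                             # plain relative distance
    return rel.clamp(max=window)            # i - j >= w collapses to w

# For window=3, the causal (lower-triangular) part never goes above 3,
# no matter how long the sequence grows.
print(rectified_relative_positions(seq_len=6, window=3))
```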

docs/source/user-guide/sparse-attention/cacheblend.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -3,7 +3,7 @@
 
 ![blend_scheme.jpg](../../_static/images/blend_scheme.jpg)
 
-**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper **
+**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper**
 
 [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
 [![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)](https://python.org)
@@ -31,7 +31,7 @@ CacheBlend reduces TTFT by 2.2 ~ 3.3× and increases throughput by 2.8 ~ 5× und
 1. **🔐 Chunk Hash Encoding**: Similar as prefix hash encoder, hash all blocks in each chunk from the same hash meta beginning.
 2. **⚡ Combine Prefix Cache and Chunk Cache**: Since chunk cache and native prefix cache share the same hash space, ucm first performs prefix cache lookup to fetch fully reused cache and then conduct chunk cache lookup to fetch the candidate cache for blending.
 3. **🎯 Delta-Rope PostProcess**: Rectify loaded chunk cache according to their position in the new request.
-3. **🔍 Integrate Cache Blend and First Token Generation**: Construct compute mask and attention meta according to HKVD tokens, cache miss tokens and suffix tokens, then compute their kv cache in a single model forward stage.
+3. **🔍 Integrate Cache Blend and First Token Generation**: Construct compute mask and attention meta according to the HKVD tokens, cache miss tokens and suffix tokens, then compute their kv cache in a single model forward stage.
 4. **🚀 Comprehensive Hook for LLM Forward Pipeline**: Based on ucm sparse module, blend module sparse the prefill tokens not only in attention stage but also in ffn, layer stage.
 
 ## 🚀 Quick Start
```
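The "Integrate Cache Blend and First Token Generation" step edited above hinges on a compute mask that selects the HKVD (high KV deviation) tokens, the cache-miss tokens and the request suffix for recomputation. The sketch below shows one way such a mask could be assembled; it is not taken from the UCM codebase, and the function name and indices are made up for illustration.

```python
import torch

def build_compute_mask(num_tokens: int,
                       hkvd_idx: torch.Tensor,
                       cache_miss_idx: torch.Tensor,
                       suffix_idx: torch.Tensor) -> torch.Tensor:
    """Mark the prefill tokens that still need a real forward pass; every
    other token reuses the blended chunk KV cache."""
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    mask[hkvd_idx] = True        # tokens whose cached KV deviates too much
    mask[cache_miss_idx] = True  # tokens with no cached KV at all
    mask[suffix_idx] = True      # the new suffix of the request
    return mask

# 16 prefill tokens: recompute 2 HKVD tokens, 3 cache misses and the suffix.
mask = build_compute_mask(
    16,
    hkvd_idx=torch.tensor([4, 9]),
    cache_miss_idx=torch.tensor([10, 11, 12]),
    suffix_idx=torch.arange(13, 16),
)
print(mask.sum().item(), "of 16 tokens go through the model forward")
```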

docs/source/user-guide/sparse-attention/index.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -41,4 +41,5 @@ esa
 gsa
 kvcomp
 kvstar
+cacheblend
 :::
```

examples/offline_inference_blend.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -186,7 +186,7 @@ def main():
     # choose one data row in LongBenchV1 (wikimqa)
     assert os.path.isfile(
         path_to_dataset
-    ), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/path/to/longbench/multifieldqa_zh.jsonl`"
+    ), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/home/data/Longbench/data/2wikimqa.jsonl`"
     with open(path_to_dataset, "r") as f:
         lines = f.readlines()
     dataset_row = json.loads(lines[0])
```

ucm/sparse/blend/blend.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -189,6 +189,8 @@ def build_sparse_meta(
 
     def _update_attn_metadata(self):
         # update attn_metadata, cause we sparse the prefill tokens
+        # golden kv caches are available in current blend layer, so maybe we should cache all of them
+        # so maybe we should modify slot_mapping at the beginning of next layer/attn
         self.attn_metadata.slot_mapping = self.attn_metadata.slot_mapping[
             self.blend_req_metas.compute_mask
         ]
```
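For context on the code the new comments annotate: the slot mapping lists one KV-cache slot per scheduled token, and indexing it with the blend compute mask keeps only the slots of tokens that are actually recomputed. A standalone toy illustration with made-up values, not UCM code:

```python
import torch

# One KV-cache slot per prefill token (illustrative values).
slot_mapping = torch.tensor([100, 101, 102, 103, 104, 105])

# Blend compute mask: only tokens 1, 4 and 5 are recomputed this pass,
# so only their freshly computed KV entries need cache slots.
compute_mask = torch.tensor([False, True, False, False, True, True])

# Boolean indexing drops the slots of tokens whose KV is reused.
slot_mapping = slot_mapping[compute_mask]
print(slot_mapping)  # tensor([101, 104, 105])
```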

0 commit comments
