1 change: 1 addition & 0 deletions docs/source/index.md
user-guide/prefix-cache/index
user-guide/sparse-attention/index
user-guide/pd-disaggregation/index
user-guide/metrics/metrics
user-guide/rerope/rerope
:::

:::{toctree}
docs/source/user-guide/rerope/rerope.md
# Rectified Rotary Position Embeddings

Using Rectified Rotary Position Embeddings (ReRoPE), we can extend the context length of an LLM more effectively, without any fine-tuning. This page covers the Triton implementation of ReRoPE and its integration into the vLLM inference framework.

<div align="center">

**🚀 ReRoPE | 📄 Blog: [kexue.fm/archives/9708](https://kexue.fm/archives/9708) · [Rethinking Rotary Position Embedding](https://normxu.github.io/Rethinking-Rotary-Position-Embedding-3)**

[![License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
[![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)](https://python.org)

</div>

## 🌟 What is ReRoPE?

<div align="center">

<img src="https://raw.githubusercontent.com/bojone/rerope/main/idea.png" width=750>

</div>

This approach combines direct extrapolation with position interpolation. A window size $w$ is established, where a position interval of $1$ is used within the window, and an interval of $\frac{1}{k}$ is applied outside. As $k \to \infty$, this simplifies to the form illustrated above. Under this scheme, the position encoding range never exceeds $w$ regardless of input length, potentially enabling support for arbitrarily long contexts.

The attention scores are computed as follows:

$$
\begin{aligned}
score_{ij}^{1} &= (q_iR_i)(k_jR_j)^T, && i-j < w \\
score_{ij}^{2} &= (q_iR_w)(k_j)^T, && i-j \ge w
\end{aligned}
$$

ReRoPE extends context length effectively, but it requires computing attention twice (a local pass within $w$ and a global pass with the rectified position), which significantly reduces throughput. Despite this overhead, it remains valuable for training-free long contexts, especially when combined with local attention windows to balance efficiency. A sketch of the two score branches follows.
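
Below is a minimal NumPy sketch of the idea (illustrative only: UCM's actual implementation is a Triton kernel integrated into vLLM, and the interleaved RoPE layout and function names here are assumptions). It evaluates both score branches for every query/key pair and selects between them based on whether $i-j < w$:

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply interleaved RoPE to x of shape (seq, dim) at integer positions pos of shape (seq,)."""
    d = x.shape[-1]
    inv_freq = 1.0 / (base ** (np.arange(0, d, 2) / d))   # (d/2,)
    ang = pos[:, None] * inv_freq[None, :]                # (seq, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x, dtype=float)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def rerope_scores(q, k, w):
    """Combine the two ReRoPE score branches for a causal layout. q, k: (seq, dim)."""
    seq = q.shape[0]
    pos = np.arange(seq)
    # Branch 1: RoPE on both q and k, so the score depends on the relative rotation R_{i-j}.
    s1 = rope_rotate(q, pos) @ rope_rotate(k, pos).T
    # Branch 2: rotate q by the fixed position w and leave k unrotated, giving R_w for all pairs.
    s2 = rope_rotate(q, np.full(seq, w)) @ k.T
    rel = pos[:, None] - pos[None, :]            # i - j
    scores = np.where(rel < w, s1, s2)           # i - j < w -> branch 1, otherwise branch 2
    return np.where(rel >= 0, scores, -np.inf)   # causal mask
```

The `np.where` selection makes the rectification explicit: every pair with $i-j \ge w$ is scored as if its relative distance were exactly $w$, so the effective position range never exceeds the window.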

## 🏆 Results

<div align="center">

### Experiment Results

![ReRoPE Results](../../_static/images/rerope_performace.png)

The experiment is based on a hybrid Transformer-GAU (Gated Attention Unit) model with 100M parameters. $\log n$ indicates that the $\log n$ scale factor is added at the pretraining stage; $\log n^{*}$ denotes applying the scale factor to the attention matrix only for text exceeding the maximum sequence length, without any pretraining; $w256$ denotes the ReRoPE window $w=256$.

</div>

## 🚀 Quick Start

For installation instructions, please refer to UCM's top-level README. Once UCM is installed, ReRoPE is supported out of the box; set the following environment variables and run the example script.

```bash
export VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1
export VLLM_USE_REROPE=true
export DATA_DIR=/home/data/kv_cache
export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
export REROPE_WINDOW=32768
export TRAINING_LENGTH=32768

python examples/offline_inference_rerope.py
```
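
Here `REROPE_WINDOW` corresponds to the rectification window $w$, and `TRAINING_LENGTH` presumably to the context length the model was pretrained with; adjust both to match the model specified by `MODEL_PATH`.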
4 changes: 2 additions & 2 deletions docs/source/user-guide/sparse-attention/cacheblend.md

![blend_scheme.jpg](../../_static/images/blend_scheme.jpg)

**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper**

[![License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
[![Python](https://img.shields.io/badge/Python-3.10+-blue.svg)](https://python.org)
1. **🔐 Chunk Hash Encoding**: Similar to the prefix hash encoder, hash all blocks in each chunk starting from the same hash meta, so a chunk's hashes do not depend on the text that precedes it.
2. **⚡ Combine Prefix Cache and Chunk Cache**: Since the chunk cache and the native prefix cache share the same hash space, UCM first performs a prefix cache lookup to fetch fully reusable cache, then a chunk cache lookup to fetch candidate cache for blending (a sketch of steps 1 and 2 follows this list).
3. **🎯 Delta-RoPE PostProcess**: Rectify the loaded chunk cache according to its position in the new request.
4. **🔍 Integrate Cache Blend and First Token Generation**: Construct the compute mask and attention metadata according to the HKVD tokens, cache-miss tokens, and suffix tokens, then compute their KV cache in a single model forward pass.
5. **🚀 Comprehensive Hook for the LLM Forward Pipeline**: Based on the UCM sparse module, the blend module sparsifies the prefill tokens not only in the attention stage but also in the FFN and layer stages.
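
A minimal Python sketch of steps 1 and 2 (the block size, hashing scheme, and `cache_store` dictionary below are illustrative assumptions, not UCM's actual API):

```python
import hashlib

BLOCK_SIZE = 16  # tokens per KV block (illustrative)

def block_hashes(token_ids, meta="hash-meta"):
    """Chain block hashes starting from the same hash meta `meta`."""
    hashes, prev = [], meta
    for i in range(0, len(token_ids), BLOCK_SIZE):
        block = token_ids[i : i + BLOCK_SIZE]
        prev = hashlib.sha256(f"{prev}:{block}".encode()).hexdigest()
        hashes.append(prev)
    return hashes

def lookup(request_tokens, chunks, cache_store):
    """Prefix lookup first, then chunk lookup for blend candidates."""
    # 1) Prefix cache: hashes are chained over the whole request, so a hit means
    #    the entire prefix up to that block is identical and fully reusable.
    prefix_hits = []
    for h in block_hashes(request_tokens):
        if h not in cache_store:
            break
        prefix_hits.append(cache_store[h])
    # 2) Chunk cache: each chunk is hashed from the same meta beginning, so the
    #    same chunk yields the same hashes regardless of what precedes it.
    chunk_candidates = [
        [cache_store[h] for h in block_hashes(chunk) if h in cache_store]
        for chunk in chunks
    ]
    return prefix_hits, chunk_candidates
```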

## 🚀 Quick Start
1 change: 1 addition & 0 deletions docs/source/user-guide/sparse-attention/index.md
esa
gsa
kvcomp
kvstar
cacheblend
:::
2 changes: 1 addition & 1 deletion examples/offline_inference_blend.py
```python
def main():
    ...
    # choose one data row in LongBenchV1 (wikimqa)
    assert os.path.isfile(
        path_to_dataset
    ), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/home/data/Longbench/data/2wikimqa.jsonl`"
    with open(path_to_dataset, "r") as f:
        lines = f.readlines()
    dataset_row = json.loads(lines[0])
```
2 changes: 2 additions & 0 deletions ucm/sparse/blend/blend.py
```python
    def _update_attn_metadata(self):
        # Update attn_metadata, because we sparsify the prefill tokens.
        # The golden KV caches are available in the current blend layer, so maybe we should
        # cache all of them; that would mean modifying slot_mapping at the beginning of the
        # next layer / attention call.
        self.attn_metadata.slot_mapping = self.attn_metadata.slot_mapping[
            self.blend_req_metas.compute_mask
        ]
```