diff --git a/docs/source/user-guide/triton-rerope/results.png b/docs/source/_static/images/rerope_performace.png
similarity index 100%
rename from docs/source/user-guide/triton-rerope/results.png
rename to docs/source/_static/images/rerope_performace.png
diff --git a/docs/source/index.md b/docs/source/index.md
index b494115bb..6469ec6e5 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -57,6 +57,7 @@ user-guide/prefix-cache/index
user-guide/sparse-attention/index
user-guide/pd-disaggregation/index
user-guide/metrics/metrics
+user-guide/rerope/rerope
:::
:::{toctree}
diff --git a/docs/source/user-guide/triton-rerope/rerope.md b/docs/source/user-guide/rerope/rerope.md
similarity index 74%
rename from docs/source/user-guide/triton-rerope/rerope.md
rename to docs/source/user-guide/rerope/rerope.md
index 3cf7f2c3c..91f0142cc 100644
--- a/docs/source/user-guide/triton-rerope/rerope.md
+++ b/docs/source/user-guide/rerope/rerope.md
@@ -1,26 +1,34 @@
-# Rectified Rotary Position Embeddings (ReRoPE)
+# Rectified Rotary Position Embeddings
-Using ReRoPE, we can more effectively extend the context length of LLM without the need for fine-tuning. This is about the Triton implementation of ReRoPE and its integration into the vLLM inference framework.
+Using Rectified Rotary Position Embeddings (ReRoPE), we can extend the context length of an LLM more effectively, without the need for fine-tuning. This page covers the Triton implementation of ReRoPE and its integration into the vLLM inference framework.
+
+
**🚀 ReRoPE | 📄 blog [https://kexue.fm/archives/9708] [https://normxu.github.io/Rethinking-Rotary-Position-Embedding-3]**
+
[](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
[](https://python.org)
+
## 🌟 What is ReRoPE?
+
+

+
+
This approach combines direct extrapolation with position interpolation. A window size $w$ is established, where a position interval of $1$ is used within the window, and an interval of $\frac{1}{k}$ is applied outside. As $k \to \infty$, this simplifies to the form illustrated above. Under this scheme, the position encoding range never exceeds $w$ regardless of input length, potentially enabling support for arbitrarily long contexts.
The attention score calculation formulas are as follows:
$$
-\begin{align}
+\begin{aligned}
score_{ij}^{1} &= (q_iR_i)(k_jR_j)^T, && i-j < w \\
score_{ij}^{2} &= (q_iR_w)k_j^T, && i-j \geq w
-\end{align}
+\end{aligned}
$$
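+
+To make the clipping rule concrete, here is a minimal NumPy sketch (the function name `rerope_positions` and the direct matrix construction are illustrative assumptions, not part of the UCM Triton kernels) of the relative positions the two branches use:
+
+```python
+import numpy as np
+
+def rerope_positions(seq_len: int, w: int) -> np.ndarray:
+    """Relative positions after ReRoPE clipping: i - j inside the window, w outside."""
+    i = np.arange(seq_len)[:, None]
+    j = np.arange(seq_len)[None, :]
+    rel = i - j                       # standard RoPE relative positions
+    return np.where(rel < w, rel, w)  # clip everything at or beyond the window to w
+
+# 6-token causal example with window w=3: entries below the diagonal never exceed 3
+print(np.tril(rerope_positions(6, 3)))
+```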
+
+### Experiment Results
+
+
+The experiment is based on a hybrid Transformer-GAU (Gated Attention Unit) model with 100M parameters. $\log n$ indicates that the scale factor $\log n$ is added at the pretraining stage; $\log n^{*}$ denotes that the scale factor is applied to the attention matrix only for text exceeding the max sequence length, without any pretraining; $w256$ denotes the ReRoPE window $w=256$.
+
+
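+As a rough sketch of what the $\log n^{*}$ variant does (the helper below, its name `apply_logn_scaling`, and the query-side scaling are assumptions for illustration, not the code used in the experiment), the scale factor only kicks in once a token's position exceeds the training length:
+
+```python
+import math
+
+import torch
+
+def apply_logn_scaling(q: torch.Tensor, positions: torch.Tensor, training_length: int) -> torch.Tensor:
+    """Scale queries by log(position) / log(training_length), but only past the trained length."""
+    # positions: 1-based token positions, shape (num_tokens,); q: (num_tokens, head_dim)
+    scale = torch.log(positions.float()) / math.log(training_length)
+    scale = torch.where(positions > training_length, scale, torch.ones_like(scale))
+    return q * scale[:, None]
+
+q = torch.randn(4, 64)
+positions = torch.tensor([100, 32768, 40000, 65536])
+q_scaled = apply_logn_scaling(q, positions, training_length=32768)
+```
+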
## 🚀 Quick Start
@@ -46,12 +61,12 @@ ReRoPE extends context length effectively but requires double attention—local
For installation instructions, please refer to UCM's top-level README. Once UCM is installed, ReRoPE is supported out of the box by running the following example Python script.
```bash
-export VLLM_ATTENTION_BACKEND = TRITON_ATTN_VLLM_V1
-export VLLM_USE_REROPE = true
+export VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1
+export VLLM_USE_REROPE=true
export DATA_DIR=/home/data/kv_cache
export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
-export REROPE_WINDOW = 32768
-export TRAINING_LENGTH = 32768
+export REROPE_WINDOW=32768
+export TRAINING_LENGTH=32768
python examples/offline_inference_rerope.py
```
diff --git a/docs/source/user-guide/sparse-attention/cacheblend.md b/docs/source/user-guide/sparse-attention/cacheblend.md
index 0f5d8e819..f95f3d359 100644
--- a/docs/source/user-guide/sparse-attention/cacheblend.md
+++ b/docs/source/user-guide/sparse-attention/cacheblend.md
@@ -3,7 +3,7 @@

-**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper **
+**🚀 Knowledge Cached Fusion Algorithm | 📄 EuroSys 2025 Paper**
[](https://github.com/ModelEngine-Group/unified-cache-management/blob/main/LICENSE)
[](https://python.org)
@@ -31,7 +31,7 @@ CacheBlend reduces TTFT by 2.2 ~ 3.3× and increases throughput by 2.8 ~ 5× und
1. **🔐 Chunk Hash Encoding**: Similar to the prefix hash encoder, hash all blocks in each chunk starting from the same hash meta.
2. **⚡ Combine Prefix Cache and Chunk Cache**: Since the chunk cache and the native prefix cache share the same hash space, ucm first performs a prefix cache lookup to fetch the fully reused cache and then conducts a chunk cache lookup to fetch the candidate cache for blending.
3. **🎯 Delta-Rope PostProcess**: Rectify the loaded chunk cache according to its position in the new request.
-3. **🔍 Integrate Cache Blend and First Token Generation**: Construct compute mask and attention meta according to HKVD tokens, cache miss tokens and suffix tokens, then compute their kv cache in a single model forward stage.
+4. **🔍 Integrate Cache Blend and First Token Generation**: Construct the compute mask and attention metadata according to the HKVD tokens, cache-miss tokens and suffix tokens, then compute their kv cache in a single model forward stage (see the sketch after this list).
5. **🚀 Comprehensive Hook for LLM Forward Pipeline**: Based on the ucm sparse module, the blend module sparsifies the prefill tokens not only in the attention stage but also in the FFN and layer stages.
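+
+The compute mask in step 4 can be pictured with the sketch below (a simplified illustration; `build_compute_mask` and its token-index arguments are hypothetical names, not the actual blend module API):
+
+```python
+import torch
+
+def build_compute_mask(
+    num_prefill_tokens: int,
+    hkvd_token_ids: torch.Tensor,        # high-KV-deviation tokens that must be recomputed
+    cache_miss_token_ids: torch.Tensor,  # tokens with no cached KV at all
+    suffix_token_ids: torch.Tensor,      # the fresh suffix appended after the reused chunks
+) -> torch.Tensor:
+    """Boolean mask over prefill tokens: True = recompute KV in the single forward pass."""
+    mask = torch.zeros(num_prefill_tokens, dtype=torch.bool)
+    for ids in (hkvd_token_ids, cache_miss_token_ids, suffix_token_ids):
+        mask[ids] = True
+    return mask
+
+# Positions left False reuse the loaded chunk KV; True positions keep their slot_mapping entries
+# and have their KV recomputed during the blended prefill.
+mask = build_compute_mask(10, torch.tensor([2, 3]), torch.tensor([5]), torch.tensor([8, 9]))
+```
+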
## 🚀 Quick Start
diff --git a/docs/source/user-guide/sparse-attention/index.md b/docs/source/user-guide/sparse-attention/index.md
index 6c1f3d209..822917604 100644
--- a/docs/source/user-guide/sparse-attention/index.md
+++ b/docs/source/user-guide/sparse-attention/index.md
@@ -41,4 +41,5 @@ esa
gsa
kvcomp
kvstar
+cacheblend
:::
diff --git a/examples/offline_inference_blend.py b/examples/offline_inference_blend.py
index 0de105f55..bdc2b211b 100644
--- a/examples/offline_inference_blend.py
+++ b/examples/offline_inference_blend.py
@@ -186,7 +186,7 @@ def main():
# choose one data row in LongBenchV1 (wikimqa)
assert os.path.isfile(
path_to_dataset
- ), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/path/to/longbench/multifieldqa_zh.jsonl`"
+ ), f"Incorrect dataset path. Please specify the dataset path by `export DATASET_PATH=/home/data/Longbench/data/2wikimqa.jsonl`"
with open(path_to_dataset, "r") as f:
lines = f.readlines()
dataset_row = json.loads(lines[0])
diff --git a/ucm/sparse/blend/blend.py b/ucm/sparse/blend/blend.py
index c2d5380f3..32b975b6d 100644
--- a/ucm/sparse/blend/blend.py
+++ b/ucm/sparse/blend/blend.py
@@ -189,6 +189,8 @@ def build_sparse_meta(
def _update_attn_metadata(self):
# update attn_metadata, because we sparsify the prefill tokens
+ # golden kv caches are available in the current blend layer, so we may want to cache all of them,
+ # which would mean modifying slot_mapping at the beginning of the next layer's attention
self.attn_metadata.slot_mapping = self.attn_metadata.slot_mapping[
self.blend_req_metas.compute_mask
]