Status: Proposed
Date: 2026-01-20
Decision Makers: Ruvector Architecture Team
Technical Area: LLM Inference Engine / KV Cache Optimization
Modern LLM applications exhibit highly repetitive prompt patterns that waste computational resources. Chat applications repeatedly process identical system prompts across conversations, RAG systems re-encode the same document chunks, and batch inference workloads share common instruction prefixes. Each repeated token incurs full transformer computation despite producing identical key-value (KV) cache states.
RuvLLM v2.3's KV cache implementation computes attention states from scratch for every request:
- Chat applications: System prompts (50-500 tokens) recomputed every turn → 100ms+ latency overhead
- RAG workloads: Document chunks (500-2000 tokens) re-encoded per query → 500ms+ latency overhead
- Batch inference: Shared instruction prefixes computed independently per request → Nx redundant computation
- Redundant Computation: Identical token sequences produce identical KV states but are recomputed every time
- Memory Bandwidth: Repetitive KV cache writes saturate GPU memory bandwidth
- Latency Overhead: First-token latency dominated by prefix processing (system prompt + context)
- Cache Coherence: Shared KV states across requests require careful memory management
- Prefix Matching: Efficiently identifying common prefixes across diverse prompts
Current measurements on typical workloads:
| Workload Type | Prefix Length | Redundant Computation | Latency Overhead |
|---|---|---|---|
| Chat (system prompt) | 200 tokens | 100% repeated | 100ms/turn |
| RAG (document chunks) | 1000 tokens | 80% repeated | 500ms/query |
| Batch (instruction prefix) | 50 tokens | 100% repeated | 30ms/request |
- 10x latency reduction: Chat first-token latency from 100ms to 10ms
- Memory efficiency: Share KV cache across requests via copy-on-write
- Hit rate optimization: 80%+ cache hit rate for typical workloads
- Throughput scaling: 5-10x more concurrent requests within same memory budget
- Transparent integration: No changes to existing LlmBackend API
- Model agnostic: Works with all transformer architectures
- Streaming support: Compatible with streaming token generation
- Multi-request sharing: Safe concurrent access to shared KV states
- Bounded cache size: LRU eviction prevents unbounded growth
- Copy-on-write semantics: Shared prefixes until divergence
- Memory pressure handling: Graceful degradation under memory constraints
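To make these constraints concrete, a hypothetical configuration surface is sketched below; the field names follow parameters used elsewhere in this ADR (max_entries, max_memory_bytes, min_cache_tokens), while the defaults are illustrative placeholders, not decisions:

```rust
/// Hypothetical prefix-cache configuration; default values are placeholders for tuning.
pub struct PrefixCacheConfig {
    /// Upper bound on the number of cached prefixes before LRU eviction kicks in.
    pub max_entries: usize,
    /// Hard memory budget, in bytes, for all cached KV states.
    pub max_memory_bytes: usize,
    /// Prefixes shorter than this are not worth caching and are skipped.
    pub min_cache_tokens: usize,
    /// Escape hatch: disable caching entirely (e.g., while debugging).
    pub enabled: bool,
}

impl Default for PrefixCacheConfig {
    fn default() -> Self {
        Self {
            max_entries: 1024,
            max_memory_bytes: 8 * 1024 * 1024 * 1024, // 8 GiB (placeholder)
            min_cache_tokens: 50,
            enabled: true,
        }
    }
}
```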
Option A: Implement prefix caching using token sequence hashing, so only exact prefix matches hit the cache.
Pros:
- Simple implementation: Hash token IDs → cache lookup
- Fast lookup: O(1) hash table access
- Easy to reason about: Exact prefix matching only
Cons:
- No partial matches: "Hello world" vs "Hello there" share no cache
- Hash collisions: Rare but require conflict resolution
- Limited hit rate: Only exact prefixes share cache
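For contrast with Option B below, Option A is essentially a flat map keyed by a hash of the whole token sequence. A minimal self-contained sketch (the `KvCache` placeholder stands in for the cached states defined later in this ADR):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Arc;

/// Placeholder for the cached KV states (the real type is sketched later in this ADR).
struct KvCache;

/// Option A: exact-prefix cache keyed by a hash of the full token sequence.
struct ExactPrefixCache {
    entries: HashMap<u64, (Vec<u32>, Arc<KvCache>)>,
}

impl ExactPrefixCache {
    fn key(tokens: &[u32]) -> u64 {
        let mut h = DefaultHasher::new();
        tokens.hash(&mut h);
        h.finish()
    }

    /// Hit only if the entire token sequence matches; "Hello world" and
    /// "Hello there" hash differently, so they share nothing.
    fn lookup(&self, tokens: &[u32]) -> Option<Arc<KvCache>> {
        match self.entries.get(&Self::key(tokens)) {
            // Compare the stored tokens to guard against hash collisions.
            Some((stored, kv)) if stored.as_slice() == tokens => Some(kv.clone()),
            _ => None,
        }
    }

    fn insert(&mut self, tokens: Vec<u32>, kv: Arc<KvCache>) {
        self.entries.insert(Self::key(&tokens), (tokens, kv));
    }
}
```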
Option B: Implement a radix tree (compressed trie) for prefix matching, inspired by SGLang's RadixAttention algorithm.
Pros:
- Partial matches: "Hello world" and "Hello there" share "Hello" prefix
- Higher hit rate: Exploits any common prefix, not just exact matches
- Efficient storage: Common prefixes stored once
- Proven approach: SGLang demonstrates 10x speedups in production
Cons:
- Complex implementation: Radix tree with KV cache nodes
- Insertion overhead: Tree restructuring on new sequences
- Memory overhead: Tree structure metadata
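The quantity Option B exploits is simply the longest common token prefix between the incoming request and something already cached. A tiny illustration (token IDs are hypothetical):

```rust
/// Number of leading tokens two requests can share under partial matching.
fn common_prefix_len(a: &[u32], b: &[u32]) -> usize {
    a.iter().zip(b).take_while(|(x, y)| x == y).count()
}

fn main() {
    // Hypothetical tokenizations of "Hello world" and "Hello there":
    // the shared leading token means its KV states are computed only once.
    let hello_world = [15496, 995];
    let hello_there = [15496, 612];
    assert_eq!(common_prefix_len(&hello_world, &hello_there), 1);
}
```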
Option C: Use learned representations (e.g., token embeddings) to cluster similar prefixes.
Pros:
- Semantic matching: Similar meanings share cache even with different tokens
- Adaptive: Learns from access patterns
Cons:
- Unpredictable behavior: Semantic similarity may not guarantee KV cache equivalence
- Training overhead: Requires offline training phase
- Complexity: Neural network + cache management
Chosen Option: Option B - Radix Tree with Partial Matching (SGLang RadixAttention)
Implement prefix caching using a radix tree data structure for efficient partial prefix matching with copy-on-write KV cache sharing, following the design proven by SGLang's RadixAttention.
- Maximum hit rate: Partial prefix matching exploits every common token, not just exact sequences
- Proven performance: SGLang demonstrates 10x speedups with RadixAttention in production serving
- Memory efficiency: Common prefixes stored once, shared across requests via tree structure
- Predictable behavior: Token-level matching guarantees KV cache correctness (unlike semantic approaches)
- Graceful degradation: Falls back to standard computation if cache miss
// Sketch-level imports for the structures below. LruCache is assumed to come from
// the `lru` crate and Tensor from candle; Result is the crate's error alias, and
// PrefixHash, CacheEntry and NodeMetadata are elided in this ADR.
use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, AtomicU64, Ordering};
use std::sync::{Arc, RwLock};
use std::time::Instant;

/// Radix tree-based prefix cache for KV states
pub struct PrefixCache {
/// Radix tree mapping token sequences to cached KV states
radix_tree: RadixTree<CachedPrefix>,
/// Maximum number of cached prefixes
max_entries: usize,
/// Maximum memory in bytes for cache
max_memory_bytes: usize,
/// LRU eviction policy
lru: LruCache<PrefixHash, CacheEntry>,
/// Cache statistics
stats: Arc<CacheStats>,
}
/// Cached prefix entry
pub struct CachedPrefix {
/// Token IDs for this prefix
token_ids: Vec<u32>,
/// Cached KV states (Arc for shared ownership)
kv_cache: Arc<KvCache>,
/// Hit count for LRU eviction
hit_count: AtomicU64,
/// Last access timestamp (behind a lock so shared entries can be touched on hit)
last_access: RwLock<Instant>,
/// Reference count for copy-on-write
ref_count: AtomicU32,
}
/// KV cache with copy-on-write semantics
#[derive(Clone)]
pub struct KvCache {
/// Key cache: [num_layers, batch_size, num_heads, seq_len, head_dim]
keys: Arc<Tensor>,
/// Value cache: [num_layers, batch_size, num_heads, seq_len, head_dim]
values: Arc<Tensor>,
/// Sequence length
seq_len: usize,
}
/// Cache statistics
pub struct CacheStats {
pub total_lookups: AtomicU64,
pub cache_hits: AtomicU64,
pub partial_hits: AtomicU64,
pub cache_misses: AtomicU64,
pub evictions: AtomicU64,
pub memory_usage_bytes: AtomicU64,
}

/// Radix tree node for efficient prefix matching
struct RadixNode<T> {
    /// Token IDs represented by this edge
    edge_tokens: Vec<u32>,
    /// Cached value if this node terminates a complete, cacheable prefix
    cached_prefix: Option<Arc<T>>,
    /// Child nodes, keyed by the first token of each child's edge
    children: HashMap<u32, RadixNode<T>>,
    /// Metadata for tree balancing
    metadata: NodeMetadata,
}

/// Radix tree for token sequence prefix matching
pub struct RadixTree<T> {
    root: RadixNode<T>,
    node_count: usize,
    max_depth: usize,
}
impl RadixTree<CachedPrefix> {
/// Find longest matching prefix for given token sequence
pub fn longest_match(&self, tokens: &[u32]) -> Option<(usize, Arc<CachedPrefix>)> {
    let mut current = &self.root;
    let mut matched_len = 0;
    let mut last_cached = None;

    while matched_len < tokens.len() {
        let Some(child) = current.children.get(&tokens[matched_len]) else {
            break;
        };
        // Compare this child's edge tokens against the remaining input
        let edge_match_len = self.match_edge(&child.edge_tokens, &tokens[matched_len..]);
        matched_len += edge_match_len;
        if edge_match_len < child.edge_tokens.len() {
            // Partial edge match - stop here
            break;
        }
        if let Some(ref cached) = child.cached_prefix {
            last_cached = Some((matched_len, cached.clone()));
        }
        current = child;
    }

    last_cached
}
/// Insert a new prefix into the tree
pub fn insert(&mut self, tokens: Vec<u32>, value: Arc<CachedPrefix>) -> Result<()> {
// Tree insertion with edge splitting for partial matches
// ... (implementation details)
}
}

impl PrefixCache {
/// Lookup cached KV states for given token sequence
///
/// Returns (prefix_length, kv_cache) where prefix_length is the number
/// of tokens that matched the cache (may be partial match)
pub fn lookup(&self, tokens: &[u32]) -> Option<(usize, Arc<KvCache>)> {
self.stats.total_lookups.fetch_add(1, Ordering::Relaxed);
match self.radix_tree.longest_match(tokens) {
Some((prefix_len, cached_prefix)) => {
// Update LRU
cached_prefix.hit_count.fetch_add(1, Ordering::Relaxed);
*cached_prefix.last_access.write().unwrap() = Instant::now();
if prefix_len == tokens.len() {
self.stats.cache_hits.fetch_add(1, Ordering::Relaxed);
} else {
self.stats.partial_hits.fetch_add(1, Ordering::Relaxed);
}
Some((prefix_len, cached_prefix.kv_cache.clone()))
}
None => {
self.stats.cache_misses.fetch_add(1, Ordering::Relaxed);
None
}
}
}
/// Insert new KV cache for token sequence
pub fn insert(&mut self, tokens: Vec<u32>, kv_cache: KvCache) -> Result<()> {
// Check memory limit
if self.memory_usage() + kv_cache.size_bytes() > self.max_memory_bytes {
self.evict_lru()?;
}
let cached_prefix = Arc::new(CachedPrefix {
token_ids: tokens.clone(),
kv_cache: Arc::new(kv_cache),
hit_count: AtomicU64::new(0),
last_access: RwLock::new(Instant::now()),
ref_count: AtomicU32::new(1),
});
self.radix_tree.insert(tokens, cached_prefix)?;
Ok(())
}
/// Evict least recently used entry
pub fn evict_lru(&mut self) -> Result<()> {
// Find LRU entry based on hit_count and last_access
// Remove from radix tree
// Update memory usage
self.stats.evictions.fetch_add(1, Ordering::Relaxed);
Ok(())
}
/// Current memory usage in bytes
pub fn memory_usage(&self) -> usize {
self.stats.memory_usage_bytes.load(Ordering::Relaxed) as usize
}
}

impl LlmBackend for CandleBackend {
fn generate(&self, prompt: &str, params: GenerateParams) -> Result<String> {
// Tokenize prompt
let tokens = self.tokenizer.encode(prompt)?;
// Check prefix cache
let (cached_len, mut kv_cache) = match self.prefix_cache.lookup(&tokens) {
Some((len, cache)) => {
// Cache hit - reuse KV states for first `len` tokens
println!("Prefix cache hit: {}/{} tokens", len, tokens.len());
(len, (*cache).clone()) // Copy-on-write
}
None => {
// Cache miss - initialize empty KV cache
(0, KvCache::new(self.model.config()))
}
};
// Compute attention only for tokens after cached prefix
let start_pos = cached_len;
for pos in start_pos..tokens.len() {
let logits = self.model.forward_with_cache(
&tokens[pos..pos+1],
pos,
&mut kv_cache
)?;
}
// Cache the computed prefix for future requests
if params.cache_prefix && tokens.len() >= params.min_cache_tokens {
self.prefix_cache.insert(tokens.clone(), kv_cache.clone())?;
}
// Generate tokens
// ... (standard generation logic)
}
}

/// Chat conversation with system prompt caching
pub struct ChatSession {
    /// Backend used for generation (wrapper type illustrative)
    backend: Arc<CandleBackend>,
    system_prompt: String,
    system_prompt_tokens: Vec<u32>,
    conversation_history: Vec<Message>,
}
impl ChatSession {
pub fn generate_response(&mut self, user_message: &str) -> Result<String> {
// System prompt is cached after first turn
let prompt = format!("{}\n{}", self.system_prompt, user_message);
// Prefix cache will reuse system prompt KV states
let response = self.backend.generate(&prompt, GenerateParams {
cache_prefix: true,
min_cache_tokens: 50,
..Default::default()
})?;
Ok(response)
}
}

Expected Performance:
- First turn: 100ms (system prompt + user message)
- Subsequent turns: 10ms (only user message, system prompt cached)
- 10x speedup for multi-turn conversations
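A hypothetical driver loop over the ChatSession sketch above (construction elided); the first call pays the full system-prompt cost, later calls should log prefix cache hits for the system-prompt tokens:

```rust
// Illustrative usage only; ChatSession construction is not shown in this ADR.
fn run_conversation(session: &mut ChatSession, turns: &[&str]) -> Result<()> {
    for (i, user_message) in turns.iter().enumerate() {
        // Turn 1: full prefill. Turn 2+: system prompt served from the prefix cache.
        let reply = session.generate_response(user_message)?;
        println!("turn {}: {} chars generated", i + 1, reply.len());
    }
    Ok(())
}
```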
/// RAG pipeline with document chunk caching
pub struct RagPipeline {
    /// Backend used for generation (wrapper type illustrative)
    backend: Arc<CandleBackend>,
    document_chunks: Vec<DocumentChunk>,
    chunk_cache_keys: HashMap<ChunkId, Vec<u32>>,
}
impl RagPipeline {
pub fn query(&self, question: &str) -> Result<String> {
// Retrieve relevant chunks
let relevant_chunks = self.retrieve_chunks(question)?;
// Build prompt with cached document chunks
let context = relevant_chunks.iter()
.map(|chunk| chunk.text.as_str())
.collect::<Vec<_>>()
.join("\n\n");
let prompt = format!(
"Context:\n{}\n\nQuestion: {}\n\nAnswer:",
context, question
);
// Prefix cache will reuse encoded document chunks
let response = self.backend.generate(&prompt, GenerateParams {
cache_prefix: true,
min_cache_tokens: 100,
..Default::default()
})?;
Ok(response)
}
}

Expected Performance:
- First query with chunks: 500ms (encode 1000-token context)
- Subsequent queries with same chunks: 50ms (chunks cached)
- 10x speedup for repeated document queries
/// Batch inference with shared instruction prefix
pub struct BatchInference {
    /// Backend shared across worker threads (wrapper type illustrative)
    backend: Arc<CandleBackend>,
    instruction_prefix: String,
    instruction_tokens: Vec<u32>,
}
impl BatchInference {
pub fn batch_generate(&self, inputs: &[String]) -> Result<Vec<String>> {
// par_iter (rayon): requests run in parallel and share the cached prefix
inputs.par_iter()
.map(|input| {
let prompt = format!("{}\n{}", self.instruction_prefix, input);
// All requests share cached instruction prefix
self.backend.generate(&prompt, GenerateParams {
cache_prefix: true,
min_cache_tokens: 20,
..Default::default()
})
})
.collect()
}
}

Expected Performance:
- N requests with shared prefix: Compute prefix once, share across all
- Nx speedup where N is batch size (for prefix portion)
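These projections follow an Amdahl-style argument: if a fraction p of a request's prefill work lies in the shared prefix and N requests reuse it, the expected speedup is roughly 1 / ((1 - p) + p / N), approaching 1 / (1 - p) for large N. The batch row below (100 requests, 1000ms → 200ms) and the mixed-workload row (80% shared prefix) both correspond to p ≈ 0.8, which is where the ~5x figures come from.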
| Scenario | Without Cache | With Prefix Cache | Speedup |
|---|---|---|---|
| Chat (200-token system prompt) | 100ms | 10ms | 10x |
| RAG (1000-token document chunks) | 500ms | 50ms | 10x |
| Batch (50-token instruction, 100 requests) | 1000ms | 200ms | 5x |
| Mixed workload (80% shared prefix) | 300ms | 60ms | 5x |
Expected hit rates for typical workloads:
| Workload | Exact Prefix Hit | Partial Prefix Hit | Total Hit Rate |
|---|---|---|---|
| Chat (same system prompt) | 95% | 3% | 98% |
| RAG (document corpus) | 60% | 30% | 90% |
| Batch (shared instruction) | 100% | 0% | 100% |
| Mixed production | 50% | 30% | 80% |
| Component | Memory Cost | Notes |
|---|---|---|
| Radix tree structure | ~1KB per node | Grows with the number of distinct cached prefixes |
| KV cache per prefix | ~500MB per 1000 tokens | 7B model (32 layers × 32 heads × 128 dims), BF16; ≈0.5MB per token |
| Metadata per entry | ~200 bytes | Hit count, timestamps, etc. |
| Total overhead | ~5-10% | For typical cache sizes |
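The per-prefix figure follows directly from the model shape; a quick back-of-the-envelope for a Llama-2-7B-style configuration (32 layers, 32 attention heads, head_dim 128, BF16), with a grouped-query-attention variant for comparison:

```rust
/// KV cache bytes per token: keys + values, across all layers.
fn kv_bytes_per_token(layers: usize, kv_heads: usize, head_dim: usize, dtype_bytes: usize) -> usize {
    2 * layers * kv_heads * head_dim * dtype_bytes
}

fn main() {
    // Full multi-head attention, BF16: 2 * 32 * 32 * 128 * 2 = 524,288 bytes,
    // i.e. ~0.5MB per token, or roughly 500MB for a 1000-token cached prefix.
    assert_eq!(kv_bytes_per_token(32, 32, 128, 2), 524_288);
    // With 8 KV heads (grouped-query attention), the same prefix is ~128MB.
    assert_eq!(kv_bytes_per_token(32, 8, 128, 2), 131_072);
}
```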
Goal: Simple prefix cache with exact matching for validation
- Implement `PrefixCache` with hash-based lookup
- Integrate with `CandleBackend::generate()`
- Add cache hit/miss metrics
- Benchmark on chat and RAG workloads
Deliverables:
- Working prefix cache with exact matching
- Benchmark results showing 5-10x speedup for exact prefix hits
- Cache statistics (hit rate, memory usage)
Success Criteria:
- 90%+ hit rate for chat with identical system prompts
- 5x+ speedup on RAG workload with repeated chunks
- No correctness regressions
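The hit-rate criterion above can be read straight off the CacheStats counters defined earlier; a small helper sketch (whether partial hits count toward the rate is a reporting choice to settle during implementation):

```rust
use std::sync::atomic::Ordering;

/// Overall hit rate from the CacheStats counters; partial hits counted as hits here.
fn hit_rate(stats: &CacheStats) -> f64 {
    let lookups = stats.total_lookups.load(Ordering::Relaxed);
    if lookups == 0 {
        return 0.0;
    }
    let hits = stats.cache_hits.load(Ordering::Relaxed) + stats.partial_hits.load(Ordering::Relaxed);
    hits as f64 / lookups as f64
}
```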
Goal: Replace hash table with radix tree for partial matches
- Implement `RadixTree<CachedPrefix>` data structure
- Port `PrefixCache` to use radix tree backend
- Add partial prefix matching tests
- Benchmark hit rate improvement
Deliverables:
- Radix tree implementation with partial matching
- Increased hit rate (80%+ for mixed workloads)
- Performance comparison: hash vs radix tree
Success Criteria:
- Partial prefix hits improve overall hit rate by 20-30%
- Radix tree lookup overhead <1ms
- Memory overhead <10% vs hash table
Goal: Enable concurrent requests to share cached KV states safely
- Implement copy-on-write semantics for `KvCache` (sketched below)
- Add reference counting for shared KV states
- Thread-safe concurrent access to `PrefixCache`
- Stress test with concurrent batch inference
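To pin down the copy-on-write task above, a minimal framework-free sketch using Arc::make_mut over a plain buffer; in the real KvCache the one-time clone would copy candle tensor storage rather than a Vec:

```rust
use std::sync::Arc;

/// Simplified stand-in for one layer's cached KV buffer.
#[derive(Clone)]
struct KvBuffer {
    data: Arc<Vec<f32>>,
    seq_len: usize,
}

impl KvBuffer {
    /// Append one token's states. While the buffer is still shared with other
    /// requests, Arc::make_mut clones the storage exactly once (copy-on-write);
    /// afterwards this request mutates its private copy in place.
    fn append(&mut self, new_states: &[f32]) {
        Arc::make_mut(&mut self.data).extend_from_slice(new_states);
        self.seq_len += 1;
    }
}

fn main() {
    let shared = KvBuffer { data: Arc::new(vec![0.0; 4]), seq_len: 1 };
    let mut request_a = shared.clone(); // cheap: only the Arc refcount is bumped
    request_a.append(&[1.0; 4]);        // divergence point: storage is cloned here
    assert_eq!(shared.data.len(), 4);   // the shared prefix stays untouched
    assert_eq!(request_a.data.len(), 8);
}
```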
Deliverables:
- Thread-safe prefix cache with Arc/RwLock
- Copy-on-write KV cache cloning
- Concurrent batch inference benchmarks
Success Criteria:
- 10-100 concurrent requests share cache safely
- No data races or corruption (validated via ThreadSanitizer)
- 5x+ throughput improvement on batch workloads
Goal: Prevent unbounded cache growth with LRU eviction
- Implement LRU eviction policy based on hit count + recency
- Add memory budget limits (configurable)
- Eviction backpressure and monitoring
- Tune eviction parameters for production workloads
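One possible scoring rule for the "hit count + recency" policy above, written against the CachedPrefix fields sketched earlier (how candidates are collected from the radix tree, and the exact weighting, are left to this phase):

```rust
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::time::Instant;

/// Pick an eviction victim: the lowest score is evicted, where the score grows
/// with hits and decays with idle time. The 60-second scale is purely illustrative.
fn eviction_victim(candidates: &[Arc<CachedPrefix>]) -> Option<usize> {
    let now = Instant::now();
    candidates
        .iter()
        .enumerate()
        .map(|(i, entry)| {
            let hits = entry.hit_count.load(Ordering::Relaxed) as f64;
            let idle = now.duration_since(*entry.last_access.read().unwrap()).as_secs_f64();
            (i, (1.0 + hits) / (1.0 + idle / 60.0))
        })
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(i, _)| i)
}
```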
Deliverables:
- LRU eviction with configurable memory limits
- Eviction metrics and monitoring
- Production-ready cache configuration
Success Criteria:
- Cache memory stays within configured limit
- Eviction rate <10% for typical workloads
- No thrashing (evict/reload cycles)
- 10x latency reduction: Chat and RAG applications see dramatic first-token latency improvements
- Higher throughput: More concurrent requests fit in same GPU memory via shared KV states
- Memory efficiency: Common prefixes stored once, not duplicated per request
- Transparent integration: No API changes required for existing applications
- Production validation: SGLang demonstrates real-world effectiveness of RadixAttention approach
- Implementation complexity: Radix tree + copy-on-write adds significant code complexity
- Memory overhead: Cache structure and metadata consume 5-10% additional memory
- Eviction tuning: LRU parameters require workload-specific tuning for optimal hit rates
- Debugging difficulty: Shared mutable state (KV cache) increases debugging complexity
- Edge cases: Rare token sequences may thrash cache with low hit rates
- Workload dependency: Benefit proportional to prefix repetition (high for chat/RAG, low for diverse prompts)
- Configuration surface: New cache parameters (max_entries, max_memory_bytes) require tuning
- Monitoring requirements: Cache hit rates and memory usage require observability infrastructure
| Risk | Mitigation |
|---|---|
| Radix tree bugs | Comprehensive property-based testing with proptest |
| Memory leaks | RAII guards, reference counting validation |
| Cache thrashing | Adaptive eviction based on hit rate monitoring |
| Correctness issues | Extensive unit tests comparing cached vs non-cached outputs |
| Performance regression | Benchmark suite in CI with performance budgets |
- vLLM automatic prefix caching. Rejected: requires a Python runtime; we need a Rust-native solution. Consideration: its algorithmic insights inform our radix tree design.
- Semantic/embedding-based caching. Rejected: semantic similarity doesn't guarantee KV cache equivalence and risks correctness. Consideration: a possible future extension for approximate caching with user opt-in.
- Fixed-size block caching (PagedAttention-style). Rejected: fixed-size blocks waste memory for variable-length prefixes. Consideration: a hybrid block-aligned radix tree could reduce fragmentation.
- ADR-004: KV Cache Management (foundational KV cache design)
- ADR-006: Memory Management (memory allocation strategies)
- ADR-008: mistral-rs Integration (PagedAttention integration)
- ADR-010: Flash Attention Integration (attention computation optimizations)
- No changes to the `LlmBackend` trait API
- Prefix caching enabled via the `GenerateParams::cache_prefix` flag
- Backward compatible: cache can be disabled for debugging
- Unit tests for radix tree insert/lookup operations
- Property-based tests for cache correctness
- Benchmark suite comparing cached vs non-cached performance
- Concurrent stress tests for thread safety
- Memory leak detection via Valgrind/AddressSanitizer
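A sketch of the kind of property-based test intended above, using the proptest crate; `RadixTree::new` and the `CachedPrefix` test constructor are assumed helpers, not part of the API sketched in this ADR:

```rust
use proptest::prelude::*;
use std::sync::Arc;

proptest! {
    // Round-trip property: anything we insert must be found again as a full-length match.
    #[test]
    fn inserted_prefix_is_found(tokens in proptest::collection::vec(0u32..32_000, 1..64)) {
        let mut tree: RadixTree<CachedPrefix> = RadixTree::new();   // assumed constructor
        let cached = Arc::new(CachedPrefix::for_test(&tokens));     // hypothetical test helper
        tree.insert(tokens.clone(), cached).unwrap();

        let (matched_len, _) = tree.longest_match(&tokens).expect("inserted prefix must match");
        prop_assert_eq!(matched_len, tokens.len());
    }
}
```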
- Prefix cache configuration guide
- Performance tuning recommendations
- Cache hit rate monitoring examples
- Troubleshooting guide for low hit rates
- SGLang RadixAttention Paper: "Efficient LLM Serving with RadixAttention" (https://arxiv.org/abs/2312.17238)
- vLLM Prefix Caching: Automatic Prefix Caching documentation (https://docs.vllm.ai/en/latest/automatic_prefix_caching.html)
- Radix Tree Implementation: Rust radix_trie crate (https://docs.rs/radix_trie/)
- PagedAttention Paper: "Efficient Memory Management for Large Language Model Serving with PagedAttention" (vLLM)
- KV Cache Optimization: "Fast Transformer Decoding: One Write-Head is All You Need" (Multi-Query Attention)
- Copy-on-Write Patterns: Arc/Cow documentation (https://doc.rust-lang.org/std/sync/struct.Arc.html)
| Component | Status | Notes |
|---|---|---|
| `PrefixCache` struct | Pending | Core cache structure |
| Hash-based lookup | Pending | Phase 1 - exact matching |
| `RadixTree` implementation | Pending | Phase 2 - partial matching |
| `KvCache` copy-on-write | Pending | Phase 3 - shared state |
| LRU eviction | Pending | Phase 4 - memory management |
| Integration with `CandleBackend` | Pending | Wire to `generate()` |
| Thread safety (Arc/RwLock) | Pending | Concurrent access |
| Benchmarks | Pending | Chat, RAG, batch workloads |
| Documentation | Pending | Configuration guide |
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2026-01-20 | Ruvector Architecture Team | Initial proposal |