<p align="center">
  <img src="media/logo-small.png" alt="multicache logo" width="200">
</p>

# multicache

[![CI](https://github.com/codeGROOVE-dev/multicache/actions/workflows/ci.yml/badge.svg)](https://github.com/codeGROOVE-dev/multicache/actions/workflows/ci.yml)
[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/multicache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/multicache)
[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/multicache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/multicache)
[![codecov](https://codecov.io/gh/codeGROOVE-dev/multicache/graph/badge.svg)](https://codecov.io/gh/codeGROOVE-dev/multicache)
[![Release](https://img.shields.io/github/v/release/codeGROOVE-dev/multicache)](https://github.com/codeGROOVE-dev/multicache/releases)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

multicache is the most well-rounded cache implementation for Go today.

Designed for real-world applications in unstable environments, it has a higher average hit rate, higher throughput, and lower latency for production workloads than any other cache. To deal with process eviction in environments like Kubernetes, Cloud Run, or Borg, it also offers an optional persistence tier.

## Install

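```
go get github.com/codeGROOVE-dev/multicache
```
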
With persistence:

```go
store, _ := localfs.New[string, User]("myapp", "")
cache, _ := multicache.NewTiered(store)

_ = cache.Set(ctx, "user:123", user)      // sync write
_ = cache.SetAsync(ctx, "user:456", user) // async write
```
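
A matching read, as a minimal sketch; it assumes `Get` returns the value plus a hit flag (check the package reference for the exact signature):

```go
// Assumed signature: Get(ctx, key) (value, ok). In a tiered setup, the
// in-memory tiers serve hits and the persisted store survives restarts.
if u, ok := cache.Get(ctx, "user:123"); ok {
	fmt.Println("cache hit:", u)
}
```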

GetSet deduplicates concurrent loads to prevent the thundering-herd problem.
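
A minimal sketch of the idea; the exact `GetSet` signature here is an assumption (see the package reference for the real API):

```go
// Concurrent callers for the same key share a single load: the loader
// callback runs once and every waiter receives its result.
user, err := cache.GetSet(ctx, "user:789", func(ctx context.Context) (User, error) {
	return fetchUser(ctx, "789") // fetchUser is a hypothetical loader
})
```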

multicache has been exhaustively tested for performance using gocachemark.

Where multicache wins:

- **Throughput**: 551M int gets/sec avg (2.4X faster than otter). 89M string sets/sec avg (27X faster than otter).
- **Hit rate**: wins 6 of 9 workloads, with the highest average across all datasets (+2.7% vs otter, +0.9% vs sieve).
- **Latency**: 8ns int gets, 10ns string gets, zero allocations (3.5X lower latency than otter).

Where others win:

- **Memory**: freelru and otter use less memory per entry (multicache carries 73 bytes/item of overhead vs 14 for otter).
- **Specific workloads**: sieve +0.5% on thesios-block, clock +0.1% on ibm-docker, theine +0.6% on zipf.

Much of the credit for high throughput goes to [puzpuzpuz/xsync](https://github.com/puzpuzpuz/xsync) and its lock-free data structures.

multicache uses [S3-FIFO](https://s3fifo.com/), which features three queues: small, main, and ghost.

multicache has been hyper-tuned for high performance, and deviates from the original paper in a handful of ways (see the sketch after this list):

- **Tuned small queue** - 13.7% vs the paper's 10%, tuned via binary search to maximize average hit rate across 9 production traces
- **Full ghost frequency restoration** - returning keys restore 100% of their previous access count
- **Increased frequency cap** - max freq=5 vs the paper's 3, tuned via binary search for best average hit rate
- **Death row** - hot items (high peakFreq) get a second chance before eviction
- **Extended ghost capacity** - 1.22x cache size for ghost tracking, tuned via binary search
- **Ghost frequency ring buffer** - a fixed-size 256-entry ring replaces map allocations
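
For intuition, here is a toy sketch of the small-queue eviction decision with these tweaks applied. All names and thresholds below are invented for illustration; the real implementation is lock-free and considerably more involved.

```go
package main

import "fmt"

// entry is an illustrative cache entry; the fields are invented for this
// sketch and do not mirror multicache's internals.
type entry struct {
	key      string
	freq     int // access count while cached, capped at 5 (the tuned cap)
	peakFreq int // highest frequency ever reached; drives death row
}

const deathRowThreshold = 2 // assumed cutoff for "high peakFreq"

// evictFromSmall sketches the fate of the entry at the head of the small
// queue: promotion to main, a death-row reprieve, or demotion to ghost.
func evictFromSmall(e *entry, hadReprieve bool) string {
	switch {
	case e.freq > 1:
		return "promote to main"
	case e.peakFreq >= deathRowThreshold && !hadReprieve:
		return "death row: one more lap of the small queue"
	default:
		// The ghost queue remembers the key and its full frequency, so a
		// returning key restores 100% of its previous access count.
		return "evict; record key and freq in the ghost ring"
	}
}

func main() {
	for _, e := range []*entry{
		{key: "cold", freq: 0, peakFreq: 0},
		{key: "fading", freq: 0, peakFreq: 3},
		{key: "hot", freq: 3, peakFreq: 3},
	} {
		fmt.Printf("%-6s -> %s\n", e.key, evictFromSmall(e, false))
	}
}
```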

## License

Apache 2.0