-# sfcache - Stupid Fast Cache
+# multicache - Stupid Fast Cache
 
-<img src="media/logo-small.png" alt="sfcache logo" width="256">
+<img src="media/logo-small.png" alt="multicache logo" width="256">
 
-[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/sfcache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/sfcache)
-[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/sfcache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/sfcache)
+[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/multicache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/multicache)
+[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/multicache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/multicache)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 
 <br clear="right">
 
-sfcache is the fastest in-memory cache for Go. Need multi-tier persistence? We have it. Need thundering herd protection? We've got that too.
+multicache is the fastest in-memory cache for Go. Need multi-tier persistence? We have it. Need thundering herd protection? We've got that too.
 
 Designed for persistently caching API requests in an unreliable environment, this cache has an abundance of production-ready features:
 
@@ -33,10 +33,10 @@ Designed for persistently caching API requests in an unreliable environment, thi
 As a stupid-fast in-memory cache:
 
 ```go
-import "github.com/codeGROOVE-dev/sfcache"
+import "github.com/codeGROOVE-dev/multicache"
 
 // strings as keys, ints as values
-cache := sfcache.New[string, int]()
+cache := multicache.New[string, int]()
 cache.Set("answer", 42)
 val, found := cache.Get("answer")
 ```
@@ -45,12 +45,12 @@ Or as a multi-tier cache with local persistence to survive restarts:
 
 ```go
 import (
-	"github.com/codeGROOVE-dev/sfcache"
-	"github.com/codeGROOVE-dev/sfcache/pkg/store/localfs"
+	"github.com/codeGROOVE-dev/multicache"
+	"github.com/codeGROOVE-dev/multicache/pkg/store/localfs"
 )
 
 p, _ := localfs.New[string, User]("myapp", "")
-cache, _ := sfcache.NewTiered(p)
+cache, _ := multicache.NewTiered(p)
 
 cache.SetAsync(ctx, "user:123", user) // Don't wait for the key to persist
 cache.Store.Len(ctx) // Access persistence layer directly
@@ -59,29 +59,29 @@ cache.Store.Len(ctx) // Access persistence layer directly
 With S2 compression (fast, good ratio):
 
 ```go
-import "github.com/codeGROOVE-dev/sfcache/pkg/store/compress"
+import "github.com/codeGROOVE-dev/multicache/pkg/store/compress"
 
 p, _ := localfs.New[string, User]("myapp", "", compress.S2())
 ```
 
 How about a persistent cache suitable for Cloud Run or local development? This uses Cloud DataStore if available, local files if not:
 
 ```go
-import "github.com/codeGROOVE-dev/sfcache/pkg/store/cloudrun"
+import "github.com/codeGROOVE-dev/multicache/pkg/store/cloudrun"
 
 p, _ := cloudrun.New[string, User](ctx, "myapp")
-cache, _ := sfcache.NewTiered(p)
+cache, _ := multicache.NewTiered(p)
 ```
 
 ## Performance against the Competition
 
-sfcache prioritizes high hit-rates and low read latency. We have our own built in `make bench` that asserts cache dominance:
+multicache prioritizes high hit rates and low read latency. Our built-in `make bench` suite asserts cache dominance:
 
 ```
 >>> TestLatencyNoEviction: Latency - No Evictions (Set cycles within cache size) (go test -run=TestLatencyNoEviction -v)
 | Cache | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
 |---------------|-----------|----------|------------|-----------|----------|------------|
-| sfcache | 7.0 | 0 | 0 | 23.0 | 0 | 0 |
+| multicache | 7.0 | 0 | 0 | 23.0 | 0 | 0 |
 | lru | 23.0 | 0 | 0 | 23.0 | 0 | 0 |
 | ristretto | 28.0 | 13 | 0 | 77.0 | 118 | 3 |
 | otter | 34.0 | 0 | 0 | 160.0 | 51 | 1 |
@@ -94,7 +94,7 @@ sfcache prioritizes high hit-rates and low read latency. We have our own built i
 >>> TestLatencyWithEviction: Latency - With Evictions (Set uses 20x unique keys) (go test -run=TestLatencyWithEviction -v)
 | Cache | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
 |---------------|-----------|----------|------------|-----------|----------|------------|
-| sfcache | 7.0 | 0 | 0 | 94.0 | 0 | 0 |
+| multicache | 7.0 | 0 | 0 | 94.0 | 0 | 0 |
 | lru | 24.0 | 0 | 0 | 83.0 | 80 | 1 |
 | ristretto | 31.0 | 14 | 0 | 73.0 | 119 | 3 |
 | otter | 34.0 | 0 | 0 | 176.0 | 61 | 1 |
@@ -110,7 +110,7 @@ sfcache prioritizes high hit-rates and low read latency. We have our own built i
 
 | Cache | QPS |
 |---------------|------------|
-| sfcache | 100.26M |
+| multicache | 100.26M |
 | lru | 44.58M |
 | tinylfu | 18.42M |
 | freecache | 14.07M |
@@ -125,7 +125,7 @@ sfcache prioritizes high hit-rates and low read latency. We have our own built i
 
 | Cache | QPS |
 |---------------|------------|
-| sfcache | 36.46M |
+| multicache | 36.46M |
 | freecache | 15.00M |
 | ristretto | 13.47M |
 | otter | 10.75M |
@@ -140,7 +140,7 @@ sfcache prioritizes high hit-rates and low read latency. We have our own built i
 
 | Cache | 50K cache | 100K cache |
 |---------------|-----------|------------|
-| sfcache | 71.16% | 78.30% |
+| multicache | 71.16% | 78.30% |
 | otter | 41.12% | 56.34% |
 | ristretto | 40.35% | 48.99% |
 | tinylfu | 53.70% | 54.79% |
@@ -155,7 +155,7 @@ sfcache prioritizes high hit-rates and low read latency. We have our own built i
 
 | Cache | Size=1% | Size=2.5% | Size=5% |
 |---------------|---------|-----------|---------|
-| sfcache | 63.80% | 68.71% | 71.84% |
+| multicache | 63.80% | 68.71% | 71.84% |
 | otter | 61.77% | 67.67% | 71.33% |
 | ristretto | 34.91% | 41.23% | 46.58% |
 | tinylfu | 63.83% | 68.25% | 71.56% |
@@ -171,22 +171,31 @@ Want even more comprehensive benchmarks? See https://github.com/tstromberg/gocac
 
 ### Differences from the S3-FIFO paper
 
-sfcache implements the core S3-FIFO algorithm (Small/Main/Ghost queues with frequency-based promotion) with these optimizations:
+multicache implements the core S3-FIFO algorithm (Small/Main/Ghost queues with frequency-based promotion) with these optimizations:
 
 1. **Dynamic Sharding** - 1-2048 independent S3-FIFO shards (vs single-threaded) for concurrent workloads
 2. **Bloom Filter Ghosts** - Two rotating Bloom filters track evicted keys (vs storing actual keys), reducing memory 10-100x
 3. **Lazy Ghost Checks** - Only check ghosts when evicting, saving 5-9% latency when cache isn't full
 4. **Intrusive Lists** - Embed pointers in entries (vs separate nodes) for zero-allocation queue ops
 5. **Fast-path Hashing** - Specialized for `int`/`string` keys using wyhash and bit mixing
 
-### Adaptive Enhancements
+### Adaptive Mode Detection
 
-Beyond the core algorithm, sfcache includes optimizations discovered through benchmarking:
+multicache automatically detects workload characteristics and adjusts its eviction strategy based on the ghost hit rate (how often evicted keys are re-requested):
 
-- **Scan Detection** - If ghost hit rate <5%, switches to pure recency mode (matches Clock on scan-heavy traces)
-- **Adaptive Queue Sizing** - Larger small queue (20%) for small caches, paper's 10% for large
-- **Ghost Boost** - Returning items start with freq=1 instead of 0
-- **Pressure-Aware Promotion** - Lowers threshold when small queue >80% full
+| Mode | Ghost Rate | Strategy | Best For |
+|------|------------|----------|----------|
+| 0 | <1% | Pure recency, skip ghost tracking | Scan-heavy workloads |
+| 1 | 1-6% or 13-22% | Balanced, promote if freq > 0 | Mixed workloads |
+| 2 | 7-12% | Frequency-heavy, promote if freq > 1 | Frequency-skewed workloads |
+| 3 | ≥23% | Clock-like, all items to main with second-chance | High-recency workloads |
+
+Mode 2 uses **hysteresis** to prevent oscillation: entry requires a 7-12% ghost rate, but the mode stays active while the rate is 5-22%.
+
+### Other Optimizations
+
+- **Adaptive Queue Sizing** - Small queue is 20% for caches ≤32K, 15% for ≤128K, 10% for larger (paper recommends 10%)
+- **Ghost Frequency Boost** - Items returning from ghost start with freq=1 instead of 0
 - **Higher Frequency Cap** - Max freq=7 (vs 3 in paper) for better hot/warm discrimination
 
 ## License