Commit 4417f4a

Thomas Stromberg authored and committed
rebrand bdcache -> sfcache
1 parent a2ae149 commit 4417f4a

28 files changed: +222 -268 lines changed

Makefile

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ bench:
 # 4. Zipf Throughput (1 thread)
 # 5. Zipf Throughput (16 threads)
 benchmark:
-	@echo "=== bdcache Benchmark Suite ==="
+	@echo "=== sfcache Benchmark Suite ==="
 	@cd benchmarks && go test -run=TestBenchmarkSuite -v -timeout=300s

 clean:

README.md

Lines changed: 83 additions & 129 deletions
@@ -1,16 +1,16 @@
-# bdcache - Big Dumb Cache
+# sfcache - Stupid Fast Cache

-<img src="media/logo-small.png" alt="bdcache logo" width="256">
+<img src="media/logo-small.png" alt="sfcache logo" width="256">

-[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/bdcache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/bdcache)
-[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/bdcache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/bdcache)
+[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/sfcache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/sfcache)
+[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/sfcache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/sfcache)
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

 <br clear="right">

 Stupid fast in-memory Go cache with optional L2 persistence layer.

-Designed originally for persistently caching HTTP fetches in unreliable environments like Google Cloud Run, this cache has something for everyone.
+Designed to persistently cache API requests in unreliable environments, this cache has something for everyone.

 ## Features

@@ -31,10 +31,10 @@ Designed originally for persistently caching HTTP fetches in unreliable environm
 As a stupid-fast in-memory cache:

 ```go
-import "github.com/codeGROOVE-dev/bdcache"
+import "github.com/codeGROOVE-dev/sfcache"

 // strings as keys, ints as values
-cache := bdcache.Memory[string, int]()
+cache := sfcache.Memory[string, int]()
 cache.Set("answer", 42, 0)
 val, found := cache.Get("answer")
 ```
@@ -43,12 +43,12 @@ or with local file persistence to survive restarts:

 ```go
 import (
-	"github.com/codeGROOVE-dev/bdcache"
-	"github.com/codeGROOVE-dev/bdcache/persist/localfs"
+	"github.com/codeGROOVE-dev/sfcache"
+	"github.com/codeGROOVE-dev/sfcache/persist/localfs"
 )

 p, _ := localfs.New[string, User]("myapp", "")
-cache, _ := bdcache.Persistent[string, User](ctx, p)
+cache, _ := sfcache.Persistent[string, User](ctx, p)

 cache.SetAsync(ctx, "user:123", user, 0) // Don't wait for the key to persist
 cache.Store.Len(ctx) // Access persistence layer directly
@@ -58,140 +58,94 @@ A persistent cache suitable for Cloud Run or local development; uses Cloud Datas

 ```go
 p, _ := cloudrun.New[string, User](ctx, "myapp")
-cache, _ := bdcache.Persistent[string, User](ctx, p)
+cache, _ := sfcache.Persistent[string, User](ctx, p)
 ```

 ## Performance against the Competition

-bdcache prioritizes high hit-rates and low read latency, but it performs quite well all around.
+sfcache prioritizes high hit-rates and low read latency, but it performs quite well all around.

 Here are the results from an M4 MacBook Pro - run `make bench` to see them for yourself:
-### Hit Rate (Zipf α=0.99, 1M ops, 1M keyspace)
+
+```
+>>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
+
+### Meta Trace Hit Rate (10M ops from Meta KVCache)
+
+| Cache | 50K cache | 100K cache |
+|---------------|-----------|------------|
+| sfcache | 68.19% | 76.03% |
+| otter | 41.31% | 55.41% |
+| ristretto | 40.33% | 48.91% |
+| tinylfu | 53.70% | 54.79% |
+| freecache | 56.86% | 65.52% |
+| lru | 65.21% | 74.22% |
+
+- 🔥 Meta trace: 2.4% better than next best (lru)
+
+>>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
+
+### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)

 | Cache | Size=1% | Size=2.5% | Size=5% |
 |---------------|---------|-----------|---------|
-| bdcache 🟡 | 94.45% | 94.91% | 95.09% |
-| otter 🦦 | 94.28% | 94.69% | 95.09% |
-| ristretto ☕ | 91.63% | 92.44% | 93.02% |
-| tinylfu 🔬 | 94.31% | 94.87% | 95.09% |
-| freecache 🆓 | 94.03% | 94.15% | 94.75% |
-| lru 📚 | 94.10% | 94.84% | 95.09% |
+| sfcache | 64.19% | 69.23% | 72.50% |
+| otter | 61.64% | 67.94% | 71.38% |
+| ristretto | 34.88% | 41.25% | 46.62% |
+| tinylfu | 63.83% | 68.25% | 71.56% |
+| freecache | 56.65% | 57.75% | 63.39% |
+| lru | 57.33% | 64.55% | 69.92% |
+
+- 🔥 Hit rate: 1.1% better than next best (tinylfu)

-🏆 Hit rate: +0.1% better than 2nd best (tinylfu)
+>>> TestLatency: Single-Threaded Latency (go test -run=TestLatency -v)

 ### Single-Threaded Latency (sorted by Get)

 | Cache | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
 |---------------|-----------|----------|------------|-----------|----------|------------|
-| bdcache 🟡 | 7.0 | 0 | 0 | 12.0 | 0 | 0 |
-| lru 📚 | 24.0 | 0 | 0 | 22.0 | 0 | 0 |
-| ristretto ☕ | 30.0 | 13 | 0 | 69.0 | 119 | 3 |
-| otter 🦦 | 32.0 | 0 | 0 | 145.0 | 51 | 1 |
-| freecache 🆓 | 72.0 | 15 | 1 | 57.0 | 4 | 0 |
-| tinylfu 🔬 | 89.0 | 3 | 0 | 106.0 | 175 | 3 |
-
-🏆 Get latency: +243% faster than 2nd best (lru)
-🏆 Set latency: +83% faster than 2nd best (lru)
-
-### Single-Threaded Throughput (mixed read/write)
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 77.36M | 61.54M |
-| lru 📚 | 34.69M | 35.25M |
-| ristretto ☕ | 29.44M | 13.61M |
-| otter 🦦 | 25.63M | 7.10M |
-| freecache 🆓 | 12.92M | 15.65M |
-| tinylfu 🔬 | 10.87M | 8.93M |
-
-🏆 Get throughput: +123% faster than 2nd best (lru)
-🏆 Set throughput: +75% faster than 2nd best (lru)
-
-### Concurrent Throughput (mixed read/write): 4 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 45.67M | 38.65M |
-| otter 🦦 | 28.11M | 4.06M |
-| ristretto ☕ | 27.06M | 13.41M |
-| freecache 🆓 | 24.67M | 20.84M |
-| lru 📚 | 9.29M | 9.56M |
-| tinylfu 🔬 | 5.72M | 4.94M |
-
-🏆 Get throughput: +62% faster than 2nd best (otter)
-🏆 Set throughput: +85% faster than 2nd best (freecache)
-
-### Concurrent Throughput (mixed read/write): 8 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 22.31M | 22.84M |
-| otter 🦦 | 19.49M | 3.30M |
-| ristretto ☕ | 18.67M | 11.46M |
-| freecache 🆓 | 17.34M | 16.36M |
-| lru 📚 | 7.66M | 7.75M |
-| tinylfu 🔬 | 4.81M | 4.11M |
-
-🏆 Get throughput: +14% faster than 2nd best (otter)
-🏆 Set throughput: +40% faster than 2nd best (freecache)
-
-### Concurrent Throughput (mixed read/write): 12 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 26.25M | 24.04M |
-| ristretto ☕ | 21.71M | 11.49M |
-| otter 🦦 | 19.78M | 2.93M |
-| freecache 🆓 | 15.84M | 16.10M |
-| lru 📚 | 7.50M | 8.92M |
-| tinylfu 🔬 | 4.08M | 3.37M |
-
-🏆 Get throughput: +21% faster than 2nd best (ristretto)
-🏆 Set throughput: +49% faster than 2nd best (freecache)
-
-### Concurrent Throughput (mixed read/write): 16 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 16.92M | 16.00M |
-| ristretto ☕ | 15.73M | 11.97M |
-| otter 🦦 | 15.70M | 2.89M |
-| freecache 🆓 | 14.67M | 14.42M |
-| lru 📚 | 7.53M | 8.07M |
-| tinylfu 🔬 | 4.75M | 3.41M |
-
-🏆 Get throughput: +7.6% faster than 2nd best (ristretto)
-🏆 Set throughput: +11% faster than 2nd best (freecache)
-
-### Concurrent Throughput (mixed read/write): 24 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 20.08M | 16.56M |
-| ristretto ☕ | 16.76M | 12.81M |
-| otter 🦦 | 15.71M | 2.93M |
-| freecache 🆓 | 14.43M | 14.59M |
-| lru 📚 | 7.71M | 7.75M |
-| tinylfu 🔬 | 4.80M | 3.09M |
-
-🏆 Get throughput: +20% faster than 2nd best (ristretto)
-🏆 Set throughput: +14% faster than 2nd best (freecache)
-
-### Concurrent Throughput (mixed read/write): 32 threads
-
-| Cache | Get QPS | Set QPS |
-|---------------|------------|------------|
-| bdcache 🟡 | 15.84M | 15.29M |
-| ristretto ☕ | 15.36M | 13.49M |
-| otter 🦦 | 15.04M | 2.91M |
-| freecache 🆓 | 14.87M | 13.95M |
-| lru 📚 | 7.79M | 8.23M |
-| tinylfu 🔬 | 5.34M | 3.09M |
-
-🏆 Get throughput: +3.1% faster than 2nd best (ristretto)
-🏆 Set throughput: +9.6% faster than 2nd best (freecache)
-
-NOTE: Performance characteristics often have trade-offs. There are almost certainly workloads where other cache implementations are faster, but nobody blends speed and persistence the way that bdcache does.
+| sfcache | 8.0 | 0 | 0 | 21.0 | 0 | 0 |
+| lru | 23.0 | 0 | 0 | 22.0 | 0 | 0 |
+| ristretto | 32.0 | 14 | 0 | 65.0 | 118 | 3 |
+| otter | 35.0 | 0 | 0 | 131.0 | 48 | 1 |
+| freecache | 62.0 | 8 | 1 | 49.0 | 0 | 0 |
+| tinylfu | 75.0 | 0 | 0 | 97.0 | 168 | 3 |
+
+- 🔥 Get: 188% better than next best (lru)
+- 🔥 Set: 4.8% better than next best (lru)
+
+>>> TestZipfThroughput1: Zipf Throughput (1 thread) (go test -run=TestZipfThroughput1 -v)
+
+### Zipf Throughput (alpha=0.99, 75% read / 25% write): 1 threads
+
+| Cache | QPS |
+|---------------|------------|
+| sfcache | 96.94M |
+| lru | 46.24M |
+| tinylfu | 19.21M |
+| freecache | 15.02M |
+| otter | 12.95M |
+| ristretto | 11.34M |
+
+- 🔥 Throughput: 110% faster than next best (lru)
+
+>>> TestZipfThroughput16: Zipf Throughput (16 threads) (go test -run=TestZipfThroughput16 -v)
+
+### Zipf Throughput (alpha=0.99, 75% read / 25% write): 16 threads
+
+| Cache | QPS |
+|---------------|------------|
+| sfcache | 43.27M |
+| freecache | 15.08M |
+| ristretto | 14.20M |
+| otter | 10.85M |
+| lru | 5.64M |
+| tinylfu | 4.25M |
+
+- 🔥 Throughput: 187% faster than next best (freecache)
+```
+
+Cache performance is a game of balancing trade-offs. There will be workloads where other cache implementations are better, but nobody blends speed and persistence like we do.

 ## License

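The throughput tables in the new README describe a Zipf-skewed key workload with a 75% read / 25% write mix. The sketch below is illustrative only: it is not the project's benchmark harness, and the `workloadCache` interface and `mapCache` stand-in are hypothetical. It only shows how such a skewed mixed workload can be generated with Go's standard library (note that `rand.NewZipf` requires s > 1, so 1.01 stands in for the alpha=0.99 skew quoted above).

```go
package main

import (
	"fmt"
	"math/rand"
)

// workloadCache is a hypothetical stand-in for whatever cache is under test.
type workloadCache interface {
	Get(key uint64) (int, bool)
	Set(key uint64, val int)
}

// mapCache is an unbounded stand-in so the sketch runs end to end; a real
// benchmark would plug in a bounded cache implementation instead.
type mapCache map[uint64]int

func (m mapCache) Get(k uint64) (int, bool) { v, ok := m[k]; return v, ok }
func (m mapCache) Set(k uint64, v int)      { m[k] = v }

func main() {
	const ops = 1_000_000
	r := rand.New(rand.NewSource(1))
	// rand.NewZipf requires s > 1; 1.01 approximates the alpha=0.99 skew.
	zipf := rand.NewZipf(r, 1.01, 1, 1_000_000)

	var c workloadCache = mapCache{}
	reads, hits := 0, 0
	for i := 0; i < ops; i++ {
		key := zipf.Uint64()
		if r.Intn(4) == 0 { // ~25% writes
			c.Set(key, i)
			continue
		}
		reads++ // ~75% reads
		if _, ok := c.Get(key); ok {
			hits++
		}
	}
	fmt.Printf("reads=%d hit rate=%.2f%%\n", reads, 100*float64(hits)/float64(reads))
}
```

Because the stand-in map never evicts, the printed hit rate only reflects key reuse in the generated workload; the README's tables measure bounded caches, where eviction policy drives the differences.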

benchmarks/README.md

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
-# bdcache Benchmarks
+# sfcache Benchmarks

 This directory contains comparison benchmarks against popular Go cache libraries.

@@ -32,7 +32,7 @@ Your mileage **will** vary based on:

 ### The Real Differentiator: Persistence

-**bdcache's primary advantage isn't raw speed or hit rates** - it's the automatic per-item persistence designed for unreliable cloud environments:
+**sfcache's primary advantage isn't raw speed or hit rates** - it's the automatic per-item persistence designed for unreliable cloud environments:

 - **Cloud Run** - Instances shut down unpredictably after idle periods
 - **Kubernetes** - Pods can be evicted, rescheduled, or killed anytime
@@ -87,7 +87,7 @@ go test -bench=BenchmarkSpeed -benchmem
 ```

 Compares raw Get operation performance across:
-- bdcache (S3-FIFO)
+- sfcache (S3-FIFO)
 - golang-lru (LRU)
 - otter (S3-FIFO with manual persistence)
 - ristretto (TinyLFU)
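For readers unfamiliar with `go test -bench`, here is a rough sketch of what a raw Get micro-benchmark of this kind looks like. It follows the `sfcache.Memory` / `Set` / `Get` calls shown in the README quick start, but it is an assumption about the harness, not the repository's actual benchmark code.

```go
package benchmarks

import (
	"strconv"
	"testing"

	"github.com/codeGROOVE-dev/sfcache"
)

// BenchmarkGet measures raw Get latency and allocations on a pre-populated
// in-memory cache; run with: go test -bench=BenchmarkGet -benchmem
func BenchmarkGet(b *testing.B) {
	cache := sfcache.Memory[string, int]()

	// Pre-populate with a fixed key set so the timed loop measures only Get.
	keys := make([]string, 1000)
	for i := range keys {
		keys[i] = strconv.Itoa(i)
		cache.Set(keys[i], i, 0)
	}

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		cache.Get(keys[i%len(keys)])
	}
}
```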
