
Commit 8554555

Thomas Stromberg authored and committed
make terminology more consistent
1 parent 3bb3647 commit 8554555

18 files changed: +427 −424 lines

Makefile

Lines changed: 5 additions & 5 deletions

```diff
@@ -13,11 +13,11 @@ bench:
 	go test -bench=. -benchmem
 
 # Run the 5 key benchmarks (~3-5min):
-# 1. Meta Trace Hit Rate (real-world)
-# 2. Zipf Hit Rate (synthetic)
-# 3. Single-Threaded Latency
-# 4. Zipf Throughput (1 thread)
-# 5. Zipf Throughput (16 threads)
+# 1. Single-Threaded Latency
+# 2. Zipf Throughput (1 thread)
+# 3. Zipf Throughput (16 threads)
+# 4. Meta Trace Hit Rate (real-world)
+# 5. Zipf Hit Rate (synthetic)
 benchmark:
 	@echo "=== sfcache Benchmark Suite ==="
 	@cd benchmarks && go test -run=TestBenchmarkSuite -v -timeout=300s
```

README.md

Lines changed: 37 additions & 35 deletions

````diff
@@ -17,10 +17,10 @@ Designed for persistently caching API requests in an unreliable environment, thi
 - **Faster than a bat out of hell** - Best-in-class latency and throughput
 - **S3-FIFO eviction** - Better hit-rates than LRU ([learn more](https://s3fifo.com/))
 - **L2 Persistence (optional)** - Bring your own database or use built-in backends:
-  - [`persist/localfs`](persist/localfs) - Local files (gob encoding, zero dependencies)
-  - [`persist/datastore`](persist/datastore) - Google Cloud Datastore
-  - [`persist/valkey`](persist/valkey) - Valkey/Redis
-  - [`persist/cloudrun`](persist/cloudrun) - Auto-detect Cloud Run
+  - [`pkg/persist/localfs`](pkg/persist/localfs) - Local files (gob encoding, zero dependencies)
+  - [`pkg/persist/datastore`](pkg/persist/datastore) - Google Cloud Datastore
+  - [`pkg/persist/valkey`](pkg/persist/valkey) - Valkey/Redis
+  - [`pkg/persist/cloudrun`](pkg/persist/cloudrun) - Auto-detect Cloud Run
 - **Per-item TTL** - Optional expiration
 - **Graceful degradation** - Cache works even if persistence fails
 - **Zero allocation reads** - minimal GC thrashing
@@ -44,7 +44,7 @@ or with local file persistence to survive restarts:
 ```go
 import (
 	"github.com/codeGROOVE-dev/sfcache"
-	"github.com/codeGROOVE-dev/sfcache/persist/localfs"
+	"github.com/codeGROOVE-dev/sfcache/pkg/persist/localfs"
 )
 
 p, _ := localfs.New[string, User]("myapp", "")
@@ -57,6 +57,8 @@ cache.Store.Len(ctx) // Access persistence layer directly
 A persistent cache suitable for Cloud Run or local development; uses Cloud Datastore if available
 
 ```go
+import "github.com/codeGROOVE-dev/sfcache/pkg/persist/cloudrun"
+
 p, _ := cloudrun.New[string, User](ctx, "myapp")
 cache, _ := sfcache.Persistent[string, User](ctx, p)
 ```
@@ -68,36 +70,6 @@ sfcache prioritizes high hit-rates and low read latency, but it performs quite w
 Here's the results from an M4 MacBook Pro - run `make bench` to see the results for yourself:
 
 ```
->>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
-
-### Meta Trace Hit Rate (10M ops from Meta KVCache)
-
-| Cache         | 50K cache | 100K cache |
-|---------------|-----------|------------|
-| sfcache       | 68.19%    | 76.03%     |
-| otter         | 41.31%    | 55.41%     |
-| ristretto     | 40.33%    | 48.91%     |
-| tinylfu       | 53.70%    | 54.79%     |
-| freecache     | 56.86%    | 65.52%     |
-| lru           | 65.21%    | 74.22%     |
-
-- 🔥 Meta trace: 2.4% better than next best (lru)
-
->>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
-
-### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)
-
-| Cache         | Size=1% | Size=2.5% | Size=5% |
-|---------------|---------|-----------|---------|
-| sfcache       | 64.19%  | 69.23%    | 72.50%  |
-| otter         | 61.64%  | 67.94%    | 71.38%  |
-| ristretto     | 34.88%  | 41.25%    | 46.62%  |
-| tinylfu       | 63.83%  | 68.25%    | 71.56%  |
-| freecache     | 56.65%  | 57.75%    | 63.39%  |
-| lru           | 57.33%  | 64.55%    | 69.92%  |
-
-- 🔥 Hit rate: 1.1% better than next best (tinylfu)
-
 >>> TestLatency: Single-Threaded Latency (go test -run=TestLatency -v)
 
 ### Single-Threaded Latency (sorted by Get)
@@ -143,6 +115,36 @@ Here's the results from an M4 MacBook Pro - run `make bench` to see the results
 | tinylfu       | 4.25M   |
 
 - 🔥 Throughput: 187% faster than next best (freecache)
+
+>>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
+
+### Meta Trace Hit Rate (10M ops from Meta KVCache)
+
+| Cache         | 50K cache | 100K cache |
+|---------------|-----------|------------|
+| sfcache       | 68.19%    | 76.03%     |
+| otter         | 41.31%    | 55.41%     |
+| ristretto     | 40.33%    | 48.91%     |
+| tinylfu       | 53.70%    | 54.79%     |
+| freecache     | 56.86%    | 65.52%     |
+| lru           | 65.21%    | 74.22%     |
+
+- 🔥 Meta trace: 2.4% better than next best (lru)
+
+>>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
+
+### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)
+
+| Cache         | Size=1% | Size=2.5% | Size=5% |
+|---------------|---------|-----------|---------|
+| sfcache       | 64.19%  | 69.23%    | 72.50%  |
+| otter         | 61.64%  | 67.94%    | 71.38%  |
+| ristretto     | 34.88%  | 41.25%    | 46.62%  |
+| tinylfu       | 63.83%  | 68.25%    | 71.56%  |
+| freecache     | 56.65%  | 57.75%    | 63.39%  |
+| lru           | 57.33%  | 64.55%    | 69.92%  |
+
+- 🔥 Hit rate: 1.1% better than next best (tinylfu)
 ```
 
 Cache performance is a game of balancing trade-offs. There will be workloads where other cache implementations are better, but nobody blends speed and persistence like we do.
````
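Putting the renamed import paths together, here is a minimal sketch of the quick start under the new `pkg/persist` layout; the `User` type, error handling, and log output are illustrative additions, not from the README:

```go
package main

import (
	"context"
	"log"

	"github.com/codeGROOVE-dev/sfcache"
	"github.com/codeGROOVE-dev/sfcache/pkg/persist/localfs"
)

// User is a placeholder value type; any type works with the generic API.
type User struct{ Name string }

func main() {
	ctx := context.Background()

	// New import path: pkg/persist/localfs (previously persist/localfs).
	p, err := localfs.New[string, User]("myapp", "")
	if err != nil {
		log.Fatal(err)
	}

	cache, err := sfcache.Persistent[string, User](ctx, p)
	if err != nil {
		log.Fatal(err)
	}
	defer cache.Close()

	// Memory is always updated first; an error here means only the
	// persistence write failed (graceful degradation).
	if err := cache.Set(ctx, "user:123", User{Name: "Ada"}); err != nil {
		log.Printf("persist write failed: %v", err)
	}

	if u, ok, err := cache.Get(ctx, "user:123"); err == nil && ok {
		log.Printf("hit: %+v", u)
	}
}
```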

benchmarks/benchmark_test.go

Lines changed: 11 additions & 11 deletions

```diff
@@ -35,25 +35,25 @@ func TestBenchmarkSuite(t *testing.T) {
 	fmt.Println("sfcache benchmark bake-off")
 	fmt.Println()
 
-	// 1. Real-world hit rate from Meta KVCache production trace
-	printTestHeader("TestMetaTrace", "Meta Trace Hit Rate (10M ops)")
-	runMetaTraceHitRate()
-
-	// 2. Synthetic hit rate with Zipf distribution
-	printTestHeader("TestHitRate", "Zipf Hit Rate")
-	runHitRateBenchmark()
-
-	// 3. Single-threaded latency
+	// 1. Single-threaded latency
 	printTestHeader("TestLatency", "Single-Threaded Latency")
 	runPerformanceBenchmark()
 
-	// 4. Single-threaded throughput (Zipf)
+	// 2. Single-threaded throughput (Zipf)
 	printTestHeader("TestZipfThroughput1", "Zipf Throughput (1 thread)")
 	runZipfThroughputBenchmark(1)
 
-	// 5. Multi-threaded throughput (Zipf)
+	// 3. Multi-threaded throughput (Zipf)
 	printTestHeader("TestZipfThroughput16", "Zipf Throughput (16 threads)")
 	runZipfThroughputBenchmark(16)
+
+	// 4. Real-world hit rate from Meta KVCache production trace
+	printTestHeader("TestMetaTrace", "Meta Trace Hit Rate (10M ops)")
+	runMetaTraceHitRate()
+
+	// 5. Synthetic hit rate with Zipf distribution
+	printTestHeader("TestHitRate", "Zipf Hit Rate")
+	runHitRateBenchmark()
 }
 
 func printTestHeader(testName, description string) {
```
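The `>>> TestName: ... (go test -run=TestName -v)` banners in the README output above suggest what `printTestHeader` does. A plausible sketch consistent with that output, assuming the file's existing `fmt` import; the real body is outside this diff:

```go
// printTestHeader prints the suite banner seen in the README output,
// including the go test command to reproduce a single benchmark.
// Sketch only - reconstructed from the output format, not from the source.
func printTestHeader(testName, description string) {
	fmt.Printf(">>> %s: %s (go test -run=%s -v)\n\n", testName, description, testName)
}
```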

memory.go

Lines changed: 12 additions & 12 deletions

```diff
@@ -40,7 +40,7 @@ func Memory[K comparable, V any](opts ...Option) *MemoryCache[K, V] {
 // Get retrieves a value from the cache.
 // Returns the value and true if found, or the zero value and false if not found.
 func (c *MemoryCache[K, V]) Get(key K) (V, bool) {
-	return c.memory.getFromMemory(key)
+	return c.memory.get(key)
 }
 
 // GetOrSet retrieves a value from the cache, or computes and stores it if not found.
@@ -49,7 +49,7 @@ func (c *MemoryCache[K, V]) Get(key K) (V, bool) {
 // This is optimized to perform a single shard lookup and lock acquisition.
 func (c *MemoryCache[K, V]) GetOrSet(key K, loader func() V, ttl ...time.Duration) V {
 	// We can't use the optimized path with a loader since we'd hold the lock during loader()
-	if val, ok := c.memory.getFromMemory(key); ok {
+	if val, ok := c.memory.get(key); ok {
 		return val
 	}
 	val := loader()
@@ -65,34 +65,34 @@ func (c *MemoryCache[K, V]) SetIfAbsent(key K, value V, ttl ...time.Duration) (V
 	if len(ttl) > 0 {
 		t = ttl[0]
 	}
-	return c.memory.getOrSetMemory(key, value, timeToNano(c.expiry(t)))
+	return c.memory.getOrSet(key, value, timeToNano(c.expiry(t)))
 }
 
 // Set stores a value in the cache.
 // If no TTL is provided, the default TTL is used.
-// If no default TTL is configured, the item never expires.
+// If no default TTL is configured, the entry never expires.
 func (c *MemoryCache[K, V]) Set(key K, value V, ttl ...time.Duration) {
 	var t time.Duration
 	if len(ttl) > 0 {
 		t = ttl[0]
 	}
-	c.memory.setToMemory(key, value, timeToNano(c.expiry(t)))
+	c.memory.set(key, value, timeToNano(c.expiry(t)))
 }
 
 // Delete removes a value from the cache.
 func (c *MemoryCache[K, V]) Delete(key K) {
-	c.memory.deleteFromMemory(key)
+	c.memory.del(key)
 }
 
-// Len returns the number of items in the cache.
+// Len returns the number of entries in the cache.
 func (c *MemoryCache[K, V]) Len() int {
-	return c.memory.memoryLen()
+	return c.memory.len()
 }
 
 // Flush removes all entries from the cache.
 // Returns the number of entries removed.
 func (c *MemoryCache[K, V]) Flush() int {
-	return c.memory.flushMemory()
+	return c.memory.flush()
 }
 
 // Close releases resources held by the cache.
@@ -132,7 +132,7 @@ func defaultConfig() *config {
 // Option configures a MemoryCache or PersistentCache.
 type Option func(*config)
 
-// WithSize sets the maximum number of items in the memory cache.
+// WithSize sets the maximum number of entries in the memory cache.
 func WithSize(n int) Option {
 	return func(c *config) {
 		c.size = n
@@ -155,8 +155,8 @@ func WithGhostRatio(r float64) Option {
 	}
 }
 
-// WithTTL sets the default TTL for cache items.
-// Items without an explicit TTL will use this value.
+// WithTTL sets the default TTL for cache entries.
+// Entries without an explicit TTL will use this value.
 func WithTTL(d time.Duration) Option {
 	return func(c *config) {
 		c.defaultTTL = d
```
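These renames (`getFromMemory` → `get`, `setToMemory` → `set`, `deleteFromMemory` → `del`, and so on) only touch the unexported s3fifo methods, so the exported `MemoryCache` API is unchanged. A minimal usage sketch of that public surface, assuming only what this diff shows:

```go
package main

import (
	"fmt"
	"time"

	"github.com/codeGROOVE-dev/sfcache"
)

func main() {
	// Options visible in this diff: WithSize and WithTTL.
	c := sfcache.Memory[string, int](sfcache.WithSize(1000), sfcache.WithTTL(time.Minute))
	defer c.Close()

	c.Set("a", 1)                 // uses the default TTL (one minute here)
	c.Set("b", 2, 10*time.Second) // explicit per-entry TTL
	v := c.GetOrSet("c", func() int { return 3 })

	if got, ok := c.Get("a"); ok {
		fmt.Println("a =", got)
	}
	fmt.Println("len =", c.Len(), "c =", v)

	c.Delete("b")
	fmt.Println("flushed", c.Flush(), "entries")
}
```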

persistent.go

Lines changed: 15 additions & 15 deletions

```diff
@@ -17,7 +17,7 @@ type PersistentCache[K comparable, V any] struct {
 	// cache.Store.Len(ctx)
 	// cache.Store.Flush(ctx)
 	// cache.Store.Cleanup(ctx, maxAge)
-	Store persist.Layer[K, V]
+	Store persist.Store[K, V]
 
 	memory     *s3fifo[K, V]
 	defaultTTL time.Duration
@@ -43,7 +43,7 @@ type PersistentCache[K comparable, V any] struct {
 // cache.Set(ctx, "user:123", user, time.Hour) // explicit TTL
 // user, ok, err := cache.Get(ctx, "user:123")
 // storeCount, _ := cache.Store.Len(ctx)
-func Persistent[K comparable, V any](ctx context.Context, p persist.Layer[K, V], opts ...Option) (*PersistentCache[K, V], error) {
+func Persistent[K comparable, V any](ctx context.Context, p persist.Store[K, V], opts ...Option) (*PersistentCache[K, V], error) {
 	cfg := defaultConfig()
 	for _, opt := range opts {
 		opt(cfg)
@@ -77,7 +77,7 @@ func (c *PersistentCache[K, V]) doWarmup(ctx context.Context) {
 	entryCh, errCh := c.Store.LoadRecent(ctx, c.warmup)
 
 	for entry := range entryCh {
-		c.memory.setToMemory(entry.Key, entry.Value, timeToNano(entry.Expiry))
+		c.memory.set(entry.Key, entry.Value, timeToNano(entry.Expiry))
 	}
 
 	// Drain error channel (errors silently ignored for best-effort warmup)
@@ -90,7 +90,7 @@ func (c *PersistentCache[K, V]) doWarmup(ctx context.Context) {
 //nolint:gocritic // unnamedResult - public API signature is intentionally clear without named returns
 func (c *PersistentCache[K, V]) Get(ctx context.Context, key K) (V, bool, error) {
 	// Check memory first
-	if val, ok := c.memory.getFromMemory(key); ok {
+	if val, ok := c.memory.get(key); ok {
 		return val, true, nil
 	}
 
@@ -102,7 +102,7 @@ func (c *PersistentCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)
 	}
 
 	// Check persistence
-	val, expiry, found, err := c.Store.Load(ctx, key)
+	val, expiry, found, err := c.Store.Get(ctx, key)
 	if err != nil {
 		return zero, false, fmt.Errorf("persistence load: %w", err)
 	}
@@ -112,7 +112,7 @@ func (c *PersistentCache[K, V]) Get(ctx context.Context, key K) (V, bool, error)
 	}
 
 	// Add to memory cache for future hits
-	c.memory.setToMemory(key, val, timeToNano(expiry))
+	c.memory.set(key, val, timeToNano(expiry))
 
 	return val, true, nil
 }
@@ -171,10 +171,10 @@ func (c *PersistentCache[K, V]) Set(ctx context.Context, key K, value V, ttl ...
 	}
 
 	// ALWAYS update memory first - reliability guarantee
-	c.memory.setToMemory(key, value, timeToNano(expiry))
+	c.memory.set(key, value, timeToNano(expiry))
 
 	// Update persistence
-	if err := c.Store.Store(ctx, key, value, expiry); err != nil {
+	if err := c.Store.Set(ctx, key, value, expiry); err != nil {
 		return fmt.Errorf("persistence store failed: %w", err)
 	}
 
@@ -199,14 +199,14 @@ func (c *PersistentCache[K, V]) SetAsync(ctx context.Context, key K, value V, tt
 	}
 
 	// ALWAYS update memory first - reliability guarantee (synchronous)
-	c.memory.setToMemory(key, value, timeToNano(expiry))
+	c.memory.set(key, value, timeToNano(expiry))
 
 	// Update persistence asynchronously (fire-and-forget)
 	//nolint:contextcheck // Intentionally detached - persistence should complete even if caller cancels
 	go func() {
 		storeCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
 		defer cancel()
-		if err := c.Store.Store(storeCtx, key, value, expiry); err != nil {
+		if err := c.Store.Set(storeCtx, key, value, expiry); err != nil {
 			slog.Error("async persistence failed", "key", key, "error", err)
 		}
 	}()
@@ -218,7 +218,7 @@ func (c *PersistentCache[K, V]) SetAsync(ctx context.Context, key K, value V, tt
 // The value is always removed from memory. Returns an error if persistence deletion fails.
 func (c *PersistentCache[K, V]) Delete(ctx context.Context, key K) error {
 	// Remove from memory first (always succeeds)
-	c.memory.deleteFromMemory(key)
+	c.memory.del(key)
 
 	// Validate key before accessing persistence (security: prevent path traversal)
 	if err := c.Store.ValidateKey(key); err != nil {
@@ -235,7 +235,7 @@ func (c *PersistentCache[K, V]) Delete(ctx context.Context, key K) error {
 // Flush removes all entries from the cache, including persistent storage.
 // Returns the total number of entries removed from memory and persistence.
 func (c *PersistentCache[K, V]) Flush(ctx context.Context) (int, error) {
-	memoryRemoved := c.memory.flushMemory()
+	memoryRemoved := c.memory.flush()
 
 	persistRemoved, err := c.Store.Flush(ctx)
 	if err != nil {
@@ -245,10 +245,10 @@ func (c *PersistentCache[K, V]) Flush(ctx context.Context) (int, error) {
 	return memoryRemoved + persistRemoved, nil
 }
 
-// Len returns the number of items in the memory cache.
-// For persistence item count, use cache.Store.Len(ctx).
+// Len returns the number of entries in the memory cache.
+// For persistence entry count, use cache.Store.Len(ctx).
 func (c *PersistentCache[K, V]) Len() int {
-	return c.memory.memoryLen()
+	return c.memory.len()
 }
 
 // Close releases resources held by the cache.
```
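For context on the `persist.Layer` → `persist.Store` rename, this is roughly the interface shape implied by the call sites in this diff; the method names follow the new terminology (`Get`/`Set` instead of `Load`/`Store`), but the signatures here are inferred and the actual definition in `pkg/persist` may differ:

```go
package persist

import (
	"context"
	"time"
)

// Store is a sketch of the renamed persistence interface, reconstructed
// from the call sites in this commit. Signatures are inferred, not copied
// from pkg/persist.
type Store[K comparable, V any] interface {
	// Get replaces the old Load method.
	Get(ctx context.Context, key K) (value V, expiry time.Time, found bool, err error)
	// Set replaces the old Store method.
	Set(ctx context.Context, key K, value V, expiry time.Time) error
	// Delete is implied by PersistentCache.Delete's error handling.
	Delete(ctx context.Context, key K) error
	// LoadRecent streams recent entries for best-effort warmup.
	LoadRecent(ctx context.Context, limit int) (<-chan Entry[K, V], <-chan error)
	// ValidateKey rejects unsafe keys (e.g. path traversal in localfs).
	ValidateKey(key K) error
	// Flush removes all entries, returning how many were removed.
	Flush(ctx context.Context) (int, error)
	// Len counts stored entries.
	Len(ctx context.Context) (int, error)
	// Cleanup removes entries older than maxAge.
	Cleanup(ctx context.Context, maxAge time.Duration) error
}

// Entry mirrors the entry.Key/entry.Value/entry.Expiry fields used by
// doWarmup above; field types are inferred.
type Entry[K comparable, V any] struct {
	Key    K
	Value  V
	Expiry time.Time
}
```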
