
Commit 117a5d1

Thomas Stromberg authored and committed
README
1 parent 5eb8462 commit 117a5d1

File tree: 2 files changed, +44 -36 lines


Makefile

Lines changed: 2 additions & 2 deletions

```diff
@@ -8,12 +8,12 @@ tag:
 		exit 1; \
 	fi
 	@echo "Tagging all modules with $(VERSION)..."
-	@git tag $(VERSION)
+	@git tag -a $(VERSION) -m "$(VERSION)"
 	@find . -name go.mod -not -path "./go.mod" | while read mod; do \
 		dir=$$(dirname $$mod); \
 		dir=$${dir#./}; \
 		echo " $$dir/$(VERSION)"; \
-		git tag $$dir/$(VERSION); \
+		git tag -a $$dir/$(VERSION) -m "$(VERSION)"; \
 	done
 	@echo ""
 	@echo "Created tags:"
```
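The Makefile change above swaps lightweight tags for annotated ones. As a minimal sketch (not part of this commit; the repo and tag names are made up), here is the practical difference: an annotated tag created with `git tag -a` is its own object carrying a tagger and message, while a lightweight tag is just a ref pointing at the commit, and tools like `git describe` only consider annotated tags by default.

```shell
# Hypothetical demo in a throwaway repo: lightweight vs annotated tags.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git tag v0.0.1                                   # lightweight: just a ref
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v0.0.2 -m "v0.0.2"                    # annotated: a tag object
light=$(git cat-file -t v0.0.1)                  # object type behind each tag
annot=$(git cat-file -t v0.0.2)
echo "$light $annot"                             # -> commit tag
```

The `-m "$(VERSION)"` in the Makefile supplies the tag message non-interactively, which matters when tagging many submodules in a loop.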

README.md

Lines changed: 42 additions & 34 deletions

````diff
@@ -65,86 +65,94 @@ cache, _ := sfcache.NewTiered[string, User](p)
 
 ## Performance against the Competition
 
-sfcache prioritizes high hit-rates and low read latency, but it performs quite well all around.
-
-Here's the results from an M4 MacBook Pro - run `make bench` to see the results for yourself:
+sfcache prioritizes high hit-rates and low read latency, but it's excellent all around. Run `make bench` to see the results for yourself:
 
 ```
->>> TestLatency: Single-Threaded Latency (go test -run=TestLatency -v)
+>>> TestLatencyNoEviction: Latency - No Evictions (Set cycles within cache size) (go test -run=TestLatencyNoEviction -v)
+| Cache         | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
+|---------------|-----------|----------|------------|-----------|----------|------------|
+| sfcache       | 7.0       | 0        | 0          | 21.0      | 0        | 0          |
+| lru           | 21.0      | 0        | 0          | 21.0      | 0        | 0          |
+| ristretto     | 32.0      | 14       | 0          | 76.0      | 121      | 4          |
+| otter         | 34.0      | 0        | 0          | 137.0     | 51       | 1          |
+| freecache     | 57.0      | 8        | 1          | 48.0      | 0        | 0          |
+| tinylfu       | 71.0      | 0        | 0          | 108.0     | 168      | 3          |
 
-### Single-Threaded Latency (sorted by Get)
+- 🔥 Get: 200% better than next best (lru)
+- 🔥 Set: 0.000% better than next best (lru)
 
+>>> TestLatencyWithEviction: Latency - With Evictions (Set uses 20x unique keys) (go test -run=TestLatencyWithEviction -v)
 | Cache         | Get ns/op | Get B/op | Get allocs | Set ns/op | Set B/op | Set allocs |
 |---------------|-----------|----------|------------|-----------|----------|------------|
-| sfcache       | 8.0       | 0        | 0          | 21.0      | 0        | 0          |
-| lru           | 23.0      | 0        | 0          | 22.0      | 0        | 0          |
-| ristretto     | 32.0      | 14       | 0          | 65.0      | 118      | 3          |
-| otter         | 35.0      | 0        | 0          | 131.0     | 48       | 1          |
-| freecache     | 62.0      | 8        | 1          | 49.0      | 0        | 0          |
-| tinylfu       | 75.0      | 0        | 0          | 97.0      | 168      | 3          |
+| sfcache       | 8.0       | 0        | 0          | 79.0      | 0        | 0          |
+| lru           | 21.0      | 0        | 0          | 80.0      | 80       | 1          |
+| ristretto     | 30.0      | 13       | 0          | 74.0      | 119      | 3          |
+| otter         | 34.0      | 0        | 0          | 175.0     | 60       | 1          |
+| freecache     | 58.0      | 8        | 1          | 94.0      | 1        | 0          |
+| tinylfu       | 73.0      | 0        | 0          | 108.0     | 168      | 3          |
 
-- 🔥 Get: 188% better than next best (lru)
-- 🔥 Set: 4.8% better than next best (lru)
+- 🔥 Get: 162% better than next best (lru)
+- 💧 Set: 6.8% worse than best (ristretto)
 
 >>> TestZipfThroughput1: Zipf Throughput (1 thread) (go test -run=TestZipfThroughput1 -v)
 
 ### Zipf Throughput (alpha=0.99, 75% read / 25% write): 1 threads
 
 | Cache         | QPS        |
 |---------------|------------|
-| sfcache       | 96.94M     |
-| lru           | 46.24M     |
-| tinylfu       | 19.21M     |
-| freecache     | 15.02M     |
-| otter         | 12.95M     |
-| ristretto     | 11.34M     |
+| sfcache       | 98.80M     |
+| lru           | 47.40M     |
+| tinylfu       | 20.10M     |
+| freecache     | 15.59M     |
+| otter         | 13.37M     |
+| ristretto     | 11.41M     |
 
-- 🔥 Throughput: 110% faster than next best (lru)
+- 🔥 Throughput: 108% faster than next best (lru)
 
 >>> TestZipfThroughput16: Zipf Throughput (16 threads) (go test -run=TestZipfThroughput16 -v)
 
 ### Zipf Throughput (alpha=0.99, 75% read / 25% write): 16 threads
 
 | Cache         | QPS        |
 |---------------|------------|
-| sfcache       | 43.27M     |
+| sfcache       | 42.18M     |
 | freecache     | 15.08M     |
-| ristretto     | 14.20M     |
-| otter         | 10.85M     |
-| lru           | 5.64M      |
-| tinylfu       | 4.25M      |
+| ristretto     | 14.10M     |
+| otter         | 10.70M     |
+| lru           | 6.03M      |
+| tinylfu       | 4.21M      |
 
-- 🔥 Throughput: 187% faster than next best (freecache)
+- 🔥 Throughput: 180% faster than next best (freecache)
 
 >>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
 
 ### Meta Trace Hit Rate (10M ops from Meta KVCache)
 
 | Cache         | 50K cache | 100K cache |
 |---------------|-----------|------------|
-| sfcache       | 68.19%    | 76.03%     |
-| otter         | 41.31%    | 55.41%     |
-| ristretto     | 40.33%    | 48.91%     |
+| sfcache       | 68.53%    | 76.34%     |
+| otter         | 41.37%    | 56.14%     |
+| ristretto     | 40.35%    | 48.95%     |
 | tinylfu       | 53.70%    | 54.79%     |
 | freecache     | 56.86%    | 65.52%     |
 | lru           | 65.21%    | 74.22%     |
 
-- 🔥 Meta trace: 2.4% better than next best (lru)
+- 🔥 Meta trace: 2.9% better than next best (lru)
 
 >>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
 
 ### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)
 
 | Cache         | Size=1% | Size=2.5% | Size=5% |
|---------------|---------|-----------|---------|
-| sfcache       | 64.19%  | 69.23%    | 72.50%  |
-| otter         | 61.64%  | 67.94%    | 71.38%  |
-| ristretto     | 34.88%  | 41.25%    | 46.62%  |
+| sfcache       | 64.41%  | 69.24%    | 72.57%  |
+| otter         | 62.28%  | 67.81%    | 71.42%  |
+| ristretto     | 34.87%  | 41.25%    | 46.49%  |
 | tinylfu       | 63.83%  | 68.25%    | 71.56%  |
 | freecache     | 56.65%  | 57.75%    | 63.39%  |
 | lru           | 57.33%  | 64.55%    | 69.92%  |
 
-- 🔥 Hit rate: 1.1% better than next best (tinylfu)
+- 🔥 Hit rate: 1.3% better than next best (tinylfu)
 ```
 
 Cache performance is a game of balancing trade-offs. There will be workloads where other cache implementations are better, but nobody blends speed and persistence like we do.
````
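The benchmark headings above describe a Zipf-skewed workload with a 75% read / 25% write mix. As a rough, hypothetical sketch of what such a harness looks like (not sfcache's actual benchmark; `hitRate`, the map-based cache, and the random-eviction policy are all stand-ins, and Go's `rand.NewZipf` requires an exponent greater than 1, so 1.01 substitutes for the alpha=0.99 the tables use):

```go
package main

import (
	"fmt"
	"math/rand"
)

// hitRate runs a 75% read / 25% write loop over Zipf-distributed keys
// against a toy capacity-bounded map cache and reports the Get hit rate.
func hitRate(capacity, keyspace uint64, ops int) float64 {
	r := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	zipf := rand.NewZipf(r, 1.01, 1, keyspace-1)
	cache := make(map[uint64]struct{}, capacity)
	var hits, gets int
	for i := 0; i < ops; i++ {
		k := zipf.Uint64()
		if r.Intn(4) == 0 { // 25% writes
			if uint64(len(cache)) >= capacity {
				for evict := range cache { // arbitrary-eviction stand-in
					delete(cache, evict)
					break
				}
			}
			cache[k] = struct{}{}
		} else { // 75% reads
			gets++
			if _, ok := cache[k]; ok {
				hits++
			}
		}
	}
	return float64(hits) / float64(gets)
}

func main() {
	// Cache sized at 1% of a 100K keyspace, mirroring the "Size=1%" column.
	fmt.Printf("hit rate: %.1f%%\n", 100*hitRate(1000, 100000, 1000000))
}
```

Real harnesses differ mainly in the eviction policy under test and in running the loop across goroutines for the multi-threaded throughput numbers.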
