@@ -68,36 +70,6 @@ sfcache prioritizes high hit-rates and low read latency, but it performs quite w
 Here's the results from an M4 MacBook Pro - run `make bench` to see the results for yourself:
 
 ```
->>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
-
-### Meta Trace Hit Rate (10M ops from Meta KVCache)
-
-| Cache | 50K cache | 100K cache |
-|---------------|-----------|------------|
-| sfcache | 68.19% | 76.03% |
-| otter | 41.31% | 55.41% |
-| ristretto | 40.33% | 48.91% |
-| tinylfu | 53.70% | 54.79% |
-| freecache | 56.86% | 65.52% |
-| lru | 65.21% | 74.22% |
-
-- 🔥 Meta trace: 2.4% better than next best (lru)
-
->>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
-
-### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)
-
-| Cache | Size=1% | Size=2.5% | Size=5% |
-|---------------|---------|-----------|---------|
-| sfcache | 64.19% | 69.23% | 72.50% |
-| otter | 61.64% | 67.94% | 71.38% |
-| ristretto | 34.88% | 41.25% | 46.62% |
-| tinylfu | 63.83% | 68.25% | 71.56% |
-| freecache | 56.65% | 57.75% | 63.39% |
-| lru | 57.33% | 64.55% | 69.92% |
-
 >>> TestLatency: Single-Threaded Latency (go test -run=TestLatency -v)
 
 ### Single-Threaded Latency (sorted by Get)
@@ -143,6 +115,36 @@ Here's the results from an M4 MacBook Pro - run `make bench` to see the results
 | tinylfu | 4.25M |
 
 - 🔥 Throughput: 187% faster than next best (freecache)
+
+>>> TestMetaTrace: Meta Trace Hit Rate (10M ops) (go test -run=TestMetaTrace -v)
+
+### Meta Trace Hit Rate (10M ops from Meta KVCache)
+
+| Cache | 50K cache | 100K cache |
+|---------------|-----------|------------|
+| sfcache | 68.19% | 76.03% |
+| otter | 41.31% | 55.41% |
+| ristretto | 40.33% | 48.91% |
+| tinylfu | 53.70% | 54.79% |
+| freecache | 56.86% | 65.52% |
+| lru | 65.21% | 74.22% |
+
+- 🔥 Meta trace: 2.4% better than next best (lru)
+
+>>> TestHitRate: Zipf Hit Rate (go test -run=TestHitRate -v)
+
+### Hit Rate (Zipf alpha=0.99, 1M ops, 1M keyspace)
+
+| Cache | Size=1% | Size=2.5% | Size=5% |
+|---------------|---------|-----------|---------|
+| sfcache | 64.19% | 69.23% | 72.50% |
+| otter | 61.64% | 67.94% | 71.38% |
+| ristretto | 34.88% | 41.25% | 46.62% |
+| tinylfu | 63.83% | 68.25% | 71.56% |
+| freecache | 56.65% | 57.75% | 63.39% |
+| lru | 57.33% | 64.55% | 69.92% |
+
+- 🔥 Hit rate: 1.1% better than next best (tinylfu)
 ```
 
 Cache performance is a game of balancing trade-offs. There will be workloads where other cache implementations are better, but nobody blends speed and persistence like we do.
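For readers who want to see how hit-rate numbers like the Zipf tables above are produced, here is a minimal, self-contained Go sketch: it replays a Zipf-distributed key trace against a toy LRU cache and reports the hit rate. This is an illustration only, not sfcache's benchmark harness — the `lru` type and `hitRate` function are invented for this sketch, and the standard library's `rand.NewZipf` requires an exponent greater than 1, so `1.07` stands in for the `alpha=0.99` used in the real benchmark.

```go
package main

import (
	"container/list"
	"fmt"
	"math/rand"
)

// lru is a deliberately minimal LRU cache used only to illustrate
// hit-rate measurement; it is not sfcache's implementation.
type lru struct {
	cap   int
	ll    *list.List
	items map[uint64]*list.Element
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, ll: list.New(), items: make(map[uint64]*list.Element)}
}

// Get reports whether the key is cached, promoting it on a hit.
func (c *lru) Get(k uint64) bool {
	if e, ok := c.items[k]; ok {
		c.ll.MoveToFront(e)
		return true
	}
	return false
}

// Put inserts the key, evicting the least recently used entry if full.
func (c *lru) Put(k uint64) {
	if e, ok := c.items[k]; ok {
		c.ll.MoveToFront(e)
		return
	}
	c.items[k] = c.ll.PushFront(k)
	if c.ll.Len() > c.cap {
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(uint64))
	}
}

// hitRate replays ops Zipf-distributed lookups over the keyspace and
// returns the fraction of Gets that were hits.
func hitRate(cacheSize, keyspace, ops int, seed int64) float64 {
	r := rand.New(rand.NewSource(seed))
	// math/rand's Zipf requires s > 1, so 1.07 approximates alpha=0.99.
	z := rand.NewZipf(r, 1.07, 1, uint64(keyspace-1))
	c := newLRU(cacheSize)
	hits := 0
	for i := 0; i < ops; i++ {
		k := z.Uint64()
		if c.Get(k) {
			hits++
		} else {
			c.Put(k)
		}
	}
	return float64(hits) / float64(ops)
}

func main() {
	// Cache sized at 1% of a 1M keyspace, mirroring the Size=1% column.
	fmt.Printf("lru hit rate: %.2f%%\n", 100*hitRate(10_000, 1_000_000, 1_000_000, 1))
}
```

The same loop structure extends naturally to replaying a recorded production trace (as in the Meta KVCache test) by reading keys from a file instead of drawing them from a Zipf generator.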