Commit 38320da

Thomas Stromberg authored and committed
Add test coverage, update README
1 parent 200b808 commit 38320da

File tree

21 files changed: +1189 −945 lines changed

.github/workflows/ci.yml

Lines changed: 58 additions & 0 deletions

```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      - name: Run tests
        run: make test

      - name: Generate coverage
        run: make coverage

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: coverage.out
          fail_ci_if_error: false
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      - name: Run lint
        run: make lint

  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      - name: Run competitive benchmark
        run: make competitive-benchmark
```

Makefile

Lines changed: 4 additions & 1 deletion

```diff
@@ -1,4 +1,4 @@
-.PHONY: test lint bench benchmark competitive-benchmark clean tag release update
+.PHONY: test lint bench benchmark competitive-benchmark coverage clean tag release update
 
 # Tag all modules in the repository with a version
 # Usage: make tag VERSION=v1.2.3
@@ -71,6 +71,9 @@ benchmark:
 competitive-benchmark:
 	go run ./benchmarks/runner.go -competitive
 
+coverage:
+	go test -coverprofile=coverage.out -covermode=atomic ./...
+
 clean:
 	go clean -testcache
```

README.md

Lines changed: 24 additions & 13 deletions

````diff
@@ -1,8 +1,19 @@
+<p align="center">
+  <img src="media/logo-small.png" alt="multicache logo" width="200">
+</p>
+
 # multicache
 
-multicache is an absurdly fast multi-threaded multi-tiered in-memory cache library for Go -- it offers higher performance than any other option ever created for the language.
+[![CI](https://github.com/codeGROOVE-dev/multicache/actions/workflows/ci.yml/badge.svg)](https://github.com/codeGROOVE-dev/multicache/actions/workflows/ci.yml)
+[![Go Report Card](https://goreportcard.com/badge/github.com/codeGROOVE-dev/multicache)](https://goreportcard.com/report/github.com/codeGROOVE-dev/multicache)
+[![Go Reference](https://pkg.go.dev/badge/github.com/codeGROOVE-dev/multicache.svg)](https://pkg.go.dev/github.com/codeGROOVE-dev/multicache)
+[![codecov](https://codecov.io/gh/codeGROOVE-dev/multicache/graph/badge.svg)](https://codecov.io/gh/codeGROOVE-dev/multicache)
+[![Release](https://img.shields.io/github/v/release/codeGROOVE-dev/multicache)](https://github.com/codeGROOVE-dev/multicache/releases)
+[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+
+multicache is the most well-rounded cache implementation for Go today.
 
-It offers optional persistence with compression, and has been specifically optimized for Cloud Compute environments where the process is periodically restarted, such as Kubernetes or Google Cloud Run.
+Designed for real-world applications in unstable environments, it has a higher average hit rate, higher throughput, and lower latency for production workloads than any other cache. To deal with process eviction in environments like Kubernetes, Cloud Run, or Borg, it also offers an optional persistence tier.
 
 ## Install
 
@@ -24,8 +35,8 @@ With persistence:
 store, _ := localfs.New[string, User]("myapp", "")
 cache, _ := multicache.NewTiered(store)
 
-cache.Set(ctx, "user:123", user) // sync write
-cache.SetAsync(ctx, "user:456", user) // async write
+_ = cache.Set(ctx, "user:123", user) // sync write
+_ = cache.SetAsync(ctx, "user:456", user) // async write
 ```
 
 GetSet deduplicates concurrent loads to prevent thundering herd situations:
@@ -62,14 +73,14 @@ multicache has been exhaustively tested for performance using [gocachemark](http
 
 Where multicache wins:
 
-- **Throughput**: 954M int gets/sec at 16 threads (2.2X faster than otter). 140M string sets/sec (9X faster than otter).
-- **Hit rate**: Wins 7 of 9 workloads. Highest average across all datasets (+2.9% vs otter, +0.9% vs sieve).
-- **Latency**: 8ns int gets, 10ns string gets, zero allocations (4X lower latency than otter)
+- **Throughput**: 551M int gets/sec avg (2.4X faster than otter). 89M string sets/sec avg (27X faster than otter).
+- **Hit rate**: Wins 6 of 9 workloads. Highest average across all datasets (+2.7% vs otter, +0.9% vs sieve).
+- **Latency**: 8ns int gets, 10ns string gets, zero allocations (3.5X lower latency than otter)
 
 Where others win:
 
-- **Memory**: freelru and otter use less memory per entry (73 bytes/item overhead vs 15 for otter)
-- **Specific workloads**: clock +0.07% on ibm-docker, theine +0.34% on zipf
+- **Memory**: freelru and otter use less memory per entry (73 bytes/item overhead vs 14 for otter)
+- **Specific workloads**: sieve +0.5% on thesios-block, clock +0.1% on ibm-docker, theine +0.6% on zipf
 
 Much of the credit for high throughput goes to [puzpuzpuz/xsync](https://github.com/puzpuzpuz/xsync) and its lock-free data structures.
 
@@ -81,11 +92,11 @@ multicache uses [S3-FIFO](https://s3fifo.com/), which features three queues: sma
 
 multicache has been hyper-tuned for high performance, and deviates from the original paper in a handful of ways:
 
-- **Tuned small queue** - 90% vs paper's 10%, tuned via binary search to maximize average hit rate across 9 production traces
+- **Tuned small queue** - 13.7% vs paper's 10%, tuned via binary search to maximize average hit rate across 9 production traces
 - **Full ghost frequency restoration** - returning keys restore 100% of their previous access count
-- **Reduced frequency cap** - max freq=2 vs paper's 3, tuned via binary search for best average hit rate
-- **Hot item demotion** - items accessed at least once (peakFreq≥1) get demoted to small queue instead of evicted
-- **Extended ghost capacity** - 8x cache size for ghost tracking, tuned via binary search
+- **Increased frequency cap** - max freq=5 vs paper's 3, tuned via binary search for best average hit rate
+- **Death row** - hot items (high peakFreq) get a second chance before eviction
+- **Extended ghost capacity** - 1.22x cache size for ghost tracking, tuned via binary search
 - **Ghost frequency ring buffer** - fixed-size 256-entry ring replaces map allocations
 
 ## License
````
