Flush explicitly; this library's API requires it.
Started measuring byte size via a second, more direct mechanism.
There's a significant difference between the two forms of measurement
(the checkSize method gives much flatter results than inspecting the blockstore does!);
I don't yet understand why, or what significance it might (or might not) have.
BenchmarkSet now has two variants: bulk flush and individual flush.
As you'd expect, the number of new blocks created varies markedly between them.
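The difference between the two variants can be sketched with a toy model (hypothetical code, not this library's API): mutations mark nodes dirty, and a flush writes each dirty node exactly once. Flushing once after many inserts coalesces repeated writes to the same node, while flushing per insert pays at least one write each time.

```go
package main

import "fmt"

// toyStore counts block writes; a hypothetical stand-in for a real blockstore.
type toyStore struct {
	puts int
}

// toyHAMT models only the flush behavior we care about: mutations mark nodes
// dirty, and flush writes each dirty node exactly once.
type toyHAMT struct {
	store *toyStore
	dirty map[string]bool
}

func newToyHAMT() *toyHAMT {
	return &toyHAMT{store: &toyStore{}, dirty: map[string]bool{}}
}

// set pretends each entry dirties one node, named after the key's first byte
// (in a real HAMT it would be the nodes along the key's hash path).
func (h *toyHAMT) set(key string) {
	h.dirty[key[:1]] = true
}

func (h *toyHAMT) flush() {
	h.store.puts += len(h.dirty)
	h.dirty = map[string]bool{}
}

// runBulk inserts all keys, then flushes once: repeated dirtying of the same
// node is coalesced, so writes can be far fewer than inserts.
func runBulk(keys []string) int {
	h := newToyHAMT()
	for _, k := range keys {
		h.set(k)
	}
	h.flush()
	return h.store.puts
}

// runIndividual flushes after every insert: every insert costs at least one write.
func runIndividual(keys []string) int {
	h := newToyHAMT()
	for _, k := range keys {
		h.set(k)
		h.flush()
	}
	return h.store.puts
}

func main() {
	keys := []string{"aa", "ab", "ac", "ba", "bb"}
	fmt.Println("bulk writes:", runBulk(keys))             // only 2 distinct nodes get dirtied
	fmt.Println("individual writes:", runIndividual(keys)) // 5 flushes, 1 write each
}
```

In this toy run the bulk variant writes 2 blocks for 5 inserts while the individual variant writes 5, which mirrors the "blocks per entry below 1 vs. at least 1" shapes described below.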
Included more bitwidths. I want to gather enough datapoints to draw interesting charts.
Surely we'll see some predictable change in the scaling curves across this dimension?
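The expected shape of that change can be sketched with back-of-envelope arithmetic: each HAMT level consumes `bitwidth` bits of the key's hash, so fanout is 2^bitwidth and n entries need roughly log2(n)/bitwidth levels. This formula is my assumption for illustration, not something the benchmark computes.

```go
package main

import (
	"fmt"
	"math"
)

// expectedDepth is a rough estimate of HAMT depth: each level consumes
// `bitwidth` bits of the hash (fanout 2^bitwidth), so n entries spread
// across about log2(n)/bitwidth levels. A back-of-envelope assumption.
func expectedDepth(n, bitwidth int) int {
	return int(math.Ceil(math.Log2(float64(n)) / float64(bitwidth)))
}

func main() {
	// Sweep the same bitwidths the benchmark enables.
	for _, w := range []int{3, 4, 5, 6, 7, 8} {
		fmt.Printf("bitwidth=%d fanout=%d approx depth for n=1000000: %d\n",
			w, 1<<w, expectedDepth(1000000, w))
	}
}
```

For a million entries this predicts depth falling from about 7 at bitwidth 3 to about 3 at bitwidth 8, so flatter curves at larger bitwidths would be unsurprising.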
Histograms of the sizes of blocks that appear in storage are now available.
I've commented them back out after making some observations; they produce quite a lot of noise in output.
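A minimal sketch of such a histogram, assuming power-of-two bins (the binning scheme is my assumption, not necessarily what the benchmark logged):

```go
package main

import "fmt"

// sizeHistogram buckets block sizes into power-of-two bins, keyed by each
// bin's upper bound. The binning scheme is an assumption for illustration.
func sizeHistogram(sizes []int) map[int]int {
	hist := map[int]int{}
	for _, s := range sizes {
		bin := 1
		for bin < s {
			bin <<= 1
		}
		hist[bin]++
	}
	return hist
}

func main() {
	// Hypothetical block sizes; the <=64-byte bin is the one to watch,
	// since per-block overhead dominates the cost of very small blocks.
	sizes := []int{40, 50, 70, 130, 300, 700}
	hist := sizeHistogram(sizes)
	fmt.Println("blocks of <=64 bytes:", hist[64])
}
```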
hamt_bench_test.go (60 additions, 11 deletions):
```diff
@@ -62,15 +62,15 @@ func init() {
 		1,
 		10,
 		100,
-		1000, // aka 1M
-		10000, // aka 10M -- you'll need a lot of RAM for this. Also, some patience.
+		1000, // aka 1M
+		//10000, // aka 10M -- you'll need a lot of RAM for this. Also, some patience.
 	}
 	bitwidths := []int{
 		3,
-		//4,
+		4,
 		5,
-		//6,
-		//7,
+		6,
+		7,
 		8,
 	}
 	// bucketsize-aka-arraywidth? maybe someday.
@@ -81,13 +81,20 @@ func init() {
 	}
 }
 
-// BenchmarkFill creates a large HAMT, and measures how long it takes to generate all of this many entries.
+// Histograms of blocksizes can be logged from some of the following functions, but are commented out.
+// The main thing to check for in those is whether there are any exceptionally small blocks being produced:
+// less than 64 bytes is a bit concerning, because we assume there's some overhead per block in most operations (even if the exact amount may vary situationally).
+// We do see some of these small blocks with small bitwidth parameters (e.g. 3), but almost none with larger bitwidth parameters.
+
+// BenchmarkFill creates a large HAMT, and measures how long it takes to generate all of this many entries;
+// the number of entries is varied in sub-benchmarks, denoted by their "n=" label component.
+// Flush is done once for the entire structure, meaning the number of blocks generated per entry can be much fewer than 1.
 //
 // The number of blocks saved to the blockstore per entry is reported, and the total content size in bytes.
 // The nanoseconds-per-op report on this function is not very useful, because the size of "op" varies with "n" between benchmarks.
 //
-// See "BenchmarkSet" for a probe of how long it takes to set additional entries in an already-large hamt
-// (this gives a more interesting and useful nanoseconds-per-op).
+// See "BenchmarkSet*" for a probe of how long it takes to set additional entries in an already-large hamt
+// (this gives more interesting and useful nanoseconds-per-op indicators).
@@ -114,5 +128,17 @@
-// BenchmarkSet creates a large HAMT, then resets the timer, and does another 1000 inserts,
+// BenchmarkSetBulk creates a large HAMT, then resets the timer, and does another 1000 inserts,
 // measuring the time taken for this second batch of inserts.
+// Flushing happens once after all 1000 inserts.
 //
 // The number of *additional* blocks per entry is reported.
-func BenchmarkSet(b *testing.B) {
+// This number is usually less than one, because the bulk flush means changes might be amortized.
+func BenchmarkSetBulk(b *testing.B) {
+	doBenchmarkSetSuite(b, false)
+}
+
+// BenchmarkSetIndividual is the same as BenchmarkSetBulk, but flushes more.
+// Flush happens per insert.
+//
+// The number of *additional* blocks per entry is reported.
+// Since we flush each insert individually, this number should be at least 1 --
+// however, since we choose random keys, it can still turn out lower if keys happen to collide.
+// (The Set method does not make it possible to adjust our denominator to compensate for this: it does not yield previous values nor indicators of prior presence.)
```