Fill in the scale table a bit more; add comments with instructions for charting.
The largest scale entry in the table is commented out because it's an OOM-causer on some of my hardware,
and I'd expect these four orders of magnitude (base 10) to give some impression of the progression anyway.
I added (roughly) halfway points to the scale table just to try to fill things out a bit more.
(You'll see why in the results discussion; some things are noisier than expected.)
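For concreteness, the table is shaped along these lines; this is only an illustrative sketch (the variable name and the exact values here are made up, not copied from the benchmark file):

    // Illustrative sketch of the scale table shape described above; the name and
    // values are hypothetical, not the actual table in this commit.
    var scales = []int{
        100,
        500,     // (roughly) halfway point
        1_000,
        5_000,   // (roughly) halfway point
        10_000,
        50_000,  // (roughly) halfway point
        100_000,
        // 1_000_000, // commented out: OOM-causer on some hardware
    }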
I'm leaving the charting tools to be handled out of band;
I don't think adding them as transitive dependencies of this repo would be good.
Results?
I'm a little perplexed, actually.
- BenchmarkFill-blocks-per-entry-vs-scale.svg ...
- is completely all over the map. There appears to be *no* correlation between the bitwidth parameter and growth rate for block counts.
- BenchmarkFill-totalBytes-per-entry-vs-scale.svg ...
- is also completely all over the map; no correlations. (See the metric-reporting sketch after this list for how these per-entry numbers get emitted.)
- BenchmarkFind-speed-vs-scale.svg ...
- does sort of trend up as one would expect;
- the noise is almost bigger than the signal, which is certainly interesting (but not wrong).
- There's oddness for bitwidth=8: unlike all others, it's actually slower at the smallest scales. Maybe worth questioning?
- BenchmarkSetBulk-addntlBlocks-per-addntlEntry-vs-scale.svg and BenchmarkSetIndividual-addntlBlocks-per-addntlEntry-vs-scale.svg ...
- these both have reasonable up-and-to-the-right trends.
- with batched flush, results are somewhat noisier; per-insertion flush is fairly smooth.
- it's a clear progression: higher bitwidth -> fewer new blocks created per insertion.
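For reference on where those per-entry numbers can come from: Go benchmarks can emit custom units via testing.B.ReportMetric, and charting tools pick those up from the benchmark output. The sketch below shows that mechanism only; the helper function, the placeholder counts, and the scale values are assumptions, not the code in this repo.

    package hamt_test

    import (
        "fmt"
        "testing"
    )

    // fillTree is a hypothetical stand-in for whatever builds the structure with
    // n entries and reports how many blocks and bytes it produced.
    func fillTree(n int) (blocks, totalBytes int64) {
        return int64(n), int64(n) * 64 // placeholder numbers
    }

    func BenchmarkFill(b *testing.B) {
        for _, scale := range []int{100, 1_000, 10_000, 100_000} { // hypothetical scales
            b.Run(fmt.Sprintf("n=%d", scale), func(b *testing.B) {
                var blocks, totalBytes int64
                for i := 0; i < b.N; i++ {
                    nb, nby := fillTree(scale)
                    blocks += nb
                    totalBytes += nby
                }
                // Averages per entry; custom units like these are what the
                // per-entry-vs-scale charts plot.
                b.ReportMetric(float64(blocks)/float64(b.N)/float64(scale), "blocks/entry")
                b.ReportMetric(float64(totalBytes)/float64(b.N)/float64(scale), "bytes/entry")
            })
        }
    }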
Note that due to the way our randomization is seeded, even though all these keys and values are "random",
they're deterministic between benchmark runs, and they're also the same for each benchmark across parameter sets.
(A larger b.N adds noise, but I've been running with fixed iteration counts, e.g. -benchtime=10x or -benchtime=30x,
because using a time budget causes the larger values of "n" for scale to get only a single run.)
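For clarity on the determinism point, here's a minimal sketch of the seeded-PRNG idea; the seed value, function name, and key shape are all made up for illustration and aren't the repo's actual generator:

    package hamt_test

    import (
        "fmt"
        "math/rand"
    )

    // randomKeys is a hypothetical sketch: because the source is seeded with a
    // fixed value, the "random" keys come out identical on every run and for
    // every parameter set that asks for the same count.
    func randomKeys(n int) []string {
        rng := rand.New(rand.NewSource(42)) // fixed seed; the value is illustrative
        keys := make([]string, n)
        for i := range keys {
            keys[i] = fmt.Sprintf("key-%016x", rng.Uint64())
        }
        return keys
    }

The chart runs used a fixed iteration count rather than a time budget, i.e. something like:

    go test -run='^$' -bench=. -benchtime=30x ./...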
I'm very curious about the blocks and bytes per entry being apparently completely unaffected by scale across multiple orders of magnitude.
Certainly we want those things to scale "well", for some definition of "well", but...
complete noise is... not what I expected. Wondering if there's a measurement bug here.
These are some interesting observations, but more numbers are needed;
I'm not certain these are the right ones, and certainly they aren't the only interesting ones.
In particular, I'm not sure there's enough probing of the layers of caching and flushing and how they affect observations;
there may be a lot more to do in order to produce useful insights there.
// (The 'benchdraw' command alluded to here is https://github.com/cep21/benchdraw .)
// Histograms of blocksizes can be logged from some of the following functions, but are commented out.
// The main thing to check for in those is whether there are any exceptionally small blocks being produced:
// less than 64 bytes is a bit concerning because we assume there's some overhead per block in most operations (even if the exact amount may vary situationally).
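To make the histogram idea concrete, a logging helper along these lines would do; this is a sketch under assumptions (the bucket scheme and function name are mine), not the commented-out code itself:

    package hamt_test

    import "testing"

    // logBlocksizeHistogram is a hypothetical sketch: bucket block sizes by
    // power-of-two upper bound and flag anything under 64 bytes, which is the
    // "exceptionally small blocks" concern noted above.
    func logBlocksizeHistogram(b *testing.B, sizes []int) {
        hist := map[int]int{} // bucket (power-of-two upper bound) -> count
        small := 0
        for _, sz := range sizes {
            bucket := 1
            for bucket < sz {
                bucket *= 2
            }
            hist[bucket]++
            if sz < 64 {
                small++
            }
        }
        b.Logf("blocksize histogram (by power-of-two bucket): %v; blocks under 64 bytes: %d", hist, small)
    }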