This repository was archived by the owner on Sep 30, 2023. It is now read-only.

Commit 76d38ba

node and browser benchmark runner (#32)
* fix: correct npm package name
* expose gc as a function
* install isNode
* add src/benchmarker.js
* add needed bind for timeout callback
* rework benchmarker into http client/server with cli
* add commander to dependencies
* fix program options usage
* fix cli ls exec
* make child process exec async
* use node-fetch instead of whatwg-fetch polyfill
* change benchmarker http-client addMetric api
* fix benchmarker server create and _handleResults
* logging over websockets
* remove unneeded timeout
* write result json files to results folder
* remove whatwg-fetch polyfill
* benchmarker better logging
* results over websockets
* error logging and use stdout instead of console.log
* remove http from benchmarker server
* prep for browser support
* install webpack middleware and express
* install html-webpack-plugin
* install val-loader
* downgrade webpack to v4
* downgrade html-webpack-plugin to support webpack4
* downgrade val-loader
* add newlines for cancel/complete status
* manual browser support (no puppeteer yet)
* automate browser benchmarks with puppeteer
* removed uneeded promisify
* use fork over exec in benchmarker cli
* move webpack to its own thread
* ensure split works for large ls return
* cleanup and minor changes
* use open ports, remove port argument
* install ws websocket server
* create results dir path on results
* move server variable into runBenchmarks
* fix typo
* add bundling... message
* use console.log for log messages
* fix window.performance.memory accuracy
* use already opened browser page
* add puppeteer to deps
* move default metrics to separate file
* get ready to build fixtures
* edit package deps and npm audit fix
* ignore fixtures dir
* refactor benchmarker; all working but reports
* remove old benchmark runner files
* edit deps; commit package and package-lock
* base working
* add log-load benchmark
* change benchmark setting
* use execBenchmarkPath variable
* stop using fixtures
* more report outputs
* add ordered benchmarks
* add process-results and report util file
* get percent change for time metric
* remove getLabel from process-results
* make dir for output path
* no fixtures or hard coded port; small cleanup
* move Report component to parent dir
* change option order
* optionally track mem/cpu
* benchmarks path param for cli
* add catch to webpackServer call
* add benchmarking... console message
* fix webpack-server
* remove local benchmarks
* no written output by default
* remove --no-output option
* fix avg processed metric
* change benchmarker server variable name
* reuse webpack port for indexedDb
* change baselines option flag
* reword opt description; change -b default
* add basic usage to README.md
* remove fixtures from gitignore
* remove tests for now
* remove runPlace leftover from run.js
* browser name property for execBenchmarks
* static webpack port
* make reporter/process-results.js more readable
* fix: webpack-entry use run func again
* change tempdir name for benchmark runner
* remove outdated comment
* small style edit cli.js
* check for baseline path exist
* add baseline comparison example
* add basic end2end and cli option tests
* add docs on creating benchmarks
* add 30 sec timeout to tests
* support benchmark hooks
* console report spacing and negative array length

Co-authored-by: tabcat <>
1 parent a7bb89f commit 76d38ba

31 files changed: +18927 −3232 lines

README.md

Lines changed: 23 additions & 6 deletions
@@ -6,17 +6,34 @@
 
 ## Install
 
-`npm i benchmark-runner`
+`npm i orbit-db-benchmark-runner`
 
-## Usage
+## CLI Usage
 
-TBD
+Check [cli.js](./src/cli.js) or use `npx benchmarker -h` for help
 
-## Testing
+*If you want to run benchmarks in a folder have their file name end in `.benchmark.js`.*
 
-Mocha is used as the testing framework, SinonJS for stubs and mocks and ChaiJS for assertions. To run tests:
+```
+Options:
+  -V, --version             output the version number
+  -b, --benchmarks <path>   benchmark folder or file (default: "./benchmarks")
+  -o, --output <file path>  report output path (.html or .json)
+  -i, --baselines <path>    baselines to use for comparison (.json output)
+  --no-node                 skip nodejs benchmarks
+  --no-browser              skip browser benchmarks
+```
 
-`npm run test`
+##### Running Comparisons
+
+1. Create the baseline report output to use for comparison: `npx benchmarker -o report.json`
+2. Use the output baseline report with the baseline option: `npx benchmarker -i report.json`
+
+***benchmarks ran for comparison are best ran on their own machine or a machine with few other things happening in the background***
+
+## Writing Benchmarks
+
+Benchmark files must export an object with an asynchronous method `benchmark`. The method takes 1 parameter `benchmarker` which is used to control the recording and give information about the benchmark. Please see [test.benchmark.js]('./test/fixtures/benchmarks/test.benchmark.js') for an example.
 
 ## Contributing
 

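The "Writing Benchmarks" section added above describes the required shape only in prose. Below is a minimal sketch of such a file, assuming the `benchmarker` argument is an instance of the Benchmarker client added in src/benchmarker/client.js further down; the file name and workload are purely illustrative.

```js
// example.benchmark.js — hypothetical; only the exported shape is prescribed above
'use strict'

module.exports = {
  benchmark: async (benchmarker) => {
    benchmarker.setBenchmarkName('array-push') // otherwise the name defaults to benchmark-<id>
    benchmarker.trackMemory()                  // optionally record heap used/total
    benchmarker.log('starting array-push run')

    benchmarker.startRecording()
    const items = []
    for (let i = 0; i < 1e6; i++) items.push({ i })
    benchmarker.stopRecording()
  }
}
```
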
package-lock.json

Lines changed: 17871 additions & 2652 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 30 additions & 10 deletions
@@ -4,16 +4,43 @@
   "description": "OrbitDB Benchmark Runner",
   "main": "./src/index.js",
   "bin": {
-    "benchmark-runner": "./src/cli.js"
+    "benchmarker": "./src/cli.js"
   },
   "scripts": {
     "test": "nyc mocha"
   },
   "author": "mistakia",
   "license": "MIT",
   "dependencies": {
-    "expose-gc": "^1.0.0",
-    "yargs": "^15.4.1"
+    "@babel/core": "^7.13.10",
+    "@babel/preset-env": "^7.13.12",
+    "@babel/preset-react": "^7.12.13",
+    "@nivo/core": "^0.67.0",
+    "@nivo/line": "^0.67.0",
+    "babel-loader": "^8.2.2",
+    "bootstrap": "^4.6.0",
+    "commander": "^7.1.0",
+    "css-loader": "^5.1.3",
+    "express": "^4.17.1",
+    "html-webpack-plugin": "^4.5.2",
+    "inline-assets-html-plugin": "^1.0.0",
+    "is-node": "^1.0.2",
+    "puppeteer": "^8.0.0",
+    "react": "^17.0.2",
+    "react-bootstrap": "^1.5.2",
+    "react-dom": "^17.0.2",
+    "style-loader": "^2.0.0",
+    "val-loader": "^2.1.2",
+    "webpack": "^4.46.0",
+    "webpack-dev-middleware": "^4.1.0",
+    "ws": "^7.4.4"
+  },
+  "devDependencies": {
+    "ipfs": "^0.54.4",
+    "mocha": "^8.3.2",
+    "nyc": "^15.1.0",
+    "orbit-db": "^0.26.1",
+    "standard": "^14.3.4"
   },
   "localMaintainers": [
     "hajamark <[email protected]>",
@@ -32,13 +59,6 @@
   "homepage": "https://github.com/orbitdb/benchmark-runner#readme",
   "bugs": "https://github.com/orbitdb/benchmark-runner/issues",
   "repository": "github:orbitdb/benchmark-runner",
-  "devDependencies": {
-    "chai": "^4.2.0",
-    "mocha": "^8.1.3",
-    "nyc": "^15.1.0",
-    "sinon": "^9.0.3",
-    "standard": "^14.3.4"
-  },
   "standard": {
     "env": "mocha"
   }

src/benchmarker/client.js

Lines changed: 120 additions & 0 deletions
@@ -0,0 +1,120 @@
+'use strict'
+const isNode = require('is-node')
+const nodeDir = (dir) => require('path').join(dir, 'node')
+const getWebSocket = () => isNode
+  ? require('ws')
+  : window.WebSocket
+const { makeId, withInfo, creators } = require('./ws-action')
+const {
+  timeMetric,
+  cpuUsageMetric,
+  memoryUsedMetric,
+  memoryTotalMetric
+} = require('./metrics')
+
+class Benchmarker {
+  constructor (ws, dir) {
+    this._ws = ws
+    this.dir = isNode ? nodeDir(dir) : dir
+    this._timeout = null
+
+    this.isNode = isNode
+    this.id = makeId()
+    this.info = {
+      id: this.id,
+      name: `benchmark-${this.id}`,
+      env: isNode ? 'node' : 'browser',
+      metrics: []
+    }
+    this._interval = 1000 // record metrics every this many ms
+
+    this.metrics = []
+    this.addMetric(timeMetric)
+  }
+
+  static async create (host, dir) {
+    const ws = await new Promise(resolve => {
+      const ws = new (getWebSocket())(`ws://${host}`)
+      ws.onopen = () => resolve(ws)
+    })
+    return new Benchmarker(ws, dir)
+  }
+
+  async close () {
+    if (this._ws.readyState !== 3) {
+      await new Promise(resolve => {
+        this._ws.onclose = () => resolve()
+        this._ws.close()
+      })
+    }
+  }
+
+  trackMemory () {
+    this.addMetric(memoryUsedMetric)
+    this.addMetric(memoryTotalMetric)
+  }
+
+  trackCpu () {
+    if (isNode) this.addMetric(cpuUsageMetric)
+  }
+
+  addMetric ({ name, get }) {
+    if (this.info.metrics.includes(name)) {
+      throw new Error('a metric with that name already exists')
+    }
+    if (this._timeout) {
+      throw new Error('metrics have already started being recorded')
+    }
+    this.metrics.push({ name, get })
+    this.info.metrics = this.metrics.map(m => m.name)
+  }
+
+  setInterval (interval) {
+    if (typeof interval !== 'number') {
+      throw new Error('interval must be a number')
+    }
+    if (this._timeout) {
+      throw new Error('metrics have already started being recorded')
+    }
+    this._interval = interval
+  }
+
+  setBenchmarkName (name) {
+    this.info.name = name.toString()
+  }
+
+  setHookInfo (info) {
+    this.info.hook = info
+  }
+
+  log (msg) {
+    this._sendAction(creators.LOG(msg))
+  }
+
+  _sendAction (action) {
+    this._ws.send(JSON.stringify(withInfo(this.info)(action)))
+  }
+
+  _recordMetrics () {
+    this._sendAction(creators.SEGMENT(this.metrics.map(({ get }) => get())))
+  }
+
+  startRecording () {
+    if (!this._timeout) {
+      const interval = this._interval
+      const repeater = () => {
+        this._recordMetrics()
+        this._timeout = setTimeout(repeater.bind(this), interval)
+      }
+      repeater()
+    }
+  }
+
+  stopRecording () {
+    clearTimeout(this._timeout)
+    this._timeout = null
+    this._recordMetrics()
+  }
+}
+
+module.exports = Benchmarker

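Beyond being handed to a benchmark file by the CLI, the client above can be driven directly. A hypothetical sketch, assuming a BenchmarkerServer (added below) is already listening on localhost:8765 and that the host string and directory argument take the form the constructor suggests; the require path is relative to the repository root:

```js
'use strict'
const Benchmarker = require('./src/benchmarker/client') // assumed path

async function main () {
  // connect to an already-running BenchmarkerServer (port is an assumption)
  const benchmarker = await Benchmarker.create('localhost:8765', '/tmp/bench')

  // metrics and the sampling interval must be configured before recording starts,
  // otherwise addMetric/setInterval throw
  benchmarker.setBenchmarkName('manual-example')
  benchmarker.setInterval(500) // sample every 500 ms instead of the 1000 ms default
  benchmarker.trackMemory()    // adds 'heap used' and 'heap total'
  benchmarker.trackCpu()       // adds 'cpu usage' (node only)

  benchmarker.startRecording()
  await new Promise(resolve => setTimeout(resolve, 2000)) // the code under test runs here
  benchmarker.stopRecording()

  benchmarker.log('done')
  await benchmarker.close()
}

main().catch(console.error)
```
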
src/benchmarker/metrics/index.js

Lines changed: 71 additions & 0 deletions
@@ -0,0 +1,71 @@
+'use strict'
+const isNode = require('is-node')
+const useMetricState = (state, get) => () => {
+  const { newState, next } = get(state)
+  state = newState
+  return next
+}
+
+const timeMetric = {
+  name: 'time',
+  get: useMetricState(0, (state) => {
+    const now = Date.now()
+    return {
+      newState: state || now,
+      next: now - (state || now) // on first metric sample: now - now, aka 0
+    }
+  })
+}
+
+const ns2ms = (ms) => ms / 1000
+const cpuUsageMetric = {
+  name: 'cpu usage',
+  get: useMetricState(undefined, (state) => {
+    const time = Date.now()
+    const { user, system } = process.cpuUsage()
+    const total = ns2ms(user) + ns2ms(system)
+    return {
+      newState: { total, time },
+      next: state
+        // cpu usage to percent
+        ? Math.round(100 * ((total - state.total) / (time - state.time)))
+        : 0
+    }
+  })
+}
+
+const memorySample = () => {
+  const sample = isNode
+    ? process.memoryUsage()
+    : window.performance.memory
+  const memory = {
+    total: null,
+    used: null
+  }
+  // denominated in bytes
+  if (isNode) {
+    memory.total = sample.heapTotal
+    memory.used = sample.heapUsed
+  } else {
+    memory.total = sample.totalJSHeapSize
+    memory.used = sample.usedJSHeapSize
+  }
+  return memory
+}
+const toMegabytes = (bytes) => bytes / 1000000
+const memoryUsedMetric = {
+  name: 'heap used',
+  get: () => toMegabytes(memorySample().used)
+}
+const memoryTotalMetric = {
+  name: 'heap total',
+  get: () => toMegabytes(memorySample().total)
+}
+
+module.exports = {
+  useMetricState,
+  timeMetric,
+  cpuUsageMetric,
+  memoryUsedMetric,
+  memoryTotalMetric
+}

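Since `useMetricState` is exported alongside the default metrics, custom metrics can be written as stateful getters and registered with `benchmarker.addMetric`. A hypothetical example (the metric itself and the require path are illustrative, not part of the package):

```js
'use strict'
const { useMetricState } = require('./src/benchmarker/metrics') // assumed path

// Reports the wall-clock gap between consecutive samples, in ms.
const sampleGapMetric = {
  name: 'sample gap',
  get: useMetricState(undefined, (state) => {
    const now = Date.now()
    return {
      newState: now,
      next: state ? now - state : 0 // 0 on the first sample
    }
  })
}

// inside a benchmark, before startRecording():
//   benchmarker.addMetric(sampleGapMetric)
module.exports = sampleGapMetric
```
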
src/benchmarker/server.js

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+'use strict'
+const WebSocket = require('ws')
+const { parse, types } = require('./ws-action')
+const logMessage = (id, msg) =>
+  `benchmark id:${id}
+${msg}
+`
+
+class BenchmarkerServer {
+  constructor ({ port } = {}) {
+    this._wss = new WebSocket.Server({ port: port || 0 })
+    this._wss.on('connection', this._handleWsConnection.bind(this))
+    this.address = this._wss.address.bind(this._wss)
+    this.results = {}
+  }
+
+  static create (opts) { return new BenchmarkerServer(opts) }
+
+  async _handleWsConnection (ws) {
+    ws.on('message', m => {
+      const { info, type, msg } = parse(m)
+      switch (type) {
+        case types.LOG:
+          console.log(logMessage(info.id, msg))
+          break
+        case types.SEGMENT: {
+          const { name, env } = info
+          if (!this.results[name]) this.results[name] = {}
+          if (!this.results[name][env]) this.results[name][env] = info
+          if (!this.results[name][env].recorded) this.results[name][env].recorded = []
+          this.results[name][env].recorded.push(msg)
+          break
+        }
+      }
+    })
+  }
+}
+
+module.exports = BenchmarkerServer

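The server simply collects whatever connected clients send. A hypothetical sketch of standing one up on a random open port and reading results back out later (the timeout is only there to give clients time to report; the require path is assumed):

```js
'use strict'
const BenchmarkerServer = require('./src/benchmarker/server') // assumed path

const server = BenchmarkerServer.create() // port 0 lets the OS pick an open port
const { port } = server.address()         // ws.Server#address() exposes the chosen port
console.log(`benchmarker server listening on ws://localhost:${port}`)

// ... clients created with Benchmarker.create(`localhost:${port}`, dir) report here ...

// results accumulate in memory, keyed by benchmark name and environment:
//   server.results[name][env].recorded -> array of metric segments
setTimeout(() => {
  console.log(JSON.stringify(server.results, null, 2))
}, 10000)
```
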
src/benchmarker/ws-action.js

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+'use strict'
+
+const action = {}
+
+action.types = {
+  LOG: 'LOG',
+  SEGMENT: 'SEGMENT'
+}
+
+action.creators = {
+  [action.types.LOG]: (msg) =>
+    ({ type: action.types.LOG, msg }),
+  [action.types.SEGMENT]: (msg) =>
+    ({ type: action.types.SEGMENT, msg })
+}
+
+action.makeId = () => Date.now()
+action.withInfo = (info) => (action) => ({ info, ...action })
+
+action.parse = (action) => JSON.parse(action)
+
+module.exports = action

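These helpers define the entire wire protocol between client and server. A small sketch of what a message looks like on the socket, using only the functions above (values and require path illustrative):

```js
'use strict'
const { creators, withInfo, parse, makeId } = require('./src/benchmarker/ws-action') // assumed path

const info = { id: makeId(), name: 'example', env: 'node', metrics: ['time'] }

// what Benchmarker#_sendAction writes for one metric segment:
const wire = JSON.stringify(withInfo(info)(creators.SEGMENT([42])))
// e.g. {"info":{"id":...,"name":"example","env":"node","metrics":["time"]},"type":"SEGMENT","msg":[42]}

// what BenchmarkerServer reads back off the socket:
const { type, msg } = parse(wire)
console.log(type, msg) // SEGMENT [ 42 ]
```
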
0 commit comments
