Commit c4077b3

Add harnesses to profile with stackprof and vernier (#316)
* Add harnesses to profile with stackprof and vernier. Auto-install the gems outside of any gems the benchmark might require. Save the file to the data dir and show a report when finished.
* Document additional harnesses in README
* Comment profiling harness code
1 parent f9147a3 commit c4077b3

4 files changed (+192, -1 lines)
README.md

Lines changed: 4 additions & 1 deletion
@@ -151,10 +151,13 @@ This file will then be passed to the underlying Ruby interpreter with
 You can find several test harnesses in this repository:

 * harness - the normal default harness, with duration controlled by warmup iterations and time/count limits
-* harness-perf - a simplified harness that runs for exactly the hinted number of iterations
 * harness-bips - a harness that measures iterations/second until stable
 * harness-continuous - a harness that adjusts the batch sizes of iterations to run in stable iteration size batches
+* harness-once - a simplified harness that simply runs once
+* harness-perf - a simplified harness that runs for exactly the hinted number of iterations
+* harness-stackprof - a harness to profile the benchmark with stackprof
 * harness-stats - count method calls and loop iterations
+* harness-vernier - a harness to profile the benchmark with vernier
 * harness-warmup - a harness which runs as long as needed to find warmed up (peak) performance

 To use it, run a benchmark script directly, specifying a harness directory with `-I`:
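
For example, the new profiling harnesses are selected the same way as the existing ones; an invocation might look like this (benchmark path elided, as in the harness usage comments below):

    STACKPROF_OPTS='mode:cpu,interval:10' ruby -v -I harness-stackprof benchmarks/.../benchmark.rb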

harness-stackprof/harness.rb

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
# frozen_string_literal: true

# Profile the benchmark (ignoring initialization code) with stackprof.
# Customize stackprof options with an env var of STACKPROF_OPTS='key:value,...'.
# Usage:
#   STACKPROF_OPTS='mode:object' MIN_BENCH_TIME=0 MIN_BENCH_ITRS=1 ruby -v -I harness-stackprof benchmarks/.../benchmark.rb
#   STACKPROF_OPTS='mode:cpu,interval:10' MIN_BENCH_TIME=1 MIN_BENCH_ITRS=10 ruby -v -I harness-stackprof benchmarks/.../benchmark.rb

require_relative "../harness/harness-common"
require_relative "../harness/harness-extra"

ensure_global_gem("stackprof")

# Default to collecting more information so that more post-processing options are
# available (like generating a flamegraph).
DEFAULTS = {
  aggregate: true,
  raw: true,
}.freeze

# Convert strings of "true" or "false" to their actual boolean values (or raise).
BOOLS = {"true" => true, "false" => false}
def bool!(val)
  case val
  when TrueClass, FalseClass
    # Respect values that are already booleans so that we can specify defaults intuitively.
    val
  else
    BOOLS.fetch(val) { raise ArgumentError, "must be 'true' or 'false'" }
  end
end

# Parse the string of "key:value,..." into a hash that we can pass to stackprof.
def parse_opts_string(str)
  return {} unless str

  str.split(/,/).map { |x| x.strip.split(/[=:]/, 2) }.to_h.transform_keys(&:to_sym)
end

# Get options for stackprof from env var and convert strings to the types stackprof expects.
def stackprof_opts
  opts = DEFAULTS.merge(parse_opts_string(ENV['STACKPROF_OPTS']))

  bool = method(:bool!)

  # Use {key: conversion_proc_or_sym} to convert present options to their necessary types.
  {
    aggregate: bool,
    raw: bool,
    mode: :to_sym,
    interval: :to_i,
  }.each do |key, method|
    next unless opts.key?(key)

    method = proc(&method) if method.is_a?(Symbol)
    opts[key] = method.call(opts[key])
  rescue => error
    raise ArgumentError, "Option '#{key}' failed to convert: #{error}"
  end

  opts
end

def run_benchmark(n, &block)
  require "stackprof"

  opts = stackprof_opts
  prefix = "stackprof"
  prefix = "#{prefix}-#{opts[:mode]}" if opts[:mode]

  out = output_file_path(prefix: prefix, ext: "dump")
  StackProf.run(out: out, **opts) do
    run_enough_to_profile(n, &block)
  end

  # Show the basic textual report.
  gem_exe("stackprof", "--text", out)
  # Print the file path at the end to make it easy to copy the file name
  # and use it for further analysis.
  puts "Stackprof dump file:\n#{out}"

  # Dummy results to satisfy ./run_benchmarks.rb
  return_results([0], [1.0]) if ENV['RESULT_JSON_PATH']
end
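
As a rough sketch of the option handling above (the STACKPROF_OPTS value here is an assumption, not output from a real run):

    # Assumed input: STACKPROF_OPTS='mode:cpu,interval:10,raw:false'
    parse_opts_string(ENV['STACKPROF_OPTS'])
    # => {mode: "cpu", interval: "10", raw: "false"}   (symbol keys, string values)

    stackprof_opts
    # => {aggregate: true, raw: false, mode: :cpu, interval: 10}
    # DEFAULTS are merged in first, then bool!/to_sym/to_i convert each present value.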

harness-vernier/harness.rb

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
# frozen_string_literal: true

# Profile the benchmark (ignoring initialization code) using vernier and display the profile.
# Set NO_VIEWER=1 to disable automatically opening the profile in a browser.
# Usage:
#   MIN_BENCH_TIME=1 MIN_BENCH_ITRS=1 ruby -v -I harness-vernier benchmarks/...
#   NO_VIEWER=1 MIN_BENCH_TIME=1 MIN_BENCH_ITRS=1 ruby -v -I harness-vernier benchmarks/...

require_relative "../harness/harness-common"
require_relative "../harness/harness-extra"

ensure_global_gem("vernier")
ensure_global_gem_exe("profile-viewer")

def run_benchmark(n, &block)
  require "vernier"

  out = output_file_path(ext: "json")
  Vernier.profile(out: out) do
    run_enough_to_profile(n, &block)
  end

  puts "Vernier profile:\n#{out}"
  gem_exe("profile-viewer", out) unless ENV['NO_VIEWER'] == '1'

  # Dummy results to satisfy ./run_benchmarks.rb
  return_results([0], [1.0])
end
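
If the viewer is suppressed, the saved JSON profile can still be opened later; a sketch, assuming the profile-viewer executable installed by the harness is on your PATH and with the output file name as a placeholder:

    NO_VIEWER=1 MIN_BENCH_TIME=1 MIN_BENCH_ITRS=1 ruby -v -I harness-vernier benchmarks/.../benchmark.rb
    profile-viewer data/vernier-<timestamp>-<ruby-info>-<benchmark>.json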

harness/harness-extra.rb

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
# frozen_string_literal: true

# Ensure a gem is installed globally (and add it to the load path)
# in a way that doesn't interfere with the benchmark's bundler setup.
def ensure_global_gem(name)
  found = Gem.find_latest_files(name).first
  unless found
    Gem.install(name)
    found = Gem.find_latest_files(name).first
  end
  warn "Adding to load path: #{File.dirname(found)}"
  $LOAD_PATH << File.dirname(found)
end

# Ensure an executable provided by the gem is present
# (useful for profile-viewer which has no lib, only the exe).
def ensure_global_gem_exe(name, exe = name)
  Gem.bin_path(name, exe)
rescue Gem::GemNotFoundException
  Gem.install(name)
end

# Call a gem exe, removing any bundler env vars that might confuse it.
def gem_exe(*args)
  system({'RUBYOPT' => '', 'BUNDLER_SETUP' => nil}, *args)
end

# Get benchmark base name from the file path.
def benchmark_name
  $0.match(%r{([^/]+?)(?:(?:/benchmark)?\.rb)?$})[1]
end

# Get name of harness (stackprof, vernier, etc) from the file path of the loaded harness.
def harness_name
  $LOADED_FEATURES.reverse_each do |feat|
    if m = feat.match(%r{/harness-([^/]+)/harness\.rb$})
      return m[1]
    end
  end
  raise "Unable to determine harness name"
end

# Share a single timestamp for everything from this execution.
TIMESTAMP = Time.now.strftime('%F-%H%M%S')

# Create a consistent file path in the data directory
# so that the data can be further analyzed.
def output_file_path(prefix: harness_name, suffix: benchmark_name, ruby_info: ruby_version_info, timestamp: TIMESTAMP, ext: "bin")
  File.expand_path("../data/#{prefix}-#{timestamp}-#{ruby_info}-#{suffix}.#{ext}", __dir__)
end

# Can we get the benchmark config name from somewhere?
def ruby_version_info
  "#{RUBY_ENGINE}-#{RUBY_ENGINE_VERSION}"
end

def get_time
  Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

MIN_BENCH_TIME = Integer(ENV.fetch('MIN_BENCH_TIME', 10))

# Ensure the benchmark runs enough times for profilers to get sufficient data when sampling.
# Use the "n" hint (provided by the benchmarks themselves) as a starting point
# but allow that to be overridden by MIN_BENCH_ITRS env var.
# Also use MIN_BENCH_TIME to loop until the benchmark has run for a sufficient duration.
def run_enough_to_profile(n, &block)
  start = get_time
  loop do
    # Allow MIN_BENCH_ITRS to override the argument.
    n = ENV.fetch('MIN_BENCH_ITRS', n).to_i
    n.times(&block)

    break if (get_time - start) >= MIN_BENCH_TIME
  end
end
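
To make the naming helpers above concrete, here is a sketch of the values they would produce for a hypothetical run of the vernier harness; the benchmark name, timestamp, and Ruby version below are made up for illustration:

    # Assuming $0 == "benchmarks/somebench/benchmark.rb", the vernier harness loaded, CRuby 3.3.0:
    benchmark_name      # => "somebench"
    harness_name        # => "vernier"
    ruby_version_info   # => "ruby-3.3.0"
    output_file_path(ext: "json")
    # => ".../data/vernier-2024-05-01-093000-ruby-3.3.0-somebench.json"

    # With MIN_BENCH_TIME=0 and MIN_BENCH_ITRS=1 (as in the stackprof usage comment),
    # run_enough_to_profile runs the benchmark block exactly once.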
