### Benchmarks module

> Borrowed from https://github.com/apache/kafka/blob/trunk/jmh-benchmarks

This module contains benchmarks written using [JMH](https://openjdk.java.net/projects/code-tools/jmh/) from OpenJDK.

### Running benchmarks

If you want to set specific JMH flags or only run certain benchmarks, passing arguments via
gradle tasks is cumbersome. The provided `jmh.sh` script simplifies this.

The default behavior is to run all benchmarks:

    ./benchmarks/jmh.sh

Pass a pattern or name after the command to select the benchmarks:

    ./benchmarks/jmh.sh TransformBench

Check which benchmarks match the provided pattern:

    ./benchmarks/jmh.sh -l TransformBench

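JMH treats the selector as a regular expression matched against fully qualified benchmark names. A rough sketch of that matching, using hypothetical benchmark names and `grep` as a stand-in:

```shell
# Hypothetical fully qualified benchmark names; the selector behaves like
# a regular expression matched against them (approximated here with grep).
printf '%s\n' \
  'org.example.benchmarks.TransformBench.measureTransform' \
  'org.example.benchmarks.SerdeBench.measureSerialize' \
  | grep 'TransformBench'
```

Anything the pattern matches is selected, so a broad pattern can pull in more benchmarks than intended; `-l` is a cheap way to check before running.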
Run a specific test and override the number of forks, iterations and warm-up iterations to `2`:

    ./benchmarks/jmh.sh -f 2 -i 2 -wi 2 TransformBench

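Each fork is a fresh JVM; warm-up iterations run first in each fork and are discarded, and only the measurement iterations count towards the score. A small sketch of the arithmetic for the flags above:

```shell
# With -f 2 -wi 2 -i 2: two forked JVMs, each doing 2 discarded warm-up
# iterations followed by 2 measured iterations.
forks=2; warmup=2; measured=2
echo "measured iterations total: $((forks * measured))"
echo "warm-up iterations total: $((forks * warmup))"
```

Low counts like these are useful for a quick sanity run; the JMH defaults are higher precisely to smooth out run-to-run JVM variance.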
Run a specific test with the async and GC profilers on Linux and flame graph output:

    ./benchmarks/jmh.sh -prof gc -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph TransformBench

The following sections cover the async and GC profilers in more detail.

### Using JMH with async profiler

It's good practice to check profiler output for micro-benchmarks in order to verify that they represent the expected
application behavior and measure what you expect to measure. Some example pitfalls include the use of expensive mocks
or accidental inclusion of test setup code in the benchmarked code. JMH includes
[async-profiler](https://github.com/jvm-profiling-tools/async-profiler) integration that makes this easy:

    ./benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so

Alternatively, if the async-profiler library directory is on `LD_LIBRARY_PATH` (e.g. `export LD_LIBRARY_PATH=/opt/async-profiler-2.9-linux-x64/build/`), the `libPath` argument can be omitted:

    ./benchmarks/jmh.sh -prof async

With flame graph output (the semicolon is escaped to ensure it is not treated as a command separator):

    ./benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=flamegraph

Simultaneous cpu, allocation and lock profiling with async profiler 2.0 and JFR output (again with the
semicolons escaped):

    ./benchmarks/jmh.sh -prof async:libPath=/path/to/libasyncProfiler.so\;output=jfr\;alloc\;lock TransformBench

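The escaping matters because an unescaped `;` ends the shell command and the rest of the profiler spec would be run as a separate command. A minimal illustration, independent of JMH and using a hypothetical library path:

```shell
# Unescaped, the shell would split this at the ';' and try to run
# "output=flamegraph" as a second command; the backslash (or single
# quotes around the whole spec) keeps it as one argument.
spec=async:libPath=/tmp/libasyncProfiler.so\;output=flamegraph
echo "$spec"
```

Single-quoting the whole `-prof` argument is an equivalent alternative to escaping each semicolon.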
A number of arguments can be passed to configure async profiler; run the following for a description:

    ./benchmarks/jmh.sh -prof async:help

### Using JMH GC profiler

It's good practice to run your benchmark with `-prof gc` to measure its allocation rate:

    ./benchmarks/jmh.sh -prof gc

Of particular importance are the `norm` alloc rates, which measure allocations per operation rather than allocations
per second, since the latter can increase when you make your code faster.

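The normalized rate is essentially bytes allocated divided by operations completed, so it stays comparable across runs with different throughput. A sketch of that arithmetic with hypothetical figures:

```shell
# Hypothetical figures: if throughput doubles but B/op stays flat, the
# speedup did not come at the cost of extra allocation per operation.
ops_per_sec=1000000
bytes_per_sec=64000000
awk -v b="$bytes_per_sec" -v o="$ops_per_sec" 'BEGIN { printf "%.1f B/op\n", b / o }'
```

This is why comparing the per-second rate between a slow and a fast build of the same code can be misleading, while the per-operation rate is directly comparable.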
### Running JMH outside gradle

The JMH benchmarks can be run outside gradle as you would with any executable jar file:

    java -jar ./benchmarks/build/libs/kafka-benchmarks-*.jar -f2 TransformBench

### Gradle Tasks

If no benchmark mode is specified, the default mode, throughput, is used. It is assumed that users run
the gradle tasks with `./gradlew` from the root of the Kafka project.

* `benchmarks:shadowJar` - creates the uber jar required to run the benchmarks.

* `benchmarks:jmh` - runs the `clean` and `shadowJar` tasks followed by all the benchmarks.

### JMH Options

Some common JMH options are:

```text
  -e <regexp+>     Benchmarks to exclude from the run.

  -f <int>         How many times to fork a single benchmark. Use 0 to
                   disable forking altogether. Warning: disabling
                   forking may have detrimental impact on benchmark
                   and infrastructure reliability, you might want
                   to use different warmup mode instead.

  -i <int>         Number of measurement iterations to do. Measurement
                   iterations are counted towards the benchmark score.
                   (default: 1 for SingleShotTime, and 5 for all other
                   modes)

  -l               List the benchmarks that match a filter, and exit.

  -lprof           List profilers, and exit.

  -o <filename>    Redirect human-readable output to a given file.

  -prof <profiler> Use profilers to collect additional benchmark data.
                   Some profilers are not available on all JVMs and/or
                   all OSes. Please see the list of available profilers
                   with -lprof.

  -v <mode>        Verbosity mode. Available modes are: [SILENT, NORMAL,
                   EXTRA]

  -wi <int>        Number of warmup iterations to do. Warmup iterations
                   are not counted towards the benchmark score. (default:
                   0 for SingleShotTime, and 5 for all other modes)
```

To view all options, run jmh with the `-h` flag.