# Writing Benchmarks

If you're familiar with the Java Microbenchmark Harness (JMH) toolkit, you'll find that the `kotlinx-benchmark`
library shares a similar approach to crafting benchmarks. This compatibility allows you to seamlessly run your
JMH benchmarks written in Kotlin on various platforms with minimal, if any, modifications.

Like JMH, kotlinx-benchmark is annotation-based, meaning you configure benchmark execution behavior using annotations.
The library then extracts the metadata provided through annotations to generate code that benchmarks the specified code
in the desired manner.

To get started, let's examine a simple example of a multiplatform benchmark:

```kotlin
import kotlinx.benchmark.*

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
@Warmup(iterations = 20, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@Measurement(iterations = 20, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@State(Scope.Benchmark)
class ExampleBenchmark {

    // The benchmark is run once for each listed value of `size`.
    @Param("4", "10")
    var size: Int = 0

    private val list = ArrayList<Int>()

    // Runs before measurement; its execution time is not included in the results.
    @Setup
    fun prepare() {
        for (i in 0..<size) {
            list.add(i)
        }
    }

    // Runs after measurement; its execution time is not included in the results.
    @TearDown
    fun cleanup() {
        list.clear()
    }

    // The operation whose performance is measured.
    @Benchmark
    fun benchmarkMethod(): Int {
        return list.sum()
    }
}
```

**Example Description**:
This example tests the speed of summing numbers in an `ArrayList`. We evaluate this operation with lists
of 4 and 10 numbers to understand the method's performance with different list sizes.

## Explaining the Annotations

The following annotations are available to define and fine-tune your benchmarks.

### @State

The `@State` annotation is used to mark benchmark classes.
In Kotlin/JVM, however, benchmark classes are not required to be annotated with `@State`.

In Kotlin/JVM, the annotation specifies the extent to which the state object is shared among the worker threads, e.g., `@State(Scope.Group)`.
Refer to [JMH documentation of Scope](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html)
for details about available scopes. Multi-threaded execution of a benchmark method is not supported in other Kotlin targets,
so only `Scope.Benchmark` is available there.

Refer to [JMH documentation of @State](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/State.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In our snippet, the `ExampleBenchmark` class is marked with `@State(Scope.Benchmark)`,
indicating that it is a benchmark class and that a single state instance is shared across all invocations of its benchmark methods.
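
For instance, in a JVM-only source set you can give each worker thread its own copy of the state. The following is a minimal sketch, assuming a Kotlin/JVM target where `Scope` exposes JMH's full set of scopes (the class and its workload are illustrative):

```kotlin
import kotlinx.benchmark.*

// Kotlin/JVM only: with Scope.Thread each worker thread gets its own
// instance of this class, so `counter` is never shared between threads.
@State(Scope.Thread)
class CounterBenchmark {
    private var counter = 0

    @Benchmark
    fun increment(): Int = ++counter
}
```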

### @Setup

The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test.
It serves as a preparatory step where you set up the environment for the benchmark.

In Kotlin/JVM, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
for details about available levels. In other targets, it always operates at the `Trial` level, meaning the setup method is
executed once before the entire set of benchmark method iterations.

The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark
results - the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place
for initialization tasks that should not impact the timing results of your benchmark.

Refer to [JMH documentation of @Setup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Setup.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In the provided example, the `@Setup` annotation is used to populate an `ArrayList` with integers from `0` up to a specified `size`.
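
For example, on Kotlin/JVM a setup method can rebuild the input before every measurement iteration rather than once per run. A minimal sketch, assuming a JVM-only source set where JMH's `Level` is imported from `org.openjdk.jmh.annotations` (the shuffling workload is illustrative):

```kotlin
import kotlinx.benchmark.*
import org.openjdk.jmh.annotations.Level

@State(Scope.Benchmark)
class ShuffledSumBenchmark {
    private val list = (0 until 1_000).toMutableList()

    // Kotlin/JVM only: runs before every measurement iteration,
    // so each iteration sums the elements in a fresh order.
    @Setup(Level.Iteration)
    fun shuffle() {
        list.shuffle()
    }

    @Benchmark
    fun sum(): Int = list.sum()
}
```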

### @TearDown

The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s).
This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method.

In Kotlin/JVM, you can specify when the teardown method should be executed, e.g., `@TearDown(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
for details about available levels. In other targets, it always operates at the `Trial` level, meaning the teardown method
is executed once after the entire set of benchmark method iterations.

The `@TearDown` annotation helps you avoid performance bias and ensures the proper maintenance of resources and the
preparation of a clean environment for the next run. As with the `@Setup` method, the `@TearDown` method's
execution time is not included in the final benchmark results.

Refer to [JMH documentation of @TearDown](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/TearDown.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `cleanup` function annotated with `@TearDown` is used to clear our `ArrayList`.
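
Similarly, on Kotlin/JVM a teardown can run after every iteration, for example to keep state from leaking between iterations. A minimal sketch under the same assumptions as above (JVM-only source set, JMH's `Level`; the workload is illustrative):

```kotlin
import kotlinx.benchmark.*
import org.openjdk.jmh.annotations.Level

@State(Scope.Benchmark)
class AppendBenchmark {
    private val builder = StringBuilder()

    @Benchmark
    fun append(): Int = builder.append('x').length

    // Kotlin/JVM only: runs after every measurement iteration,
    // so the builder doesn't keep growing across iterations.
    @TearDown(Level.Iteration)
    fun reset() {
        builder.setLength(0)
    }
}
```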

### @Benchmark

The `@Benchmark` annotation is used to specify the methods whose performance you want to measure.
It marks the actual test you're running: the code you want to benchmark goes inside this method.
All other annotations control how the execution of `@Benchmark` methods is measured.

Benchmark methods may take either a single [Blackhole](#blackhole) parameter or no parameters at all.
It's important to note that in Kotlin/JVM, benchmark methods must always be `public`.
Refer to [JMH documentation of @Benchmark](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Benchmark.html)
for details about restrictions on benchmark methods in Kotlin/JVM.

In our example, the `benchmarkMethod` function is annotated with `@Benchmark`,
which means the toolkit will measure the performance of summing all the integers in the list.
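
As a sketch, the two supported signatures look like this (both assume a surrounding benchmark class with a populated `list`, as in the example above; the method names are illustrative):

```kotlin
// No parameters: return the result so the harness can consume it.
@Benchmark
fun noArgs(): Int = list.sum()

// A single Blackhole parameter is the only argument allowed;
// use it to consume intermediate values explicitly.
@Benchmark
fun withBlackhole(bh: Blackhole) {
    bh.consume(list.sum())
}
```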

### @BenchmarkMode

The `@BenchmarkMode` annotation sets the mode of operation for the benchmark.

Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum.
In Kotlin/JVM, the `Mode` enum has several options, including `SingleShotTime`.

Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
for details about available options. In other targets, only `Throughput` and `AverageTime` are available.
`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit
of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it
takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in it.
In Kotlin/JVM, it may also be applied to an individual `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @BenchmarkMode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/BenchmarkMode.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the
benchmark method can be executed per unit of time.
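
On Kotlin/JVM, for instance, different methods in the same class can be measured in different modes. A minimal sketch (the class name and workloads are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class MixedModeBenchmark {
    private val list = (0 until 1_000).toList()

    // Kotlin/JVM only: overrides the mode for this method alone.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    fun averageTimeOfSum(): Int = list.sum()

    // Falls back to the class-level or default mode (Throughput).
    @Benchmark
    fun throughputOfSum(): Int = list.sum()
}
```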

### @OutputTimeUnit

The `@OutputTimeUnit` annotation specifies the time unit in which your results will be presented.
This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds,
presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement.
Conversely, for operations with longer execution times, you might choose to display the output in seconds or even minutes.
Essentially, the `@OutputTimeUnit` annotation enhances the readability and interpretability of benchmark results.
If this annotation isn't specified, it defaults to using seconds as the time unit.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in it.
In Kotlin/JVM, it may also be applied to an individual `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @OutputTimeUnit](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/OutputTimeUnit.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@OutputTimeUnit` is set to milliseconds.
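
For example, a nanosecond-scale operation is easier to read when reported in nanoseconds. A sketch with an illustrative workload, assuming `BenchmarkTimeUnit.NANOSECONDS` is available and combined here with average-time mode:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(BenchmarkTimeUnit.NANOSECONDS)
class HashBenchmark {
    private val value = "kotlinx-benchmark"

    // A sub-microsecond operation: a result like "25 ns/op" reads more
    // naturally than a tiny fraction of a second (the number is illustrative).
    @Benchmark
    fun hash(): Int = value.hashCode()
}
```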

### @Warmup

The `@Warmup` annotation specifies a preliminary phase before the actual benchmarking takes place.
During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included
in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its
optimal performance state so that the results of the measurement iterations are more stable.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in it.
In Kotlin/JVM, it may also be applied to an individual `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @Warmup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Warmup.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@Warmup` annotation is used to allow 20 iterations, each lasting one second,
of executing the benchmark method before the actual measurement starts.

### @Measurement

The `@Measurement` annotation controls the properties of the actual benchmarking phase.
It sets how many iterations the benchmark method is run and how long each run should last.
The results from these runs are recorded and reported as the final benchmark results.

The annotation is applied to the enclosing class and affects all `@Benchmark` methods in it.
In Kotlin/JVM, it may also be applied to an individual `@Benchmark` method to affect that method only.
Refer to [JMH documentation of @Measurement](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Measurement.html)
for details about the effect of the annotation in Kotlin/JVM.

In our example, the `@Measurement` annotation specifies that the benchmark method will be run for 20 iterations,
each lasting one second, to produce the final performance measurement.
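
Together with `@Warmup`, this annotation determines the total running time of each benchmark. As an illustrative sketch, a shorter configuration for quick local feedback (the values trade accuracy for speed and are not a recommendation):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@Warmup(iterations = 5, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
class QuickBenchmark {
    // Rough time budget: 5 × 0.5 s of warmup + 10 × 0.5 s of measurement ≈ 7.5 s.
    @Benchmark
    fun benchmarkMethod(): Int = (0 until 100).sum()
}
```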

### @Param

The `@Param` annotation is used to pass different parameters to your benchmark method.
It allows you to run the same benchmark method with different input values, so you can see how these variations affect
performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your
benchmark test. The benchmark will run once for each provided value.

The property marked with this annotation must be mutable (`var`) and not `private`.
Additionally, only properties of primitive types or the `String` type can be annotated with `@Param`.
Refer to [JMH documentation of @Param](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Param.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `@Param` annotation is used with values `"4"` and `"10"`, meaning the `benchmarkMethod`
will be benchmarked twice - once with the `size` value set to `4` and then with `10`.
This approach helps in understanding how the input list's size affects the time taken to sum its integers.
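
Several parameterized properties can also be combined, in which case (following JMH's behavior) the benchmark runs once for every combination of their values. A sketch with a `String` parameter alongside an `Int` one (names and values are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class JoinBenchmark {
    @Param("10", "1000")
    var size: Int = 0

    @Param(",", ";")
    var separator: String = ""

    private var list = listOf<Int>()

    @Setup
    fun prepare() {
        list = (0 until size).toList()
    }

    // Runs four times: each size paired with each separator.
    @Benchmark
    fun join(): String = list.joinToString(separator)
}
```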

### Other JMH annotations

In Kotlin/JVM, you can use annotations provided by JMH to further tune your benchmarks' execution behavior.
Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/package-summary.html)
for the available annotations.

## Blackhole

Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results.
In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code
elimination by the compiler or the runtime virtual machine. Use a `Blackhole` when the benchmark produces several values.
If the benchmark produces a single value, just return it; it will be implicitly consumed by a `Blackhole`.

### How to Use Blackhole

Inject `Blackhole` into your benchmark method and use it to consume the results of your computations:

```kotlin
@Benchmark
fun iterateBenchmark(bh: Blackhole) {
    // myList is a field of the enclosing benchmark class.
    for (e in myList) {
        bh.consume(e)
    }
}
```

By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away.
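
To see why this matters, compare an illustrative benchmark whose result is discarded with one that returns it (assuming the same `myList` field as above):

```kotlin
// Risky: the result of sum() is never used, so the optimizer
// is free to remove the computation entirely.
@Benchmark
fun deadCodeProne() {
    myList.sum()
}

// Safe: the returned value is implicitly consumed by a Blackhole.
@Benchmark
fun returnsResult(): Int = myList.sum()
```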

For a deeper dive into `Blackhole` and its nuances in JVM, you can refer to:
- [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html)
- [JMH](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254)