@@ -13,31 +13,34 @@ To get started, let's examine a simple example of a multiplatform benchmark:
``` kotlin
import kotlinx.benchmark.*

- @BenchmarkMode(Mode.Throughput)
- @OutputTimeUnit(TimeUnit.MILLISECONDS)
- @Warmup(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
- @Measurement(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
- @BenchmarkTimeUnit(TimeUnit.MILLISECONDS)
+ @BenchmarkMode(Mode.AverageTime)
+ @OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
+ @Warmup(iterations = 10, time = 500, timeUnit = BenchmarkTimeUnit.MILLISECONDS)
+ @Measurement(iterations = 20, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@State(Scope.Benchmark)
class ExampleBenchmark {

+    // Parameterizes the benchmark to run with different list sizes
    @Param("4", "10")
    var size: Int = 0

    private val list = ArrayList<Int>()

+    // Prepares the test environment before each benchmark run
    @Setup
    fun prepare() {
        for (i in 0..<size) {
            list.add(i)
        }
    }

+    // Cleans up resources after each benchmark run
    @TearDown
    fun cleanup() {
        list.clear()
    }

+    // The actual benchmark method
    @Benchmark
    fun benchmarkMethod(): Int {
        return list.sum()
@@ -55,29 +58,30 @@ The following annotations are available to define and fine-tune your benchmarks.

### @State

- The `@State` annotation is used to mark benchmark classes.
- In Kotlin/JVM, however, benchmark classes are not required to be annotated with `@State`.
+ The `@State` annotation specifies the extent to which the state object is shared among the worker threads,
+ and it is mandatory for benchmark classes to be marked with this annotation to define their scope of state sharing.

- In Kotlin/JVM, the annotation specifies the extent to which the state object is shared among the worker threads, e.g., `@State(Scope.Group)`.
+ Currently, multi-threaded execution of a benchmark method is supported only on the JVM, where you can specify various scopes.
Refer to [JMH documentation of Scope](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html)
- for details about available scopes. Multi-threaded execution of a benchmark method is not supported in other Kotlin targets,
- thus only `Scope.Benchmark` is available.
+ for details about available scopes and their implications.
+ In non-JVM targets, only `Scope.Benchmark` is applicable.

+ When writing JVM-only benchmarks, benchmark classes are not required to be annotated with `@State`.

Refer to [JMH documentation of @State](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/State.html)
for details about the effect and restrictions of the annotation in Kotlin/JVM.

- In our snippet, the `ExampleBenchmark` class is marked with `@State(Scope.Benchmark)`,
- indicating that the performance of benchmark methods in this class should be measured.
+ In our snippet, the `ExampleBenchmark` class is annotated with `@State(Scope.Benchmark)`,
+ indicating the state is shared across all worker threads.
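
On Kotlin/JVM, for instance, a narrower scope could look like the following sketch. It assumes the JVM target, where `Scope` resolves to JMH's enum; the class and method names are illustrative, not part of the original example:

```kotlin
import kotlinx.benchmark.*

// JVM only: with Scope.Thread each worker thread gets its own instance
// of the state class, so the counter is never contended between threads
@State(Scope.Thread)
class PerThreadBenchmark {
    private var counter = 0

    @Benchmark
    fun increment(): Int = ++counter
}
```
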

### @Setup

- The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test.
- It serves as a preparatory step where you set up the environment for the benchmark.
+ The `@Setup` annotation marks a method that sets up the necessary preconditions for your benchmark test.
+ It serves as a preparatory step where you initialize the benchmark environment.

- In Kotlin/JVM, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
+ The setup method is executed once before the entire set of iterations for a benchmark method begins.
+ In Kotlin/JVM, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
- for details about available levels. In other targets, it operates always on the `Trial` level, meaning the setup method is
- executed once before the entire set of benchmark method iterations.
+ for details about available levels in Kotlin/JVM.

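On Kotlin/JVM, for example, a per-iteration setup could look like this sketch. It assumes the JVM target, where `Level` comes from JMH; the class and method names are illustrative:

```kotlin
import kotlinx.benchmark.*
import org.openjdk.jmh.annotations.Level

@State(Scope.Benchmark)
class LevelAwareBenchmark {
    private var data = IntArray(0)

    // JVM only: re-runs before every iteration instead of once per trial
    @Setup(Level.Iteration)
    fun prepare() {
        data = IntArray(1_000) { it }
    }

    @Benchmark
    fun sumData(): Int = data.sum()
}
```

On non-JVM targets the `Level` argument is unavailable, so a plain `@Setup` (run once before all iterations) is used instead.
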
The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark
results - the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place
@@ -90,28 +94,28 @@ In the provided example, the `@Setup` annotation is used to populate an `ArrayLi

### @TearDown

- The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s).
- This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method.
+ The `@TearDown` annotation is used to denote a method that resets and cleans up the benchmarking environment.
+ It is chiefly responsible for the cleanup or deallocation of resources and conditions set up in the `@Setup` method.

- In Kotlin/JVM, you can specify when the teardown method should be executed, e.g., `@TearDown(Level.Iteration)`.
+ The teardown method is executed once after the entire iteration set of a benchmark method.
+ In Kotlin/JVM, you can specify when the teardown method should be executed, e.g., `@TearDown(Level.Iteration)`.
Refer to [JMH documentation of Level](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html)
- for details about available levels. In other targets, it operates always on `Trial` level, meaning the teardown method
- is executed once after the entire set of benchmark method iterations.
+ for details about available levels in Kotlin/JVM.

- The `@TearDown` annotation helps you avoid performance bias and ensures the proper maintenance of resources and the
- preparation of a clean environment for the next run. As with the `@Setup` method, the `@TearDown` method's
- execution time is not included in the final benchmark results.
+ The `@TearDown` annotation is crucial for avoiding performance bias, ensuring the proper maintenance of resources,
+ and preparing a clean environment for the next run. Similar to the `@Setup` method, the execution time of the
+ `@TearDown` method is not included in the final benchmark results.

Refer to [JMH documentation of @TearDown](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/TearDown.html)
- for details about the effect and restrictions of the annotation in Kotlin/JVM.
+ for more information on the effect and restrictions of the annotation in Kotlin/JVM.

In our example, the `cleanup` function annotated with `@TearDown` is used to clear our `ArrayList`.

### @Benchmark

The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of.
It's the actual test you're running. The code you want to benchmark goes inside this method.
- All other annotations are used to control different things in measuring operations of benchmark methods.
+ All other annotations are employed to configure the benchmark's environment and execution.

Benchmark methods may include only a single [Blackhole](#blackhole) type as an argument, or have no arguments at all.
It's important to note that in Kotlin/JVM benchmark methods must always be `public`.
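
For instance, a benchmark that produces several values per invocation can take a `Blackhole` parameter and consume each of them. This is a sketch; the class and method names are illustrative:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class MultiValueBenchmark {
    private val numbers = (1..100).toList()

    // Handing every produced value to the Blackhole keeps the compiler
    // from discarding the loop body as dead code
    @Benchmark
    fun squareAll(bh: Blackhole) {
        for (n in numbers) {
            bh.consume(n * n)
        }
    }
}
```
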
@@ -126,30 +130,29 @@ which means the toolkit will measure the performance of the operation of summing

The `@BenchmarkMode` annotation sets the mode of operation for the benchmark.

Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum.
- In Kotlin/JVM, the `Mode` enum has several options, including `SingleShotTime`.
-
- Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
- for details about available options. In other targets, only `Throughput` and `AverageTime` are available.
`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit
of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it
takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`.
+ In Kotlin/JVM, the `Mode` enum has a few more options, including `SingleShotTime`.
+ Refer to [JMH documentation of Mode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Mode.html)
+ for details about available options in Kotlin/JVM.

The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @BenchmarkMode](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/BenchmarkMode.html)
for details about the effect of the annotation in Kotlin/JVM.

- In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the
- benchmark method can be executed per unit of time.
+ In our example, `@BenchmarkMode(Mode.AverageTime)` is used, indicating that the benchmark aims to measure the
+ average execution time of the benchmark method.

### @OutputTimeUnit

The `@OutputTimeUnit` annotation specifies the time unit in which your results will be presented.
This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds,
- presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement.
- Conversely, for operations with longer execution times, you might choose to display the output in microseconds, seconds, or even minutes.
+ presenting the result in nanoseconds or microseconds provides a more accurate and detailed measurement.
+ Conversely, for operations with longer execution times, you might choose to display the output in milliseconds, seconds, or even minutes.
Essentially, the `@OutputTimeUnit` annotation enhances the readability and interpretability of benchmark results.
- If this annotation isn't specified, it defaults to using seconds as the time unit.
+ By default, if the annotation is not specified, results are presented in seconds.

The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
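
On Kotlin/JVM, for example, a method-level annotation overriding the class-level configuration could look like this sketch (the class and method names are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
class MixedConfigBenchmark {
    // Uses the class-level settings: average time, reported in milliseconds
    @Benchmark
    fun smallSum(): Int = (1..10).sum()

    // JVM only: the method-level mode overrides the class-level one
    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    fun largeSum(): Int = (1..1_000).sum()
}
```
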
@@ -165,27 +168,27 @@ During this warmup phase, the code in your `@Benchmark` method is executed sever
in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its
optimal performance state so that the results of measurement iterations are more stable.

- The annotation is put at the enclosing class and have the effect over all `@Benchmark` methods in the class.
+ The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @Warmup](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Warmup.html)
for details about the effect of the annotation in Kotlin/JVM.

- In our example, the `@Warmup` annotation is used to allow 20 iterations, each lasting one second,
- of executing the benchmark method before the actual measurement starts.
+ In our example, the `@Warmup` annotation is used to allow 10 iterations of executing the benchmark method before
+ the actual measurement starts. Each iteration lasts 500 milliseconds.

### @Measurement

The `@Measurement` annotation controls the properties of the actual benchmarking phase.
It sets how many iterations the benchmark method is run and how long each run should last.
The results from these runs are recorded and reported as the final benchmark results.

- The annotation is put at the enclosing class and have the effect over all `@Benchmark` methods in the class.
+ The annotation is put at the enclosing class and has the effect over all `@Benchmark` methods in the class.
In Kotlin/JVM, it may be put at `@Benchmark` method to have effect on that method only.
Refer to [JMH documentation of @Measurement](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Measurement.html)
for details about the effect of the annotation in Kotlin/JVM.

- In our example, the `@Measurement` annotation specifies that the benchmark method will be run 20 iterations
- for a duration of one second for the final performance measurement.
+ In our example, the `@Measurement` annotation specifies that the benchmark method will run 20 iterations,
+ with each iteration lasting one second, for the final performance measurement.
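
As a rough sanity check of these settings, the warmup and measurement phases imply a predictable wall-clock budget per run. The sketch below ignores setup, teardown, and harness overhead, and the helper function is hypothetical:

```kotlin
// Approximate wall-clock budget of one benchmark run:
// warmup iterations plus measurement iterations, each at its configured duration
fun approximateBudgetSeconds(
    warmupIterations: Int,
    warmupSeconds: Double,
    measurementIterations: Int,
    measurementSeconds: Double
): Double = warmupIterations * warmupSeconds + measurementIterations * measurementSeconds

fun main() {
    // 10 warmup iterations of 500 ms plus 20 measurement iterations of 1 s
    // give about 25 seconds per parameter value; the two @Param values
    // ("4" and "10") bring the example to roughly 50 seconds in total
    println(approximateBudgetSeconds(10, 0.5, 20, 1.0)) // prints 25.0
}
```
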

### @Param
@@ -213,7 +216,7 @@ for available annotations.

Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results.
In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code
- elimination by the compiler or the runtime virtual machine. A `Blackhole` is used when the benchmark produces several values.
+ elimination by the compiler or the runtime virtual machine. A `Blackhole` should be used when the benchmark produces several values.
If the benchmark produces a single value, just return it. It will be implicitly consumed by a `Blackhole`.

### How to Use Blackhole:
@@ -232,5 +235,5 @@ fun iterateBenchmark(bh: Blackhole) {
By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away.

For a deeper dive into `Blackhole` and its nuances in JVM, you can refer to:
- - [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html)
+ - [Official Javadocs](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/infra/Blackhole.html)
- [JMH](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254)