[![JetBrains incubator project](https://jb.gg/badges/incubator.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)
[![GitHub license](https://img.shields.io/badge/license-Apache%20License%202.0-blue.svg?style=flat)](https://www.apache.org/licenses/LICENSE-2.0)

`kotlinx.benchmark` is a toolkit for running benchmarks for code written in Kotlin,
supporting all Kotlin targets: JVM, JavaScript and Native.

If you're familiar with [JMH](https://openjdk.java.net/projects/code-tools/jmh/), it is very similar and uses JMH under
the hood to run benchmarks on the JVM.

# Gradle plugin

Add a repository in `settings.gradle` to enable the bintray repository for plugin lookup:

```groovy
pluginManagement {
    repositories {
        maven { url 'https://dl.bintray.com/kotlin/kotlinx' }
        gradlePluginPortal()
    }
}
```

TODO: This is not needed in latest Gradle versions.

If you are using it for Kotlin Multiplatform, enable metadata in `settings.gradle`:

```groovy
enableFeaturePreview('GRADLE_METADATA')
```
30
+ TODO: Migrate to 1.3.40 node.js integrated support.
23
31
For Kotlin/JS code, add Node plugin as well:
24
32
25
33
``` groovy
@@ -32,34 +40,36 @@ node {
32
40
}
33
41
```

For Kotlin/JVM code, add the `allopen` plugin to make JMH happy. Alternatively, make all benchmark classes and methods `open`.

```groovy
plugins {
    id 'org.jetbrains.kotlin.plugin.allopen' version "1.3.31"
}

allOpen {
    annotation("org.openjdk.jmh.annotations.State")
}
```
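To see why this matters: JMH generates subclasses of `@State`-annotated classes, so the class and its benchmark methods must be `open`. Below is a minimal sketch of what a benchmark class looks like; the class name is hypothetical, and the annotation declarations are self-contained stand-ins for the real ones (`org.openjdk.jmh.annotations.*` on the JVM), included only so the snippet compiles on its own.

```kotlin
// Stand-in annotations: in a real project these come from JMH
// (org.openjdk.jmh.annotations) or the multiplatform runtime library.
annotation class State
annotation class Param
annotation class Benchmark

@State
open class FibonacciBenchmark { // `open` because JMH generates subclasses

    @Param
    var size = 20 // can be overridden via the `param("size", …)` configuration option

    @Benchmark
    fun fibonacci(): Int {
        // The measured operation: iterative Fibonacci of `size`.
        var a = 0
        var b = 1
        repeat(size) {
            val next = a + b
            a = b
            b = next
        }
        return a
    }
}
```

With the real annotations and the plugin applied, classes like this are picked up automatically from the registered source sets; no manual wiring is needed.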

# Runtime Library

You need a runtime library with annotations and code that will run benchmarks on the JavaScript and Native platforms.

```groovy
repositories {
    maven { url 'https://dl.bintray.com/kotlin/kotlinx' }
}

dependencies {
    implementation "org.jetbrains.kotlinx:kotlinx.benchmark.runtime:0.2.0"
}
```

# Configuration

In a `build.gradle` file, create a `benchmark` section, and inside it add a `targets` section.
In this section, register all targets you want to run benchmarks from.
Example for a multiplatform project:

```groovy
benchmark {
    targets {
        register("jvm")
        register("js")
        register("native")
    }
}
```
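Once targets are registered, the plugin exposes Gradle tasks for running them. As a hedged usage sketch (the exact task names are an assumption, not stated in this section; `benchmark` is assumed to run the default configuration across all registered targets):

```
# Assumed task name; run from the project root
./gradlew benchmark
```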

To configure benchmarks and create multiple profiles, create a `configurations` section in the `benchmark` block,
and place options inside. The toolkit creates a `main` configuration by default, and you can create as many additional
configurations as you need.

```groovy
benchmark {
    configurations {
        main {
            // configure default configuration
        }
        smoke {
            // create and configure "smoke" configuration, e.g. with several fast benchmarks to quickly check
            // if code changes result in something very wrong, or very right
        }
    }
}
```

Available configuration options:

* `iterations` – number of measurement iterations
* `warmups` – number of warmup iterations
* `iterationTime` – time to run each iteration (both measurement and warmup)
* `iterationTimeUnit` – time unit for `iterationTime` (default is seconds)
* `outputTimeUnit` – time unit for results output
* `mode` – "thrpt" to measure the number of operations per unit of time, or "avgt" to measure the time per operation
* `include("…")` – regular expression to include benchmarks whose fully qualified names match it as a substring
* `exclude("…")` – regular expression to exclude benchmarks whose fully qualified names match it as a substring
* `param("name", "value1", "value2")` – values for a public mutable property `name` annotated with `@Param`

Time units can be NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, or their short variants such as "ms" or "ns".

Example:

```groovy
benchmark {
    // Create configurations
    configurations {
        main { // main configuration is created automatically, but you can change its defaults
            warmups = 20 // number of warmup iterations
            iterations = 10 // number of iterations
            iterationTime = 3 // time in seconds per iteration
        }
        smoke {
            warmups = 5 // number of warmup iterations
            iterations = 3 // number of iterations
            iterationTime = 500 // time per iteration
            iterationTimeUnit = "ms"
        }
    }

    // Register targets to run benchmarks on
    targets {
        register("jvm")
        register("js")
        register("native")
    }
}
```

# Separate source sets for benchmarks

Often you want to have benchmarks in the same project, but separated from the main code, much like tests. Here is how: