# Mastering kotlinx-benchmark Configuration

This guide covers the configuration options that let you fine-tune your benchmarking setup to suit your specific needs.

## The `configurations` Section

The `configurations` section of the `benchmark` block serves as the control center for setting the parameters of your benchmark profiles. The library provides a default configuration profile named "main", which can be configured according to your needs just like any other profile. Here's the basic structure of how configurations can be set up:

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        register("smoke") {
            // Configure this configuration profile here
        }
        // Here you can create additional profiles
    }
}
```

## Understanding Configuration Profiles

Configuration profiles dictate how your benchmarks are executed:

- Use the `include` and `exclude` options to select specific benchmarks for a profile; by default, every benchmark is included (see the sketch below).
- Each configuration profile translates to a task in the `kotlinx-benchmark` Gradle plugin. For instance, the task `smokeBenchmark` runs benchmarks according to the `"smoke"` configuration profile. For an overview of tasks, refer to [tasks-overview.md](tasks-overview.md).
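
For example, here's a sketch of a profile that narrows the run with `include` and `exclude`; the package and class names in the patterns are placeholders:

```kotlin
// build.gradle.kts -- illustrative filter patterns
benchmark {
    configurations {
        register("smoke") {
            include("com.example.fast")    // run only benchmarks whose fully qualified names match
            exclude(".*SlowBenchmark.*")   // skip anything matching this pattern
        }
    }
}
```

Running the `smokeBenchmark` task then executes only the benchmarks that pass these filters.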

## Core Configuration Options

Note that values defined in the build script take precedence over those specified by annotations in the code.

| Option                              | Description                                                                                                                                                      | Possible Values                                            | Corresponding Annotation                              |
|-------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|-------------------------------------------------------|
| `iterations`                        | Sets the number of measurement iterations.                                                                                                                       | Positive integer                                           | `@Measurement(iterations: Int, ...)`                  |
| `warmups`                           | Sets the number of warm-up iterations used to bring the system to a steady state before measuring.                                                              | Non-negative integer                                       | `@Warmup(iterations: Int)`                            |
| `iterationTime`                     | Sets the duration of each iteration, both measurement and warm-up.                                                                                              | Positive integer                                           | `@Measurement(..., time: Int, ...)`                   |
| `iterationTimeUnit`                 | Defines the unit for `iterationTime`.                                                                                                                            | Time unit, see below                                       | `@Measurement(..., timeUnit: BenchmarkTimeUnit, ...)` |
| `outputTimeUnit`                    | Sets the unit in which results are displayed.                                                                                                                    | Time unit, see below                                       | `@OutputTimeUnit(value: BenchmarkTimeUnit)`           |
| `mode`                              | Selects "thrpt" (Throughput), which measures the number of function calls per unit of time, or "avgt" (AverageTime), which measures the time per function call. | `"thrpt"`, `"Throughput"`, `"avgt"`, `"AverageTime"`       | `@BenchmarkMode(value: Mode)`                         |
| `include("…")`                      | Applies a regular expression to include benchmarks whose fully qualified names contain a matching substring.                                                    | Regex pattern                                              | -                                                     |
| `exclude("…")`                      | Applies a regular expression to exclude benchmarks whose fully qualified names contain a matching substring.                                                    | Regex pattern                                              | -                                                     |
| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`.                                                                   | String values that represent valid values for the property | `@Param`                                              |
| `reportFormat`                      | Defines the format of the benchmark report.                                                                                                                      | `"json"` (default), `"csv"`, `"scsv"`, `"text"`            | -                                                     |

The following values can be used to specify a time unit:
- "NANOSECONDS", "ns", "nanos"
- "MICROSECONDS", "us", "micros"
- "MILLISECONDS", "ms", "millis"
- "SECONDS", "s", "sec"
- "MINUTES", "m", "min"
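
As a concrete sketch, several of these options can be combined in a single profile. The profile name, regex pattern, and parameter values below are made up for illustration:

```kotlin
// build.gradle.kts -- illustrative profile; names and values are placeholders
benchmark {
    configurations {
        register("precise") {
            iterations = 10                  // 10 measurement iterations
            warmups = 5                      // 5 warm-up iterations
            mode = "avgt"                    // measure average time per function call
            outputTimeUnit = "us"            // display results in microseconds
            include(".*ListBenchmark.*")     // hypothetical benchmark class
            param("size", "100", "10000")    // values for a @Param-annotated property named "size"
            reportFormat = "json"            // the default format, shown here explicitly
        }
    }
}
```

Because build-script values take precedence, these settings override any `@Measurement`, `@Warmup`, `@BenchmarkMode`, or `@OutputTimeUnit` annotations on the affected benchmarks.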

## Platform-Specific Configuration Options

The options listed in the following sections allow you to tailor the benchmark execution behavior for specific platforms:

### Kotlin/Native

| Option                                      | Description                                                                                                             | Possible Values                    | Default Value    |
|---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|------------------------------------|------------------|
| `advanced("nativeFork", "value")`           | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | `"perBenchmark"`, `"perIteration"` | `"perBenchmark"` |
| `advanced("nativeGCAfterIteration", value)` | Whether to trigger garbage collection after each iteration.                                                             | `true`, `false`                    | `false`          |
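
For instance, to run each iteration in a separate process and force garbage collection between iterations, a profile could set both options (the profile name is illustrative):

```kotlin
// build.gradle.kts -- illustrative Kotlin/Native settings
benchmark {
    configurations {
        register("nativeIsolated") {
            advanced("nativeFork", "perIteration")     // run each iteration in its own process
            advanced("nativeGCAfterIteration", true)   // trigger GC after every iteration
        }
    }
}
```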

### Kotlin/JVM

| Option                        | Description                                            | Possible Values                        | Default Value |
|-------------------------------|--------------------------------------------------------|----------------------------------------|---------------|
| `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. | Non-negative integer, `"definedByJmh"` | `1`           |

**Notes on "jvmForks":**
- **0** - "no fork", i.e., no subprocesses are forked to run benchmarks.
- **A positive integer** - the fork count used for all benchmarks in this configuration.
- **"definedByJmh"** - lets JMH determine the fork count, using the value in the [`@Fork` annotation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Fork.html) of the benchmark function or its enclosing class. If `@Fork` does not specify it, the count defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS).
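
A minimal sketch of how this option is set (the profile name is illustrative):

```kotlin
// build.gradle.kts -- illustrative forking settings
benchmark {
    configurations {
        register("forked") {
            advanced("jvmForks", 2)                 // fork twice for every benchmark in this profile
            // advanced("jvmForks", "definedByJmh") // or defer to @Fork / JMH defaults
        }
    }
}
```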

The library also lets you specify the version of the Java Microbenchmark Harness (JMH) to use when running benchmarks on the JVM.
The default version is `1.21`, but you can customize it when registering a JVM target for benchmarking:

```kotlin
benchmark {
    targets {
        register("jvmBenchmarks") {
            this as JvmBenchmarkTarget
            jmhVersion = "1.36"
        }
    }
}
```

Alternatively, you can set the project property `benchmarks_jmh_version` (for example, in `gradle.properties`) to achieve the same effect.

### Kotlin/JS & Kotlin/Wasm

| Option                           | Description                                                         | Possible Values | Default Value |
|----------------------------------|---------------------------------------------------------------------|-----------------|---------------|
| `advanced("jsUseBridge", value)` | Generates special benchmark bridges to stop inlining optimizations. | `true`, `false` | `true`        |

**Note:** In the Kotlin/JS target, the "jsUseBridge" option only takes effect when the `BuiltIn` benchmark executor is selected.

By default, kotlinx-benchmark uses the `benchmark.js` library for running benchmarks in Kotlin/JS.
However, you can select the library's built-in benchmarking implementation instead:
```kotlin
benchmark {
    targets {
        register("jsBenchmarks") {
            this as JsBenchmarkTarget
            jsBenchmarksExecutor = JsBenchmarksExecutor.BuiltIn
        }
    }
}
```
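
With the `BuiltIn` executor selected, bridge generation can then be toggled from a configuration profile, for example (the profile name is illustrative):

```kotlin
// build.gradle.kts -- illustrative profile that disables the benchmark bridges
benchmark {
    configurations {
        register("jsNoBridge") {
            advanced("jsUseBridge", false)   // skip generating the bridges that prevent inlining optimizations
        }
    }
}
```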