Closed
79 commits
f207e27
[ES|QL] Remove implicit `limit` appended to each subquery branch (#13…
fang-xing-esql Jan 14, 2026
5c046f6
ESQL: Added more null aggs tests on RATE (#140585)
ivancea Jan 14, 2026
8c69439
Make METADATA _tier attribute snapshot only (#140578)
idegtiarenko Jan 14, 2026
aa0f003
[ES|QL] Refactor inference operator architecture for multi-value fiel…
afoucret Jan 14, 2026
f16f955
Fix updatecli configuration for updating Iron Bank docker images (#14…
jozala Jan 14, 2026
c4fe99d
Drop CPS `project_routing` param for search (#140640)
pawankartik-elastic Jan 14, 2026
1ac4f67
ES|QL: Add exponential_histogram merge aggregator tests (#140563)
JonasKunz Jan 14, 2026
e0a15c4
ESQL: Enable nullify and fail unmapped resolution in tech-preview (#1…
GalLalouche Jan 14, 2026
7f7fdec
PromQL: only accept children that return a range vector in cross seri…
felixbarny Jan 14, 2026
50f032c
PromQL: support for top-level binary operators (#140541)
felixbarny Jan 14, 2026
0d0a9b2
[TEST] Don't inject boundary tuples for rate calculation when one exi…
kkrik-es Jan 14, 2026
77f5fd7
Mute org.elasticsearch.xpack.esql.qa.single_node.EsqlSpecIT test {csv…
elasticsearchmachine Jan 14, 2026
d20ffd4
[ES|QL] Text embedding function GA (#140555)
afoucret Jan 14, 2026
1bab457
Mute org.elasticsearch.xpack.logsdb.RandomizedRollingUpgradeIT testIn…
elasticsearchmachine Jan 14, 2026
1ef764f
Mute org.elasticsearch.xpack.logsdb.RandomizedRollingUpgradeIT testIn…
elasticsearchmachine Jan 14, 2026
ed365c2
Test snapshot with synthetic id (#140458)
burqen Jan 14, 2026
72779d3
Unmute + add logging for CrossClusterCancellationIT.testCancelSkipUna…
smalyshev Jan 14, 2026
6cc84ea
Mute org.elasticsearch.xpack.esql.heap_attack.HeapAttackSubqueryIT te…
elasticsearchmachine Jan 14, 2026
bf787fa
Mute org.elasticsearch.datastreams.TSDBSyntheticIdsIT testRecoveredOp…
elasticsearchmachine Jan 14, 2026
5d707e6
Update the way inference test service computes rerank score: use a has…
Jan 9, 2026
d2ab99d
Implement rerank using multi-value fields.
Jan 14, 2026
b7e695c
Make GetInferenceFieldsAction an Indices Action (#140399)
Mikep86 Jan 14, 2026
e78a783
Update docs/changelog/140672.yaml
afoucret Jan 14, 2026
ad29c09
Mute org.elasticsearch.xpack.esql.parser.SetParserTests testSetUnmapp…
elasticsearchmachine Jan 14, 2026
a233680
Finalize release notes for v9.1.10 release (#140601)
elasticsearchmachine Jan 14, 2026
9d7ff18
Finalize release notes for v9.2.4 release (#140599)
elasticsearchmachine Jan 14, 2026
812006d
Deduplicate Inference Failures in ShardBulkInferenceActionFilter (#14…
Mikep86 Jan 14, 2026
f054060
Add a callout for the 200M limit to max_primary_shard_docs (#140625)
dakrone Jan 14, 2026
a26808f
Removing useIlm method from MachineLearningExtension (#140128)
masseyke Jan 14, 2026
59f3a6e
Add realistic data pattern benchmarks for TSDB codec (#140390)
salvatore-campagna Jan 14, 2026
d773b74
Migrate WildcardRollingUpgradeIT to run in serverless (#140618)
Kubik42 Jan 14, 2026
c30cf6a
ESQL: Unmute GenerativeMetrics test (#140608)
limotova Jan 14, 2026
a234734
Docs typo - fixing URL resolution (#140616)
jilldoty-elastic Jan 14, 2026
241060f
Unmute tests that were failing because of swisshash table feature fla…
martijnvg Jan 14, 2026
9cef550
[DiskBBQ] Extract and allow increased postings prefetch (#140314)
benwtrent Jan 14, 2026
a6a3573
Increase ensureGreenTimeout on RandomizedRollingUpgradeIT (#140688)
jordan-powers Jan 15, 2026
2810c47
Make balancer threshold configurable per tier (#140632)
nicktindall Jan 15, 2026
9d99472
Update wolfi (versioned) (#140569)
elastic-renovate-prod[bot] Jan 15, 2026
fed49b1
Vector functions are GA (#140545)
carlosdelest Jan 15, 2026
8dc8eda
Count FC calls during ESQL index resolution (#140576)
idegtiarenko Jan 15, 2026
9531ad8
Additional ESQL tests (#140590)
idegtiarenko Jan 15, 2026
7d71a46
chore: deps(updatecli): bump "ghcr.io/updatecli/policies/autodiscover…
jozala Jan 15, 2026
a97c399
deps: Bump ironbank version to 9.7 (#140703)
jozala Jan 15, 2026
8f6e07f
Fix a test.
Jan 15, 2026
4671732
Fix CSV tests.
Jan 15, 2026
b2736fc
[Docs] ES|QL - Add exact search tutorial (#140552)
carlosdelest Jan 15, 2026
1f0d939
Allow using an expression without naming it in RERANK.
Jan 15, 2026
9889803
Fix 0 padding in SharedBytes#copy method (#140588)
albertzaharovits Jan 15, 2026
5b8e24f
ES|QL: Prune fork branches with empty results (#140593)
ioanatia Jan 15, 2026
d4d7753
Add test cases to use rerank with snippets.
Jan 15, 2026
0a92a0c
Handle different boolean clause count for IndexOrDocValuesQuery and I…
iverase Jan 15, 2026
a434650
Improve ES|QL rate performance for high-cardinality using local circu…
JonasKunz Jan 15, 2026
5c16a38
Fixing bad field name in tests.
Jan 15, 2026
eaf4b55
Merge branch 'main' into esql-mv-rerank
afoucret Jan 15, 2026
dd2f6ca
Upgrade zstd to version 1.5.7 (#140530)
parkertimmins Jan 15, 2026
33d2912
Add new IndexVersion for reading .si files from memory (#138279)
blerer Jan 15, 2026
9f07289
Add more info in `testTimeSeriesQuerying` (#140716)
gmarouli Jan 15, 2026
9e9d277
Fix testReadBlobWithReadTimeouts retries count (#139999)
mhl-b Jan 15, 2026
54e1ab7
Small fix on tests.
Jan 15, 2026
6668004
Feature/promql add integration tests batch4 (#140560)
sidosera Jan 15, 2026
76e9a5a
Snapshot shutdown progress tracker test fix (#139447)
joshua-adams-1 Jan 15, 2026
f4ea30e
Enable extended doc values parameters feature flag for ESQL tests (#1…
jordan-powers Jan 15, 2026
ffbbb39
Refactor: Use single constant for default exponential histogram bucke…
JonasKunz Jan 15, 2026
a5ef68d
Update the doc.
Jan 15, 2026
a7d8bdd
ESQL: allow empty results (#139181)
idegtiarenko Jan 15, 2026
628dbe3
Remove all usages of TransportVersionUtils.randomVersionBetween (#140…
mark-vieira Jan 15, 2026
ac66d2f
Mute org.elasticsearch.xpack.esql.optimizer.PhysicalPlanOptimizerTest…
elasticsearchmachine Jan 15, 2026
5cda4dd
Mute org.elasticsearch.xpack.esql.optimizer.PhysicalPlanOptimizerTest…
elasticsearchmachine Jan 15, 2026
c5bcd6c
Mute org.elasticsearch.xpack.esql.optimizer.PhysicalPlanOptimizerTest…
elasticsearchmachine Jan 15, 2026
2d91f07
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testRemoveNodeDur…
elasticsearchmachine Jan 15, 2026
ed858e9
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testRemoveNodeDur…
elasticsearchmachine Jan 15, 2026
9b651a9
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testStartRemoveNo…
elasticsearchmachine Jan 15, 2026
e805b9e
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testAbortSnapshot…
elasticsearchmachine Jan 15, 2026
127d38f
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testShutdownWhile…
elasticsearchmachine Jan 15, 2026
4c2d69d
Mute org.elasticsearch.snapshots.SnapshotShutdownIT testSnapshotShutd…
elasticsearchmachine Jan 15, 2026
8075f88
Mute org.elasticsearch.compute.aggregation.AllLastBytesRefByTimestamp…
elasticsearchmachine Jan 15, 2026
499d368
Add known issue for upgrading to 9.2.4 (#140738)
rjernst Jan 15, 2026
1c23ba4
Store flattened field data in binary doc values (#140246)
jordan-powers Jan 15, 2026
1335684
Merge branch 'main' into esql-mv-rerank
afoucret Jan 15, 2026
2 changes: 1 addition & 1 deletion .github/updatecli/values.d/ironbank.yml
@@ -1,3 +1,3 @@
 config:
   - path: distribution/docker/src/docker/iron_bank
-    dockerfile: ../Dockerfile
+    dockerfile: ../dockerfiles/ironbank/Dockerfile
@@ -41,7 +41,6 @@
 @State(Scope.Benchmark)
 public class DecodeConstantIntegerBenchmark {
     private static final int SEED = 17;
-    private static final int BLOCK_SIZE = 128;

     @Param({ "1", "4", "8", "9", "16", "17", "24", "25", "32", "33", "40", "48", "56", "57", "64" })
     private int bitsPerValue;
@@ -59,12 +58,12 @@ public void setupInvocation() throws IOException {

     @Setup(Level.Trial)
     public void setupTrial() throws IOException {
-        decode.setupTrial(new ConstantIntegerSupplier(SEED, bitsPerValue, BLOCK_SIZE));
+        decode.setupTrial(new ConstantIntegerSupplier(SEED, bitsPerValue, decode.getBlockSize()));
     }

     @Benchmark
     public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
         decode.benchmark(bh);
-        metrics.recordOperation(BLOCK_SIZE, decode.getEncodedSize());
+        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
     }
 }
@@ -0,0 +1,73 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.benchmark.index.codec.tsdb;

import org.elasticsearch.benchmark.index.codec.tsdb.internal.AbstractTSDBCodecBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.CounterWithResetsSupplier;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.DecodeBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.ThroughputMetrics;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

/**
 * Benchmark for decoding counter-with-resets data patterns.
 *
 * <p>Parameterized by resetProbability to test how reset frequency affects
 * decoding performance. Lower probability means longer monotonic runs between
 * resets, while higher probability creates more frequent jumps back to zero.
 */
@Fork(value = 1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class DecodeCounterWithResetsBenchmark {
    private static final int SEED = 17;

    @Param({ "0.01", "0.02", "0.05" })
    private double resetProbability;

    private final AbstractTSDBCodecBenchmark decode;

    public DecodeCounterWithResetsBenchmark() {
        this.decode = new DecodeBenchmark();
    }

    @Setup(Level.Invocation)
    public void setupInvocation() throws IOException {
        decode.setupInvocation();
    }

    @Setup(Level.Trial)
    public void setupTrial() throws IOException {
        decode.setupTrial(CounterWithResetsSupplier.builder(SEED, decode.getBlockSize()).withResetProbability(resetProbability).build());
    }

    @Benchmark
    public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
        decode.benchmark(bh);
        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
    }
}
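The benchmark above delegates data generation to `CounterWithResetsSupplier`, whose implementation is not part of this diff. A minimal sketch of what a counter-with-resets generator could look like (the `CounterWithResetsSketch` class, its `generate` helper, and the increment range are assumptions for illustration, not the actual supplier):

```java
import java.util.Random;

// Hypothetical generator: values climb monotonically, but with probability
// resetProbability a sample first resets the counter to zero, mimicking a
// restarted process whose counter metric starts over.
public class CounterWithResetsSketch {

    public static long[] generate(long seed, int blockSize, double resetProbability) {
        Random random = new Random(seed);
        long[] values = new long[blockSize];
        long current = 0;
        for (int i = 0; i < blockSize; i++) {
            if (random.nextDouble() < resetProbability) {
                current = 0; // counter reset
            }
            current += 1 + random.nextInt(100); // positive increment between resets
            values[i] = current;
        }
        return values;
    }

    public static void main(String[] args) {
        long[] block = generate(17, 128, 0.05);
        int resets = 0;
        for (int i = 1; i < block.length; i++) {
            if (block[i] < block[i - 1]) {
                resets++;
            }
        }
        System.out.println("samples=" + block.length + " resets=" + resets);
    }
}
```

Lower `resetProbability` yields longer monotonic runs, which is exactly the knob the `@Param` sweep above exercises.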
@@ -41,7 +41,6 @@
 @State(Scope.Benchmark)
 public class DecodeDecreasingIntegerBenchmark {
     private static final int SEED = 17;
-    private static final int BLOCK_SIZE = 128;

     @Param({ "1", "4", "8", "9", "16", "17", "24", "25", "32", "33", "40", "48", "56", "57", "64" })
     private int bitsPerValue;
@@ -59,12 +58,12 @@ public void setupInvocation() throws IOException {

     @Setup(Level.Trial)
     public void setupTrial() throws IOException {
-        decode.setupTrial(new DecreasingIntegerSupplier(SEED, bitsPerValue, BLOCK_SIZE));
+        decode.setupTrial(new DecreasingIntegerSupplier(SEED, bitsPerValue, decode.getBlockSize()));
     }

     @Benchmark
     public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
         decode.benchmark(bh);
-        metrics.recordOperation(BLOCK_SIZE, decode.getEncodedSize());
+        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
     }
 }
@@ -0,0 +1,73 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.benchmark.index.codec.tsdb;

import org.elasticsearch.benchmark.index.codec.tsdb.internal.AbstractTSDBCodecBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.DecodeBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.GaugeLikeSupplier;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.ThroughputMetrics;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

/**
 * Benchmark for decoding gauge-like data patterns.
 *
 * <p>Parameterized by varianceRatio to test how different fluctuation intensities
 * affect decoding performance. Lower variance means values stay closer to the baseline,
 * while higher variance creates more volatile oscillations.
 */
@Fork(value = 1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class DecodeGaugeLikeBenchmark {
    private static final int SEED = 17;

    @Param({ "0.05", "0.1", "0.2" })
    private double varianceRatio;

    private final AbstractTSDBCodecBenchmark decode;

    public DecodeGaugeLikeBenchmark() {
        this.decode = new DecodeBenchmark();
    }

    @Setup(Level.Invocation)
    public void setupInvocation() throws IOException {
        decode.setupInvocation();
    }

    @Setup(Level.Trial)
    public void setupTrial() throws IOException {
        decode.setupTrial(GaugeLikeSupplier.builder(SEED, decode.getBlockSize()).withVarianceRatio(varianceRatio).build());
    }

    @Benchmark
    public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
        decode.benchmark(bh);
        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
    }
}
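`GaugeLikeSupplier` itself is not shown in this diff. A minimal sketch of the pattern its javadoc describes, assuming a uniform fluctuation around a fixed baseline (the `GaugeLikeSketch` class, the `baseline` parameter, and the noise distribution are illustrative assumptions, not the real supplier):

```java
import java.util.Random;

// Hypothetical gauge-like generator: each value is the baseline scaled by a
// uniform noise term in [-varianceRatio, +varianceRatio), so higher ratios
// produce more volatile oscillations around the same center.
public class GaugeLikeSketch {

    public static long[] generate(long seed, int blockSize, long baseline, double varianceRatio) {
        Random random = new Random(seed);
        long[] values = new long[blockSize];
        for (int i = 0; i < blockSize; i++) {
            double noise = (random.nextDouble() * 2 - 1) * varianceRatio;
            values[i] = Math.round(baseline * (1.0 + noise));
        }
        return values;
    }

    public static void main(String[] args) {
        // varianceRatio 0.2 keeps every sample within 20% of the baseline.
        long[] block = generate(17, 128, 1_000_000L, 0.2);
        long min = Long.MAX_VALUE;
        long max = Long.MIN_VALUE;
        for (long v : block) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        System.out.println("min=" + min + " max=" + max);
    }
}
```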
@@ -0,0 +1,73 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.benchmark.index.codec.tsdb;

import org.elasticsearch.benchmark.index.codec.tsdb.internal.AbstractTSDBCodecBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.DecodeBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.GcdFriendlySupplier;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.ThroughputMetrics;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

/**
 * Benchmark for decoding GCD-friendly data patterns.
 *
 * <p>Parameterized by GCD value to test how the GCD compression stage handles
 * different divisors: 1 (no GCD benefit), small primes (7, 127), powers of 2
 * (64, 1024), and common values (100, 1000).
 */
@Fork(value = 1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class DecodeGcdFriendlyBenchmark {
    private static final int SEED = 17;

    @Param({ "1", "7", "64", "100", "127", "1000", "1024" })
    private long gcd;

    private final AbstractTSDBCodecBenchmark decode;

    public DecodeGcdFriendlyBenchmark() {
        this.decode = new DecodeBenchmark();
    }

    @Setup(Level.Invocation)
    public void setupInvocation() throws IOException {
        decode.setupInvocation();
    }

    @Setup(Level.Trial)
    public void setupTrial() throws IOException {
        decode.setupTrial(GcdFriendlySupplier.builder(SEED, decode.getBlockSize()).withGcd(gcd).build());
    }

    @Benchmark
    public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
        decode.benchmark(bh);
        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
    }
}
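A short sketch of why GCD-friendly data compresses well, which is the effect this benchmark sweeps over: when every value in a block shares a common divisor, an encoder can store `value / gcd` plus the divisor once, and the quotients need fewer bits. The `GcdFriendlySketch` class and its multiplier range are illustrative assumptions, not the actual `GcdFriendlySupplier` or codec:

```java
import java.util.Random;

// Hypothetical GCD-friendly generator plus a bit-width helper showing the
// saving: dividing out gcd = 1024 shaves exactly 10 bits off every value.
public class GcdFriendlySketch {

    public static long[] generate(long seed, int blockSize, long gcd) {
        Random random = new Random(seed);
        long[] values = new long[blockSize];
        for (int i = 0; i < blockSize; i++) {
            values[i] = gcd * (1 + random.nextInt(1 << 16)); // multipliers in [1, 65536]
        }
        return values;
    }

    // Bits needed to represent the largest value in the block.
    public static int bitsRequired(long[] values) {
        long max = 0;
        for (long v : values) {
            max = Math.max(max, v);
        }
        return 64 - Long.numberOfLeadingZeros(max);
    }

    public static void main(String[] args) {
        long gcd = 1024;
        long[] raw = generate(17, 128, gcd);
        long[] divided = new long[raw.length];
        for (int i = 0; i < raw.length; i++) {
            divided[i] = raw[i] / gcd;
        }
        // Roughly log2(gcd) fewer bits per value after the GCD stage.
        System.out.println(bitsRequired(raw) + " -> " + bitsRequired(divided));
    }
}
```

With `gcd` = 1, as in the first `@Param` value above, the quotients equal the raw values and the stage buys nothing.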
@@ -41,7 +41,6 @@
 @State(Scope.Benchmark)
 public class DecodeIncreasingIntegerBenchmark {
     private static final int SEED = 17;
-    private static final int BLOCK_SIZE = 128;

     @Param({ "1", "4", "8", "9", "16", "17", "24", "25", "32", "33", "40", "48", "56", "57", "64" })
     private int bitsPerValue;
@@ -59,12 +58,12 @@ public void setupInvocation() throws IOException {

     @Setup(Level.Trial)
     public void setupTrial() throws IOException {
-        decode.setupTrial(new IncreasingIntegerSupplier(SEED, bitsPerValue, BLOCK_SIZE));
+        decode.setupTrial(new IncreasingIntegerSupplier(SEED, bitsPerValue, decode.getBlockSize()));
     }

     @Benchmark
     public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
         decode.benchmark(bh);
-        metrics.recordOperation(BLOCK_SIZE, decode.getEncodedSize());
+        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
     }
 }
@@ -0,0 +1,78 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.benchmark.index.codec.tsdb;

import org.elasticsearch.benchmark.index.codec.tsdb.internal.AbstractTSDBCodecBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.DecodeBenchmark;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.LowCardinalitySupplier;
import org.elasticsearch.benchmark.index.codec.tsdb.internal.ThroughputMetrics;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

/**
 * Benchmark for decoding low cardinality data patterns.
 *
 * <p>Parameterized by number of distinct values and Zipf skew to test how
 * the decoder handles data with limited value diversity. Higher skew means
 * the most frequent value dominates more strongly.
 */
@Fork(value = 1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class DecodeLowCardinalityBenchmark {
    private static final int SEED = 17;

    @Param({ "5", "10" })
    private int distinctValues;

    @Param({ "1", "2", "3" })
    private double skew;

    private final AbstractTSDBCodecBenchmark decode;

    public DecodeLowCardinalityBenchmark() {
        this.decode = new DecodeBenchmark();
    }

    @Setup(Level.Invocation)
    public void setupInvocation() throws IOException {
        decode.setupInvocation();
    }

    @Setup(Level.Trial)
    public void setupTrial() throws IOException {
        decode.setupTrial(
            LowCardinalitySupplier.builder(SEED, decode.getBlockSize()).withDistinctValues(distinctValues).withSkew(skew).build()
        );
    }

    @Benchmark
    public void throughput(Blackhole bh, ThroughputMetrics metrics) throws IOException {
        decode.benchmark(bh);
        metrics.recordOperation(decode.getBlockSize(), decode.getEncodedSize());
    }
}
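The Zipf sampling that the benchmark javadoc describes can be sketched with cumulative rank weights proportional to 1 / rank^skew. The `LowCardinalitySketch` class and its rank-to-value mapping are illustrative assumptions; the real `LowCardinalitySupplier` may sample and map values differently:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Hypothetical low-cardinality generator: draws a rank from a Zipf
// distribution over [1, distinctValues], then maps each rank to a fixed
// long value, so only distinctValues distinct values ever appear.
public class LowCardinalitySketch {

    public static long[] generate(long seed, int blockSize, int distinctValues, double skew) {
        Random random = new Random(seed);
        // Cumulative Zipf weights: weight(rank) = 1 / rank^skew.
        double[] cumulative = new double[distinctValues];
        double total = 0;
        for (int rank = 1; rank <= distinctValues; rank++) {
            total += 1.0 / Math.pow(rank, skew);
            cumulative[rank - 1] = total;
        }
        long[] values = new long[blockSize];
        for (int i = 0; i < blockSize; i++) {
            double u = random.nextDouble() * total;
            int rank = 0;
            while (cumulative[rank] < u) {
                rank++;
            }
            values[i] = 1000L * (rank + 1); // hypothetical rank -> value mapping
        }
        return values;
    }

    public static void main(String[] args) {
        long[] block = generate(17, 128, 5, 2.0);
        Set<Long> distinct = new HashSet<>();
        for (long v : block) {
            distinct.add(v);
        }
        System.out.println("distinct values: " + distinct.size());
    }
}
```

Higher `skew` concentrates mass on rank 1, matching the javadoc's note that the most frequent value then dominates more strongly.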