
Commit 32bdd3b

Add a cluster setting configuring TDigestExecutionHint (#96943)
* Initial import for TDigest forking.
* Fix MedianTest. More work needed for TDigestPercentile*Tests and the TDigestTest (and the rest of the tests) in the tdigest lib to pass.
* Fix Dist.
* Fix AVLTreeDigest.quantile to match Dist for uniform centroids.
* Update docs/changelog/96086.yaml
* Fix `MergingDigest.quantile` to match `Dist` on uniform distribution.
* Add merging to TDigestState.hashCode and .equals. Remove wrong asserts from tests and MergingDigest.
* Fix style violations for tdigest library.
* Fix typo.
* Fix more style violations.
* Fix more style violations.
* Fix remaining style violations in tdigest library.
* Update results in docs based on the forked tdigest.
* Fix YAML tests in aggs module.
* Fix YAML tests in x-pack/plugin.
* Skip failing V7 compat tests in modules/aggregations.
* Fix TDigest library unittests. Remove redundant serializing interfaces from the library.
* Remove YAML test versions for older releases. These tests don't address compatibility issues in mixed cluster tests, as the latter contain a mix of older and newer nodes, so the output depends on which node is picked as a data node; the forked TDigest library is not backwards compatible (it produces slightly different results).
* Fix test failures in docs and mixed cluster.
* Reduce buffer sizes in MergingDigest to avoid OOM.
* Exclude more failing V7 compatibility tests.
* Update results for JdbcCsvSpecIT tests.
* Update results for JdbcDocCsvSpecIT tests.
* Revert unrelated change.
* More test fixes.
* Use version skips instead of blacklisting in mixed cluster tests.
* Switch TDigestState back to AVLTreeDigest.
* Update docs and tests with AVLTreeDigest output.
* Update flaky test.
* Remove dead code, esp. around tracking of incoming data.
* Update docs/changelog/96086.yaml
* Delete docs/changelog/96086.yaml
* Remove explicit compression calls. This was added to prevent concurrency tests from failing, but it leads to reduced precision. Submit this to see if the concurrency tests are still failing.
* Revert "Remove explicit compression calls." This reverts commit 5352c96.
* Remove explicit compression calls to MedianAbsoluteDeviation input.
* Add unittests for AVL and merging digest accuracy.
* Fix spotless violations.
* Delete redundant tests and benchmarks.
* Fix spotless violation.
* Use the old implementation of AVLTreeDigest. The latest library version is 50% slower and less accurate, as verified by ComparisonTests.
* Update docs with latest percentile results.
* Update docs with latest percentile results.
* Remove repeated compression calls.
* Update more percentile results.
* Use approximate percentile values in integration tests. This helps with mixed cluster tests, where some of the tests were blocked.
* Fix expected percentile value in test.
* Revert in-place node updates in AVL tree. Update quantile calculations between centroids and min/max values to match v3.2.
* Add SortingDigest and HybridDigest. SortingDigest tracks all samples in an ArrayList that gets sorted for quantile calculations. This approach provides perfectly accurate results and is the most efficient implementation for up to millions of samples, at the cost of a bloated memory footprint. HybridDigest uses a SortingDigest for small sample populations, then switches to a MergingDigest. This combines the best performance and results for small sample counts with very good performance and acceptable accuracy for effectively unbounded sample counts.
* Remove deps to the 3.2 library.
* Remove unused licenses for tdigest.
* Revert changes for SortingDigest and HybridDigest. These will be submitted in a follow-up PR for enabling MergingDigest.
* Remove unused Histogram classes and unit tests. Delete dead and commented-out code, make the remaining tests run reasonably fast. Remove unused annotations, esp. SuppressWarnings.
* Remove Comparison class, not used.
* Revert "Revert changes for SortingDigest and HybridDigest." This reverts commit 2336b11.
* Use HybridDigest as the default tdigest implementation. Add SortingDigest as a simple structure for percentile calculations that tracks all data points in a sorted array. This is a fast and perfectly accurate solution that leads to bloated memory allocation. Add HybridDigest, which uses SortingDigest for small sample counts, then switches to MergingDigest. This approach delivers extreme performance and accuracy for small populations while scaling indefinitely and maintaining acceptable performance and accuracy with constant memory allocation (15kB by default). Provide knobs to switch back to AVLTreeDigest, either per query or through ClusterSettings.
* Small fixes.
* Add javadoc and tests.
* Add javadoc and tests.
* Remove special logic for singletons in the boundaries. While this helps with the case where the digest contains only singletons (perfect accuracy), it has a major problem (non-monotonic quantile function) when the first singleton is followed by a non-singleton centroid. It's preferable to revert to the old version from 3.2; inaccuracies in a singleton-only digest should be mitigated by using a sorted array for small sample counts.
* Revert changes to expected values in tests. This is due to restoring quantile functions to match head.
* Revert changes to expected values in tests. This is due to restoring quantile functions to match head.
* Tentatively restore percentile rank expected results.
* Use cdf version from 3.2. Update Dist.cdf to use interpolation; use the same cdf version in AVLTreeDigest and MergingDigest.
* Revert "Tentatively restore percentile rank expected results." This reverts commit 7718dbb.
* Revert remaining changes compared to main.
* Revert excluded V7 compat tests.
* Exclude V7 compat tests still failing.
* Exclude V7 compat tests still failing.
* Remove ClusterSettings tentatively.
* Add javadoc and tests.
* Restore bySize function in TDigest and subclasses.
* Update Dist.cdf to match the rest. Update tests.
* Revert outdated test changes.
* Revert outdated changes.
* Small fixes.
* Update docs/changelog/96794.yaml
* Make HybridDigest the default implementation.
* Update boxplot documentation.
* Restore AVLTreeDigest as the default in TDigestState. TDigest.createHybridDigest now returns the right type. The switch in TDigestState will happen in a separate PR as it requires many test updates.
* Use execution_hint in tdigest spec.
* Fix Dist.cdf for empty digest.
* Pass ClusterSettings through SearchExecutionContext.
* Bump up TransportVersion.
* Bump up TransportVersion for real.
* HybridDigest uses its final implementation during deserialization.
* Restore the right TransportVersion in TDigestState.read.
* Add dummy SearchExecutionContext factory for tests.
* Use TDigestExecutionHint instead of strings.
* Remove check for null context.
* Add link to TDigest javadoc.
* Use NodeSettings directly.
* Init executionHint to null, set before using.
* Update docs/changelog/96943.yaml
* Pass initialized executionHint to createEmptyPercentileRanksAggregator.
* Initialize TDigestExecutionHint.SETTING to "DEFAULT".
* Initialize TDigestExecutionHint to null.
* Use readOptionalWriteable/writeOptionalWriteable. Move test-only SearchExecutionContext method into a helper class under test.
* Bump up TransportVersion.
* Small fixes.
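The SortingDigest/HybridDigest idea described above (keep every sample exactly for small populations, switch to an approximating digest past a size threshold) can be sketched as follows. This is an illustrative simplification with invented names (`SortingSketch`), not the Elasticsearch implementation; the real `HybridDigest` swaps this structure for a `MergingDigest` once the sample count crosses its threshold.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the exact "small population" half of the hybrid approach:
// keep every sample and sort on demand. Perfectly accurate, but memory
// grows linearly with the number of samples, which is why a hybrid digest
// switches to an approximating structure past a threshold.
class SortingSketch {
    private final List<Double> samples = new ArrayList<>();
    private boolean sorted = true;

    void add(double v) {
        samples.add(v);
        sorted = false;
    }

    long size() {
        return samples.size();
    }

    // Quantile by rank interpolation over the sorted samples.
    double quantile(double q) {
        if (samples.isEmpty()) {
            throw new IllegalStateException("empty digest");
        }
        if (sorted == false) {
            Collections.sort(samples);
            sorted = true;
        }
        double rank = q * (samples.size() - 1);
        int lo = (int) Math.floor(rank);
        int hi = (int) Math.ceil(rank);
        double w = rank - lo;
        return samples.get(lo) * (1 - w) + samples.get(hi) * w;
    }
}
```

For five samples 1..5, `quantile(0.5)` lands exactly on the middle sample, 3.0, with no approximation error.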
1 parent 4b5f89f commit 32bdd3b

File tree

36 files changed: +237 additions, −262 deletions

benchmarks/src/main/java/org/elasticsearch/benchmark/search/QueryParserHelperBenchmark.java

Lines changed: 2 additions & 0 deletions
@@ -24,6 +24,7 @@
 import org.elasticsearch.common.compress.CompressedXContent;
 import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
 import org.elasticsearch.common.lucene.Lucene;
+import org.elasticsearch.common.settings.ClusterSettings;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
 import org.elasticsearch.core.IOUtils;
@@ -138,6 +139,7 @@ protected SearchExecutionContext buildSearchExecutionContext() {
             0,
             0,
             mapperService.getIndexSettings(),
+            ClusterSettings.createBuiltInClusterSettings(),
             null,
             (ft, fdc) -> ft.fielddataBuilder(fdc).build(new IndexFieldDataCache.None(), new NoneCircuitBreakerService()),
             mapperService,

benchmarks/src/main/java/org/elasticsearch/benchmark/search/aggregations/AggConstructionContentionBenchmark.java

Lines changed: 5 additions & 0 deletions
@@ -269,6 +269,11 @@ public IndexSettings getIndexSettings() {
         throw new UnsupportedOperationException();
     }

+    @Override
+    public ClusterSettings getClusterSettings() {
+        throw new UnsupportedOperationException();
+    }
+
     @Override
     public Optional<SortAndFormats> buildSort(List<SortBuilder<?>> sortBuilders) throws IOException {
         throw new UnsupportedOperationException();

docs/changelog/96943.yaml

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+pr: 96943
+summary: Add cluster setting to `SearchExecutionContext` to configure `TDigestExecutionHint`
+area: Aggregations
+type: enhancement
+issues: []

server/src/main/java/org/elasticsearch/TransportVersion.java

Lines changed: 2 additions & 1 deletion
@@ -142,9 +142,10 @@ private static TransportVersion registerTransportVersion(int id, String uniqueId
     public static final TransportVersion V_8_500_015 = registerTransportVersion(8_500_015, "651216c9-d54f-4189-9fe1-48d82d276863");
     public static final TransportVersion V_8_500_016 = registerTransportVersion(8_500_016, "492C94FB-AAEA-4C9E-8375-BDB67A398584");
     public static final TransportVersion V_8_500_017 = registerTransportVersion(8_500_017, "0EDCB5BA-049C-443C-8AB1-5FA58FB996FB");
+    public static final TransportVersion V_8_500_018 = registerTransportVersion(8_500_018, "827C32CE-33D9-4AC3-A773-8FB768F59EAF");

     private static class CurrentHolder {
-        private static final TransportVersion CURRENT = findCurrent(V_8_500_017);
+        private static final TransportVersion CURRENT = findCurrent(V_8_500_018);

         // finds the pluggable current version, or uses the given fallback
         private static TransportVersion findCurrent(TransportVersion fallback) {

server/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java

Lines changed: 3 additions & 1 deletion
@@ -110,6 +110,7 @@
 import org.elasticsearch.search.SearchModule;
 import org.elasticsearch.search.SearchService;
 import org.elasticsearch.search.aggregations.MultiBucketConsumerService;
+import org.elasticsearch.search.aggregations.metrics.TDigestExecutionHint;
 import org.elasticsearch.search.fetch.subphase.highlight.FastVectorHighlighter;
 import org.elasticsearch.snapshots.InternalSnapshotsInfoService;
 import org.elasticsearch.snapshots.RestoreService;
@@ -576,7 +577,8 @@ public void apply(Settings value, Settings current, Settings previous) {
         IndicesClusterStateService.SHARD_LOCK_RETRY_INTERVAL_SETTING,
         IndicesClusterStateService.SHARD_LOCK_RETRY_TIMEOUT_SETTING,
         IngestSettings.GROK_WATCHDOG_INTERVAL,
-        IngestSettings.GROK_WATCHDOG_MAX_EXECUTION_TIME
+        IngestSettings.GROK_WATCHDOG_MAX_EXECUTION_TIME,
+        TDigestExecutionHint.SETTING
     ).filter(Objects::nonNull).collect(Collectors.toSet());

     static List<SettingUpgrader<?>> BUILT_IN_SETTING_UPGRADERS = Collections.emptyList();

server/src/main/java/org/elasticsearch/index/IndexService.java

Lines changed: 1 addition & 0 deletions
@@ -650,6 +650,7 @@ public SearchExecutionContext newSearchExecutionContext(
             shardId,
             shardRequestIndex,
             indexSettings,
+            clusterService.getClusterSettings(),
             indexCache.bitsetFilterCache(),
             indexFieldData::getForField,
             mapperService(),

server/src/main/java/org/elasticsearch/index/query/SearchExecutionContext.java

Lines changed: 16 additions & 1 deletion
@@ -25,6 +25,7 @@
 import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
 import org.elasticsearch.common.lucene.search.Queries;
 import org.elasticsearch.common.regex.Regex;
+import org.elasticsearch.common.settings.ClusterSettings;
 import org.elasticsearch.core.CheckedFunction;
 import org.elasticsearch.index.Index;
 import org.elasticsearch.index.IndexSettings;
@@ -89,7 +90,8 @@ public class SearchExecutionContext extends QueryRewriteContext {
     private final SimilarityService similarityService;
     private final BitsetFilterCache bitsetFilterCache;
     private final BiFunction<MappedFieldType, FieldDataContext, IndexFieldData<?>> indexFieldDataLookup;
-    private SearchLookup lookup = null;
+    private SearchLookup lookup;
+    private ClusterSettings clusterSettings;

     private final int shardId;
     private final int shardRequestIndex;
@@ -108,6 +110,7 @@ public SearchExecutionContext(
         int shardId,
         int shardRequestIndex,
         IndexSettings indexSettings,
+        ClusterSettings clusterSettings,
         BitsetFilterCache bitsetFilterCache,
         BiFunction<MappedFieldType, FieldDataContext, IndexFieldData<?>> indexFieldDataLookup,
         MapperService mapperService,
@@ -129,6 +132,7 @@ public SearchExecutionContext(
             shardId,
             shardRequestIndex,
             indexSettings,
+            clusterSettings,
             bitsetFilterCache,
             indexFieldDataLookup,
             mapperService,
@@ -157,6 +161,7 @@ public SearchExecutionContext(SearchExecutionContext source) {
             source.shardId,
             source.shardRequestIndex,
             source.indexSettings,
+            source.clusterSettings,
             source.bitsetFilterCache,
             source.indexFieldDataLookup,
             source.mapperService,
@@ -181,6 +186,7 @@ private SearchExecutionContext(
         int shardId,
         int shardRequestIndex,
         IndexSettings indexSettings,
+        ClusterSettings clusterSettings,
         BitsetFilterCache bitsetFilterCache,
         BiFunction<MappedFieldType, FieldDataContext, IndexFieldData<?>> indexFieldDataLookup,
         MapperService mapperService,
@@ -222,6 +228,7 @@ private SearchExecutionContext(
         this.indexFieldDataLookup = indexFieldDataLookup;
         this.nestedScope = new NestedScope();
         this.searcher = searcher;
+        this.clusterSettings = clusterSettings;
     }

     private void reset() {
@@ -598,6 +605,14 @@ public final SearchExecutionContext convertToSearchExecutionContext() {
         return this;
     }

+    /**
+     * Returns the cluster settings for this context. This might return null if the
+     * context has not cluster scope.
+     */
+    public ClusterSettings getClusterSettings() {
+        return clusterSettings;
+    }
+
     /** Return the current {@link IndexReader}, or {@code null} if no index reader is available,
      * for instance if this rewrite context is used to index queries (percolation).
      */

server/src/main/java/org/elasticsearch/search/aggregations/metrics/MedianAbsoluteDeviationAggregationBuilder.java

Lines changed: 15 additions & 8 deletions
@@ -54,7 +54,7 @@ public static void registerAggregators(ValuesSourceRegistry.Builder builder) {
     }

     private double compression = 1000d;
-    private TDigestExecutionHint executionHint = TDigestExecutionHint.DEFAULT;
+    private TDigestExecutionHint executionHint = null;

     public MedianAbsoluteDeviationAggregationBuilder(String name) {
         super(name);
@@ -63,9 +63,11 @@ public MedianAbsoluteDeviationAggregationBuilder(String name) {
     public MedianAbsoluteDeviationAggregationBuilder(StreamInput in) throws IOException {
         super(in);
         compression = in.readDouble();
-        executionHint = in.getTransportVersion().onOrAfter(TransportVersion.V_8_500_014)
-            ? TDigestExecutionHint.readFrom(in)
-            : TDigestExecutionHint.HIGH_ACCURACY;
+        if (in.getTransportVersion().onOrAfter(TransportVersion.V_8_500_018)) {
+            executionHint = in.readOptionalWriteable(TDigestExecutionHint::readFrom);
+        } else {
+            executionHint = TDigestExecutionHint.HIGH_ACCURACY;
+        }
     }

     protected MedianAbsoluteDeviationAggregationBuilder(
@@ -124,8 +126,8 @@ protected ValuesSourceType defaultValueSourceType() {
     @Override
     protected void innerWriteTo(StreamOutput out) throws IOException {
         out.writeDouble(compression);
-        if (out.getTransportVersion().onOrAfter(TransportVersion.V_8_500_014)) {
-            executionHint.writeTo(out);
+        if (out.getTransportVersion().onOrAfter(TransportVersion.V_8_500_018)) {
+            out.writeOptionalWriteable(executionHint);
         }
     }

@@ -140,6 +142,9 @@ protected ValuesSourceAggregatorFactory innerBuild(
         MedianAbsoluteDeviationAggregatorSupplier aggregatorSupplier = context.getValuesSourceRegistry()
             .getAggregator(REGISTRY_KEY, config);

+        if (executionHint == null) {
+            executionHint = TDigestExecutionHint.parse(context.getClusterSettings().get(TDigestExecutionHint.SETTING));
+        }
         return new MedianAbsoluteDeviationAggregatorFactory(
             name,
             config,
@@ -156,7 +161,9 @@ protected ValuesSourceAggregatorFactory innerBuild(
     @Override
     protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {
         builder.field(COMPRESSION_FIELD.getPreferredName(), compression);
-        builder.field(EXECUTION_HINT_FIELD.getPreferredName(), executionHint);
+        if (executionHint != null) {
+            builder.field(EXECUTION_HINT_FIELD.getPreferredName(), executionHint);
+        }
         return builder;
     }

@@ -171,7 +178,7 @@ public boolean equals(Object obj) {
         if (obj == null || getClass() != obj.getClass()) return false;
         if (super.equals(obj) == false) return false;
         MedianAbsoluteDeviationAggregationBuilder other = (MedianAbsoluteDeviationAggregationBuilder) obj;
-        return Objects.equals(compression, other.compression) && executionHint.equals(other.executionHint);
+        return Objects.equals(compression, other.compression) && Objects.equals(executionHint, other.executionHint);
     }

     @Override
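The serialization change above follows a common wire-format pattern: a now-nullable field is written as a presence flag plus the payload (`writeOptionalWriteable`/`readOptionalWriteable`), gated on the transport version so older nodes keep their fixed fallback. A minimal stand-alone sketch of the presence-flag encoding, using plain `java.io` streams rather than Elasticsearch's `StreamOutput`/`StreamInput` (the class and method names here are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of optional-value encoding: one presence byte, then the payload.
// This mimics the shape of writeOptionalWriteable/readOptionalWriteable,
// not the actual Elasticsearch wire format.
class OptionalCodec {
    static void writeOptionalInt(DataOutputStream out, Integer v) throws IOException {
        if (v == null) {
            out.writeBoolean(false);   // absent: reader sees the flag and returns null
        } else {
            out.writeBoolean(true);    // present: flag followed by the value
            out.writeInt(v);
        }
    }

    static Integer readOptionalInt(DataInputStream in) throws IOException {
        return in.readBoolean() ? in.readInt() : null;
    }

    // Serialize then deserialize, as a round-trip check.
    static Integer roundTrip(Integer v) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeOptionalInt(new DataOutputStream(bytes), v);
        return readOptionalInt(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
    }
}
```

Both a present value and `null` survive the round trip, which is what lets the builder leave `executionHint` unset until the cluster default is resolved.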

server/src/main/java/org/elasticsearch/search/aggregations/metrics/PercentilesConfig.java

Lines changed: 18 additions & 11 deletions
@@ -124,14 +124,14 @@ public static class TDigest extends PercentilesConfig {
     static final double DEFAULT_COMPRESSION = 100.0;
     private double compression;

-    private TDigestExecutionHint executionHint = TDigestExecutionHint.DEFAULT;
+    private TDigestExecutionHint executionHint;

     public TDigest() {
         this(DEFAULT_COMPRESSION);
     }

     public TDigest(double compression) {
-        this(compression, TDigestExecutionHint.DEFAULT);
+        this(compression, null);
     }

     public TDigest(double compression, TDigestExecutionHint executionHint) {
@@ -143,8 +143,8 @@ public TDigest(double compression, TDigestExecutionHint executionHint) {
     TDigest(StreamInput in) throws IOException {
         this(
             in.readDouble(),
-            in.getTransportVersion().onOrAfter(TransportVersion.V_8_500_014)
-                ? TDigestExecutionHint.readFrom(in)
+            in.getTransportVersion().onOrAfter(TransportVersion.V_8_500_018)
+                ? in.readOptionalWriteable(TDigestExecutionHint::readFrom)
                 : TDigestExecutionHint.HIGH_ACCURACY
         );
     }
@@ -164,7 +164,10 @@ public void parseExecutionHint(String executionHint) {
         this.executionHint = TDigestExecutionHint.parse(executionHint);
     }

-    public TDigestExecutionHint getExecutionHint() {
+    public TDigestExecutionHint getExecutionHint(AggregationContext context) {
+        if (executionHint == null) {
+            executionHint = TDigestExecutionHint.parse(context.getClusterSettings().get(TDigestExecutionHint.SETTING));
+        }
         return executionHint;
     }

@@ -186,7 +189,7 @@ public Aggregator createPercentilesAggregator(
             parent,
             values,
             compression,
-            executionHint,
+            getExecutionHint(context),
             keyed,
             formatter,
             metadata
@@ -222,7 +225,7 @@ Aggregator createPercentileRanksAggregator(
             parent,
             values,
             compression,
-            executionHint,
+            getExecutionHint(context),
             keyed,
             formatter,
             metadata
@@ -237,22 +240,26 @@ public InternalNumericMetricsAggregation.MultiValue createEmptyPercentileRanksAg
         DocValueFormat formatter,
         Map<String, Object> metadata
     ) {
-        return InternalTDigestPercentileRanks.empty(name, values, compression, executionHint, keyed, formatter, metadata);
+        TDigestExecutionHint hint = executionHint == null ? TDigestExecutionHint.DEFAULT : executionHint;
+        return InternalTDigestPercentileRanks.empty(name, values, compression, hint, keyed, formatter, metadata);
     }

     @Override
     public void writeTo(StreamOutput out) throws IOException {
         super.writeTo(out);
         out.writeDouble(compression);
-        if (out.getTransportVersion().onOrAfter(TransportVersion.V_8_500_014)) {
-            executionHint.writeTo(out);
+        if (out.getTransportVersion().onOrAfter(TransportVersion.V_8_500_018)) {
+            out.writeOptionalWriteable(executionHint);
         }
     }

     @Override
     public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
         builder.startObject(getMethod().toString());
         builder.field(PercentilesMethod.COMPRESSION_FIELD.getPreferredName(), compression);
+        if (executionHint != null) {
+            builder.field(PercentilesMethod.EXECUTION_HINT_FIELD.getPreferredName(), executionHint);
+        }
         builder.endObject();
         return builder;
     }
@@ -264,7 +271,7 @@ public boolean equals(Object obj) {
         if (super.equals(obj) == false) return false;

         TDigest other = (TDigest) obj;
-        return compression == other.getCompression() && executionHint.equals(other.getExecutionHint());
+        return compression == other.getCompression() && Objects.equals(executionHint, other.executionHint);
     }

     @Override
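The `getExecutionHint(AggregationContext)` change above implements a simple resolution rule: an explicitly requested hint wins; otherwise the cluster-level setting is looked up lazily on first use and cached in the field. A minimal sketch of that rule with stand-in types (`HintResolver`, `ExecutionHint`, and the supplier are invented for this example, not Elasticsearch classes):

```java
import java.util.Locale;
import java.util.function.Supplier;

// Sketch of "explicit value, else cluster default": a null field means
// "not set on the request"; the cluster-level default is resolved lazily
// on first access and then cached.
class HintResolver {
    enum ExecutionHint { DEFAULT, HIGH_ACCURACY }

    private ExecutionHint executionHint;            // null = not set explicitly
    private final Supplier<String> clusterSetting;  // stand-in for reading the cluster setting

    HintResolver(ExecutionHint explicit, Supplier<String> clusterSetting) {
        this.executionHint = explicit;
        this.clusterSetting = clusterSetting;
    }

    ExecutionHint getExecutionHint() {
        if (executionHint == null) {
            // Fall back to the cluster-wide default, e.g. the value of
            // "search.aggs.tdigest_execution_hint".
            executionHint = ExecutionHint.valueOf(clusterSetting.get().toUpperCase(Locale.ROOT));
        }
        return executionHint;
    }
}
```

Keeping the field nullable until build time is what makes the cluster setting act as a default rather than an override: a hint carried on the request (or over the wire) is never replaced.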

server/src/main/java/org/elasticsearch/search/aggregations/metrics/TDigestExecutionHint.java

Lines changed: 9 additions & 0 deletions
@@ -11,6 +11,7 @@
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.settings.Setting;

 import java.io.IOException;

@@ -21,6 +22,14 @@ public enum TDigestExecutionHint implements Writeable {
     DEFAULT(0), // Use a TDigest that is optimized for performance, with a small penalty in accuracy.
     HIGH_ACCURACY(1); // Use a TDigest that is optimize for accuracy, at the expense of performance.

+    public static final Setting<String> SETTING = Setting.simpleString(
+        "search.aggs.tdigest_execution_hint",
+        TDigestExecutionHint.DEFAULT.toString(),
+        TDigestExecutionHint::parse,
+        Setting.Property.NodeScope,
+        Setting.Property.Dynamic
+    );
+
     TDigestExecutionHint(int id) {
         this.id = id;
     }
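The new `SETTING` above uses `TDigestExecutionHint::parse` as a validator, so invalid values for `search.aggs.tdigest_execution_hint` are rejected when the setting is applied. A sketch of what such an enum-backed, case-insensitive parse can look like (the `Hint` enum and its error message are invented for illustration; the actual `TDigestExecutionHint.parse` may differ):

```java
import java.util.Locale;

// Sketch of an enum-backed setting value with a validating parse(),
// similar in spirit to TDigestExecutionHint.parse; illustrative only.
enum Hint {
    DEFAULT,        // optimized for performance, small accuracy penalty
    HIGH_ACCURACY;  // optimized for accuracy, at the expense of performance

    static Hint parse(String value) {
        switch (value.toLowerCase(Locale.ROOT)) {
            case "default":
                return DEFAULT;
            case "high_accuracy":
                return HIGH_ACCURACY;
            default:
                throw new IllegalArgumentException("invalid execution hint [" + value + "]");
        }
    }
}
```

Because the parser runs inside the setting's validator, a dynamic cluster-settings update with a bad value fails at update time instead of at query time.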
