
Commit 733e5c8

Merge branch 'main' into debug_DataNodeRequestSenderIT

2 parents: 90d297c + 711d37d

613 files changed: +20831 / -7786 lines


.buildkite/scripts/dra-workflow.trigger.sh

Lines changed: 26 additions & 2 deletions
@@ -2,10 +2,34 @@
 
 set -euo pipefail
 
-echo "steps:"
-
 source .buildkite/scripts/branches.sh
 
+# We use this filtering to keep different schedules for different branches
+if [ -n "${INCLUDED_BRANCHES:-}" ]; then
+  # If set, only trigger the pipeline for the specified branches
+  IFS=',' read -r -a BRANCHES <<< "${INCLUDED_BRANCHES}"
+elif [ -n "${EXCLUDED_BRANCHES:-}" ]; then
+  # If set, listed branches will be excluded from the list of branches in branches.json
+  IFS=',' read -r -a EXCLUDED_BRANCHES <<< "${EXCLUDED_BRANCHES}"
+  FILTERED_BRANCHES=()
+  for BRANCH in "${BRANCHES[@]}"; do
+    EXCLUDE=false
+    for EXCLUDED_BRANCH in "${EXCLUDED_BRANCHES[@]}"; do
+      if [ "$BRANCH" == "$EXCLUDED_BRANCH" ]; then
+        EXCLUDE=true
+        break
+      fi
+    done
+    if [ "$EXCLUDE" = false ]; then
+      FILTERED_BRANCHES+=("$BRANCH")
+    fi
+  done
+  BRANCHES=("${FILTERED_BRANCHES[@]}")
+fi
+
+
+echo "steps:"
+
 for BRANCH in "${BRANCHES[@]}"; do
   INTAKE_PIPELINE_SLUG="elasticsearch-intake"
   BUILD_JSON=$(curl -sH "Authorization: Bearer ${BUILDKITE_API_TOKEN}" "https://api.buildkite.com/v2/organizations/elastic/pipelines/${INTAKE_PIPELINE_SLUG}/builds?branch=${BRANCH}&state=passed&per_page=1" | jq '.[0] | {commit: .commit, url: .web_url}')
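
The new filtering is driven entirely by environment variables, so different Buildkite schedules can reuse the same script with different settings. A minimal usage sketch, assuming branches.sh populates BRANCHES from branches.json as above (the branch values here are illustrative, and BUILDKITE_API_TOKEN must still be set):

# Trigger only the listed branches, skipping everything else in branches.json
INCLUDED_BRANCHES="main,8.x" .buildkite/scripts/dra-workflow.trigger.sh

# Or start from the full branch list and drop specific branches
EXCLUDED_BRANCHES="7.17" .buildkite/scripts/dra-workflow.trigger.sh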

BUILDING.md

Lines changed: 12 additions & 19 deletions
@@ -82,7 +82,7 @@ In case of updating a dependency, ensure to remove the unused entry of the outda
 
 You can also automate the generation of this entry by running your build using the `--write-verification-metadata` commandline option:
 ```
->./gradlew --write-verification-metadata sha256 precommit
+./gradlew --write-verification-metadata sha256 precommit
 ```
 
 The `--write-verification-metadata` Gradle option is generally able to resolve reachable configurations,
@@ -92,10 +92,10 @@ uses the changed dependencies. In most cases, `precommit` or `check` are good ca
 We prefer sha256 checksums as md5 and sha1 are no longer considered safe. The generated entry
 will have the `origin` attribute set to `Generated by Gradle`.
 
->A manual confirmation of the Gradle generated checksums is currently not mandatory.
->If you want to add a level of verification you can manually confirm the checksum (e.g. by looking it up on the website of the library)
->Please replace the content of the `origin` attribute by `official site` in that case.
->
+> [!Tip]
+> A manual confirmation of the Gradle generated checksums is currently not mandatory.
+> If you want to add a level of verification you can manually confirm the checksum (e.g. by looking it up on the website of the library)
+> Please replace the content of the `origin` attribute by `official site` in that case.
 
 #### Custom plugin and task implementations
 
@@ -144,7 +144,7 @@ To wire this registered cluster into a `TestClusterAware` task (e.g. `RestIntegT
 Additional integration tests for certain Elasticsearch modules that are specific to a certain cluster configuration can be declared in a separate, so-called `qa` subproject of your module.
 
 The benefits of a dedicated project for these tests are:
-- `qa` projects are dedicated two specific use-cases and easier to maintain
+- `qa` projects are dedicated to specific use-cases and easier to maintain
 - It keeps the specific test logic separated from the common test logic.
 - You can run those tests in parallel to other projects of the build.
 
@@ -229,13 +229,9 @@ In addition to snapshot builds JitPack supports building Pull Requests. Simply u
 3. Run the Gradle build as needed. Keep in mind the initial resolution might take a bit longer as this needs to be built
 by JitPack in the background before we can resolve the adhoc built dependency.
 
----
-
-**NOTE**
-
-You should only use that approach locally or on a developer branch for production dependencies as we do
+> [!Note]
+> You should only use that approach locally or on a developer branch for production dependencies as we do
 not want to ship unreleased libraries into our releases.
----
 
 #### How to use a custom third party artifact?
 
@@ -265,12 +261,9 @@ allprojects {
 ```
 4. Run the Gradle build as needed with `--write-verification-metadata` to ensure the Gradle dependency verification does not fail on your custom dependency.
 
----
-**NOTE**
-
-As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated,
+> [!Note]
+> As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated,
 flat directory repositories cannot be used to override artifacts with real meta-data from other repositories declared in the build.
-For example, if Gradle finds only `jmxri-1.2.1.jar` in a flat directory repository, but `jmxri-1.2.1.pom` in another repository
+> For example, if Gradle finds only `jmxri-1.2.1.jar` in a flat directory repository, but `jmxri-1.2.1.pom` in another repository
 that supports meta-data, it will use the second repository to provide the module.
-Therefore, it is recommended to declare a version that is not resolvable from public repositories we use (e.g. Maven Central)
----
+> Therefore, it is recommended to declare a version that is not resolvable from public repositories we use (e.g. Maven Central)

CONTRIBUTING.md

Lines changed: 2 additions & 1 deletion
@@ -401,7 +401,8 @@ It is important that the only code covered by the Elastic licence is contained
 within the top-level `x-pack` directory. The build will fail its pre-commit
 checks if contributed code does not have the appropriate license headers.
 
-> **NOTE:** If you have imported the project into IntelliJ IDEA the project will
+> [!NOTE]
+> If you have imported the project into IntelliJ IDEA the project will
 > be automatically configured to add the correct license header to new source
 > files based on the source location.
 

benchmarks/build.gradle

Lines changed: 1 addition & 0 deletions
@@ -42,6 +42,7 @@ dependencies {
   api(project(':libs:h3'))
   api(project(':modules:aggregations'))
   api(project(':x-pack:plugin:esql-core'))
+  api(project(':x-pack:plugin:core'))
   api(project(':x-pack:plugin:esql'))
   api(project(':x-pack:plugin:esql:compute'))
   implementation project(path: ':libs:simdvec')

benchmarks/src/main/java/org/elasticsearch/benchmark/compute/operator/ValuesSourceReaderBenchmark.java

Lines changed: 3 additions & 1 deletion
@@ -25,6 +25,7 @@
 import org.apache.lucene.util.NumericUtils;
 import org.elasticsearch.common.breaker.NoopCircuitBreaker;
 import org.elasticsearch.common.lucene.Lucene;
+import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.BigArrays;
 import org.elasticsearch.compute.data.BlockFactory;
 import org.elasticsearch.compute.data.BytesRefBlock;
@@ -50,6 +51,7 @@
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.NumberFieldMapper;
 import org.elasticsearch.search.lookup.SearchLookup;
+import org.elasticsearch.xpack.esql.plugin.EsqlPlugin;
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
 import org.openjdk.jmh.annotations.Fork;
@@ -335,7 +337,7 @@ public void benchmark() {
             fields(name),
             List.of(new ValuesSourceReaderOperator.ShardContext(reader, () -> {
                 throw new UnsupportedOperationException("can't load _source here");
-            })),
+            }, EsqlPlugin.STORED_FIELDS_SEQUENTIAL_PROPORTION.getDefault(Settings.EMPTY))),
             0
         );
         long sum = 0;
benchmarks/src/main/java/org/elasticsearch/benchmark/esql/QueryPlanningBenchmark.java

Lines changed: 129 additions & 0 deletions
@@ -0,0 +1,129 @@
+/*
+ * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
+ * or more contributor license agreements. Licensed under the "Elastic License
+ * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
+ * Public License v 1"; you may not use this file except in compliance with, at
+ * your election, the "Elastic License 2.0", the "GNU Affero General Public
+ * License v3.0 only", or the "Server Side Public License, v 1".
+ */
+
+package org.elasticsearch.benchmark.esql;
+
+import org.elasticsearch.common.logging.LogConfigurator;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.index.IndexMode;
+import org.elasticsearch.license.XPackLicenseState;
+import org.elasticsearch.xpack.esql.analysis.Analyzer;
+import org.elasticsearch.xpack.esql.analysis.AnalyzerContext;
+import org.elasticsearch.xpack.esql.analysis.EnrichResolution;
+import org.elasticsearch.xpack.esql.analysis.Verifier;
+import org.elasticsearch.xpack.esql.core.expression.FoldContext;
+import org.elasticsearch.xpack.esql.core.type.EsField;
+import org.elasticsearch.xpack.esql.core.util.DateUtils;
+import org.elasticsearch.xpack.esql.expression.function.EsqlFunctionRegistry;
+import org.elasticsearch.xpack.esql.index.EsIndex;
+import org.elasticsearch.xpack.esql.index.IndexResolution;
+import org.elasticsearch.xpack.esql.inference.InferenceResolution;
+import org.elasticsearch.xpack.esql.optimizer.LogicalOptimizerContext;
+import org.elasticsearch.xpack.esql.optimizer.LogicalPlanOptimizer;
+import org.elasticsearch.xpack.esql.parser.EsqlParser;
+import org.elasticsearch.xpack.esql.parser.QueryParams;
+import org.elasticsearch.xpack.esql.plan.logical.LogicalPlan;
+import org.elasticsearch.xpack.esql.plugin.EsqlPlugin;
+import org.elasticsearch.xpack.esql.plugin.QueryPragmas;
+import org.elasticsearch.xpack.esql.session.Configuration;
+import org.elasticsearch.xpack.esql.telemetry.Metrics;
+import org.elasticsearch.xpack.esql.telemetry.PlanTelemetry;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.util.LinkedHashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import static java.util.Collections.emptyMap;
+import static org.elasticsearch.xpack.esql.core.type.DataType.TEXT;
+
+@Fork(1)
+@Warmup(iterations = 5)
+@Measurement(iterations = 10)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+@State(Scope.Benchmark)
+public class QueryPlanningBenchmark {
+
+    static {
+        LogConfigurator.configureESLogging();
+    }
+
+    private PlanTelemetry telemetry;
+    private EsqlParser parser;
+    private Analyzer analyzer;
+    private LogicalPlanOptimizer optimizer;
+
+    @Setup
+    public void setup() {
+
+        var config = new Configuration(
+            DateUtils.UTC,
+            Locale.US,
+            null,
+            null,
+            new QueryPragmas(Settings.EMPTY),
+            EsqlPlugin.QUERY_RESULT_TRUNCATION_MAX_SIZE.getDefault(Settings.EMPTY),
+            EsqlPlugin.QUERY_RESULT_TRUNCATION_DEFAULT_SIZE.getDefault(Settings.EMPTY),
+            "",
+            false,
+            Map.of(),
+            System.nanoTime(),
+            false
+        );
+
+        var fields = 10_000;
+        var mapping = LinkedHashMap.<String, EsField>newLinkedHashMap(fields);
+        for (int i = 0; i < fields; i++) {
+            mapping.put("field" + i, new EsField("field-" + i, TEXT, emptyMap(), true));
+        }
+
+        var esIndex = new EsIndex("test", mapping, Map.of("test", IndexMode.STANDARD));
+
+        var functionRegistry = new EsqlFunctionRegistry();
+
+        telemetry = new PlanTelemetry(functionRegistry);
+        parser = new EsqlParser();
+        analyzer = new Analyzer(
+            new AnalyzerContext(
+                config,
+                functionRegistry,
+                IndexResolution.valid(esIndex),
+                Map.of(),
+                new EnrichResolution(),
+                InferenceResolution.EMPTY
+            ),
+            new Verifier(new Metrics(functionRegistry), new XPackLicenseState(() -> 0L))
+        );
+        optimizer = new LogicalPlanOptimizer(new LogicalOptimizerContext(config, FoldContext.small()));
+    }
+
+    private LogicalPlan plan(String query) {
+        var parsed = parser.createStatement(query, new QueryParams(), telemetry);
+        var analyzed = analyzer.analyze(parsed);
+        var optimized = optimizer.optimize(analyzed);
+        return optimized;
+    }
+
+    @Benchmark
+    public void run(Blackhole blackhole) {
+        blackhole.consume(plan("FROM test | LIMIT 10"));
+    }
+}
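
This new benchmark measures parse, analyze, and logical-optimize time for a simple query over a wide (10,000-field) mapping, without executing anything. As a standard JMH benchmark in the benchmarks module it should be runnable like its siblings; a sketch, assuming the module's usual JMH runner wiring (verify the exact invocation against benchmarks/README.md):

# Run only this benchmark through the benchmarks module's JMH runner
./gradlew -p benchmarks run --args 'QueryPlanningBenchmark'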

benchmarks/src/main/java/org/elasticsearch/benchmark/indices/resolution/IndexNameExpressionResolverBenchmark.java

Lines changed: 9 additions & 8 deletions
@@ -16,7 +16,8 @@
 import org.elasticsearch.cluster.metadata.DataStream;
 import org.elasticsearch.cluster.metadata.IndexMetadata;
 import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
-import org.elasticsearch.cluster.metadata.Metadata;
+import org.elasticsearch.cluster.metadata.ProjectId;
+import org.elasticsearch.cluster.metadata.ProjectMetadata;
 import org.elasticsearch.cluster.project.DefaultProjectResolver;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
@@ -63,12 +64,12 @@ public void setUp() {
         int numDataStreams = toInt(params[0]);
         int numIndices = toInt(params[1]);
 
-        Metadata.Builder mb = Metadata.builder();
+        ProjectMetadata.Builder pmb = ProjectMetadata.builder(ProjectId.DEFAULT);
         String[] indices = new String[numIndices + numDataStreams * (numIndices + 1)];
         int position = 0;
         for (int i = 1; i <= numIndices; i++) {
             String indexName = INDEX_PREFIX + i;
-            createIndexMetadata(indexName, mb);
+            createIndexMetadata(indexName, pmb);
             indices[position++] = indexName;
         }
 
@@ -77,14 +78,14 @@ public void setUp() {
             List<Index> backingIndices = new ArrayList<>();
             for (int j = 1; j <= numIndices; j++) {
                 String backingIndexName = DataStream.getDefaultBackingIndexName(dataStreamName, j);
-                backingIndices.add(createIndexMetadata(backingIndexName, mb).getIndex());
+                backingIndices.add(createIndexMetadata(backingIndexName, pmb).getIndex());
                 indices[position++] = backingIndexName;
             }
             indices[position++] = dataStreamName;
-            mb.put(DataStream.builder(dataStreamName, backingIndices).build());
+            pmb.put(DataStream.builder(dataStreamName, backingIndices).build());
         }
         int mid = indices.length / 2;
-        clusterState = ClusterState.builder(ClusterName.DEFAULT).metadata(mb).build();
+        clusterState = ClusterState.builder(ClusterName.DEFAULT).putProjectMetadata(pmb).build();
         resolver = new IndexNameExpressionResolver(
             new ThreadContext(Settings.EMPTY),
             new SystemIndices(List.of()),
@@ -97,13 +98,13 @@ public void setUp() {
         mixedRequest = new Request(IndicesOptions.lenientExpandOpenHidden(), mixed);
     }
 
-    private IndexMetadata createIndexMetadata(String indexName, Metadata.Builder mb) {
+    private IndexMetadata createIndexMetadata(String indexName, ProjectMetadata.Builder pmb) {
         IndexMetadata indexMetadata = IndexMetadata.builder(indexName)
             .settings(Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, IndexVersion.current()))
             .numberOfShards(1)
             .numberOfReplicas(0)
             .build();
-        mb.put(indexMetadata, false);
+        pmb.put(indexMetadata, false);
         return indexMetadata;
     }
 
benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/ShardsAvailabilityHealthIndicatorBenchmark.java

Lines changed: 6 additions & 5 deletions
@@ -12,7 +12,8 @@
 import org.elasticsearch.cluster.ClusterName;
 import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexMetadata;
-import org.elasticsearch.cluster.metadata.Metadata;
+import org.elasticsearch.cluster.metadata.ProjectId;
+import org.elasticsearch.cluster.metadata.ProjectMetadata;
 import org.elasticsearch.cluster.node.DiscoveryNodes;
 import org.elasticsearch.cluster.project.DefaultProjectResolver;
 import org.elasticsearch.cluster.routing.IndexRoutingTable;
@@ -96,7 +97,7 @@ public void setUp() throws Exception {
 
         AllocationService allocationService = Allocators.createAllocationService(Settings.EMPTY);
 
-        Metadata.Builder mb = Metadata.builder();
+        ProjectMetadata.Builder pmb = ProjectMetadata.builder(ProjectId.DEFAULT);
         RoutingTable.Builder rb = RoutingTable.builder();
 
         DiscoveryNodes.Builder nb = DiscoveryNodes.builder();
@@ -160,12 +161,12 @@ public void setUp() throws Exception {
             }
 
             routingTable.add(indexRountingTableBuilder);
-            mb.put(indexMetadata, false);
+            pmb.put(indexMetadata, false);
         }
 
         ClusterState initialClusterState = ClusterState.builder(ClusterName.DEFAULT)
-            .metadata(mb)
-            .routingTable(routingTable)
+            .putProjectMetadata(pmb)
+            .putRoutingTable(pmb.getId(), routingTable.build())
             .nodes(nb)
             .build();
 