
Commit 78ce5ea

Author: elasticsearchmachine

Merge remote-tracking branch 'origin/main' into lucene_snapshot

2 parents: 7423500 + 80946ce

File tree: 40 files changed, +2322 −142 lines changed

BUILDING.md

Lines changed: 11 additions & 18 deletions
````diff
@@ -82,7 +82,7 @@ In case of updating a dependency, ensure to remove the unused entry of the outda
 
 You can also automate the generation of this entry by running your build using the `--write-verification-metadata` commandline option:
 ```
->./gradlew --write-verification-metadata sha256 precommit
+./gradlew --write-verification-metadata sha256 precommit
 ```
 
 The `--write-verification-metadata` Gradle option is generally able to resolve reachable configurations,
@@ -92,10 +92,10 @@ uses the changed dependencies. In most cases, `precommit` or `check` are good ca
 We prefer sha256 checksums as md5 and sha1 are not considered safe anymore these days. The generated entry
 will have the `origin` attribute been set to `Generated by Gradle`.
 
->A manual confirmation of the Gradle generated checksums is currently not mandatory.
->If you want to add a level of verification you can manually confirm the checksum (e.g. by looking it up on the website of the library)
->Please replace the content of the `origin` attribute by `official site` in that case.
->
+> [!Tip]
+> A manual confirmation of the Gradle generated checksums is currently not mandatory.
+> If you want to add a level of verification you can manually confirm the checksum (e.g. by looking it up on the website of the library)
+> Please replace the content of the `origin` attribute by `official site` in that case.
 
 #### Custom plugin and task implementations
 
@@ -229,13 +229,9 @@ In addition to snapshot builds JitPack supports building Pull Requests. Simply u
 3. Run the Gradle build as needed. Keep in mind the initial resolution might take a bit longer as this needs to be built
 by JitPack in the background before we can resolve the adhoc built dependency.
 
----
-
-**NOTE**
-
-You should only use that approach locally or on a developer branch for production dependencies as we do
+> [!Note]
+> You should only use that approach locally or on a developer branch for production dependencies as we do
 not want to ship unreleased libraries into our releases.
----
 
 #### How to use a custom third party artifact?
 
@@ -265,12 +261,9 @@ allprojects {
 ```
 4. Run the Gradle build as needed with `--write-verification-metadata` to ensure the Gradle dependency verification does not fail on your custom dependency.
 
----
-**NOTE**
-
-As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated,
+> [!Note]
+> As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated,
 flat directory repositories cannot be used to override artifacts with real meta-data from other repositories declared in the build.
-For example, if Gradle finds only `jmxri-1.2.1.jar` in a flat directory repository, but `jmxri-1.2.1.pom` in another repository
+> For example, if Gradle finds only `jmxri-1.2.1.jar` in a flat directory repository, but `jmxri-1.2.1.pom` in another repository
 that supports meta-data, it will use the second repository to provide the module.
-Therefore, it is recommended to declare a version that is not resolvable from public repositories we use (e.g. Maven Central)
+> Therefore, it is recommended to declare a version that is not resolvable from public repositories we use (e.g. Maven Central)
````

CONTRIBUTING.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -401,7 +401,8 @@ It is important that the only code covered by the Elastic licence is contained
 within the top-level `x-pack` directory. The build will fail its pre-commit
 checks if contributed code does not have the appropriate license headers.
 
-> **NOTE:** If you have imported the project into IntelliJ IDEA the project will
+> [!NOTE]
+> If you have imported the project into IntelliJ IDEA the project will
 > be automatically configured to add the correct license header to new source
 > files based on the source location.
```

modules/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3Service.java

Lines changed: 15 additions & 1 deletion
```diff
@@ -254,7 +254,21 @@ protected S3ClientBuilder buildClientBuilder(S3ClientSettings clientSettings, Sd
         }
 
         if (Strings.hasLength(clientSettings.endpoint)) {
-            s3clientBuilder.endpointOverride(URI.create(clientSettings.endpoint));
+            String endpoint = clientSettings.endpoint;
+            if ((endpoint.startsWith("http://") || endpoint.startsWith("https://")) == false) {
+                // The SDK does not know how to interpret endpoints without a scheme prefix and will error. Therefore, when the scheme is
+                // absent, we'll supply HTTPS as a default to avoid errors.
+                // See https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/client-configuration.html#client-config-other-diffs
+                endpoint = "https://" + endpoint;
+                LOGGER.warn(
+                    """
+                        found S3 client with endpoint [{}] that is missing a scheme, guessing it should use 'https://'; \
+                        to suppress this warning, add a scheme prefix to the [{}] setting on this node""",
+                    clientSettings.endpoint,
+                    S3ClientSettings.ENDPOINT_SETTING.getConcreteSettingForNamespace("CLIENT_NAME").getKey()
+                );
+            }
+            s3clientBuilder.endpointOverride(URI.create(endpoint));
         }
 
         return s3clientBuilder;
```
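The rewrite applies only when the configured endpoint lacks a scheme; endpoints already prefixed with `http://` or `https://` pass through unchanged. A minimal standalone sketch of the same defaulting rule (the `normalizeEndpoint` helper below is hypothetical, written for illustration; the patch inlines this logic in `buildClientBuilder`):

```java
// Hypothetical helper mirroring the scheme-defaulting rule in the patch above.
static String normalizeEndpoint(String endpoint) {
    if ((endpoint.startsWith("http://") || endpoint.startsWith("https://")) == false) {
        return "https://" + endpoint; // no scheme given: assume HTTPS, as the SDK cannot guess one
    }
    return endpoint; // scheme already present: leave untouched
}

// normalizeEndpoint("storage.example.com")   -> "https://storage.example.com"
// normalizeEndpoint("http://localhost:9090") -> "http://localhost:9090"
```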

modules/repository-s3/src/test/java/org/elasticsearch/repositories/s3/S3ServiceTests.java

Lines changed: 22 additions & 0 deletions
```diff
@@ -12,7 +12,9 @@
 import software.amazon.awssdk.awscore.exception.AwsServiceException;
 import software.amazon.awssdk.core.retry.RetryPolicyContext;
 import software.amazon.awssdk.core.retry.conditions.RetryCondition;
+import software.amazon.awssdk.http.SdkHttpClient;
 import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.s3.S3Client;
 import software.amazon.awssdk.services.s3.endpoints.S3EndpointParams;
 import software.amazon.awssdk.services.s3.endpoints.internal.DefaultS3EndpointProvider;
 import software.amazon.awssdk.services.s3.model.S3Exception;
@@ -29,8 +31,10 @@
 import org.elasticsearch.watcher.ResourceWatcherService;
 
 import java.io.IOException;
+import java.net.URI;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import static org.hamcrest.Matchers.equalTo;
 import static org.mockito.Mockito.mock;
 
 public class S3ServiceTests extends ESTestCase {
@@ -217,4 +221,22 @@ public void testGetClientRegionFallbackToUsEast1() {
             );
         }
     }
+
+    public void testEndpointOverrideSchemeDefaultsToHttpsWhenNotSpecified() {
+        final S3Service s3Service = new S3Service(
+            mock(Environment.class),
+            Settings.EMPTY,
+            mock(ResourceWatcherService.class),
+            () -> Region.of("es-test-region")
+        );
+        final String endpointWithoutScheme = randomIdentifier() + ".ignore";
+        S3Client s3Client = s3Service.buildClient(
+            S3ClientSettings.getClientSettings(
+                Settings.builder().put("s3.client.test-client.endpoint", endpointWithoutScheme).build(),
+                "test-client"
+            ),
+            mock(SdkHttpClient.class)
+        );
+        assertThat(s3Client.serviceClientConfiguration().endpointOverride().get(), equalTo(URI.create("https://" + endpointWithoutScheme)));
+    }
 }
```

muted-tests.yml

Lines changed: 9 additions & 0 deletions
```diff
@@ -450,6 +450,15 @@ tests:
 - class: org.elasticsearch.xpack.ccr.action.ShardFollowTaskReplicationTests
   method: testChangeFollowerHistoryUUID
   issue: https://github.com/elastic/elasticsearch/issues/127680
+- class: org.elasticsearch.action.admin.indices.diskusage.IndexDiskUsageAnalyzerTests
+  method: testKnnVectors
+  issue: https://github.com/elastic/elasticsearch/issues/127689
+- class: org.elasticsearch.snapshots.SnapshotShutdownIT
+  method: testSnapshotShutdownProgressTracker
+  issue: https://github.com/elastic/elasticsearch/issues/127690
+- class: org.elasticsearch.xpack.core.template.IndexTemplateRegistryRolloverIT
+  method: testRollover
+  issue: https://github.com/elastic/elasticsearch/issues/127692
 
 # Examples:
 #
```

rest-api-spec/README.markdown

Lines changed: 2 additions & 2 deletions
````diff
@@ -58,8 +58,8 @@ The specification contains:
 * Request parameters
 * Request body specification
 
-**NOTE**
-If an API is stable but it response should be treated as an arbitrary map of key values please notate this as followed
+> [!Note]
+> If an API is stable but it response should be treated as an arbitrary map of key values please notate this as followed
 
 ```json
 {
````
server/src/main/java/org/elasticsearch/common/metrics/ExponentialBucketHistogram.java

Lines changed: 131 additions & 0 deletions
New file:

```java
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.common.metrics;

import java.util.Arrays;
import java.util.concurrent.atomic.LongAdder;

/**
 * A histogram with a fixed number of buckets of exponentially increasing width.
 * <p>
 * The bucket boundaries are defined by increasing powers of two, e.g.
 * <code>
 * (-&infin;, 1), [1, 2), [2, 4), [4, 8), ..., [2^({@link #bucketCount}-2), &infin;)
 * </code>
 * There are {@link #bucketCount} buckets.
 */
public class ExponentialBucketHistogram {

    private final int bucketCount;
    private final long lastBucketLowerBound;

    public static int[] getBucketUpperBounds(int bucketCount) {
        int[] bounds = new int[bucketCount - 1];
        for (int i = 0; i < bounds.length; i++) {
            bounds[i] = 1 << i;
        }
        return bounds;
    }

    private int getBucket(long observedValue) {
        if (observedValue <= 0) {
            return 0;
        } else if (lastBucketLowerBound <= observedValue) {
            return bucketCount - 1;
        } else {
            return Long.SIZE - Long.numberOfLeadingZeros(observedValue);
        }
    }

    private final LongAdder[] buckets;

    public ExponentialBucketHistogram(int bucketCount) {
        if (bucketCount < 2 || bucketCount > Integer.SIZE) {
            throw new IllegalArgumentException("Bucket count must be in [2, " + Integer.SIZE + "], got " + bucketCount);
        }
        this.bucketCount = bucketCount;
        this.lastBucketLowerBound = getBucketUpperBounds(bucketCount)[bucketCount - 2];
        buckets = new LongAdder[bucketCount];
        for (int i = 0; i < bucketCount; i++) {
            buckets[i] = new LongAdder();
        }
    }

    public int[] calculateBucketUpperBounds() {
        return getBucketUpperBounds(bucketCount);
    }

    public void addObservation(long observedValue) {
        buckets[getBucket(observedValue)].increment();
    }

    /**
     * @return An array of frequencies of handling times in buckets with upper bounds as returned by {@link #calculateBucketUpperBounds()},
     * plus an extra bucket for handling times longer than the longest upper bound.
     */
    public long[] getSnapshot() {
        final long[] histogram = new long[bucketCount];
        for (int i = 0; i < bucketCount; i++) {
            histogram[i] = buckets[i].longValue();
        }
        return histogram;
    }

    /**
     * Calculate the Nth percentile value
     *
     * @param percentile The percentile as a fraction (in [0, 1.0])
     * @return A value greater than the specified fraction of values in the histogram
     * @throws IllegalArgumentException if the requested percentile is invalid
     */
    public long getPercentile(float percentile) {
        return getPercentile(percentile, getSnapshot(), calculateBucketUpperBounds());
    }

    /**
     * Calculate the Nth percentile value
     *
     * @param percentile The percentile as a fraction (in [0, 1.0])
     * @param snapshot An array of frequencies of handling times in buckets with upper bounds as per {@link #calculateBucketUpperBounds()}
     * @param bucketUpperBounds The upper bounds of the buckets in the histogram, as per {@link #calculateBucketUpperBounds()}
     * @return A value greater than the specified fraction of values in the histogram
     * @throws IllegalArgumentException if the requested percentile is invalid
     */
    public long getPercentile(float percentile, long[] snapshot, int[] bucketUpperBounds) {
        assert snapshot.length == bucketCount && bucketUpperBounds.length == bucketCount - 1;
        if (percentile < 0 || percentile > 1) {
            throw new IllegalArgumentException("Requested percentile must be in [0, 1.0], percentile=" + percentile);
        }
        final long totalCount = Arrays.stream(snapshot).sum();
        long percentileIndex = (long) Math.ceil(totalCount * percentile);
        // Find which bucket has the Nth percentile value and return the upper bound value.
        for (int i = 0; i < bucketCount; i++) {
            percentileIndex -= snapshot[i];
            if (percentileIndex <= 0) {
                if (i == snapshot.length - 1) {
                    return Long.MAX_VALUE;
                } else {
                    return bucketUpperBounds[i];
                }
            }
        }
        assert false : "We shouldn't ever get here";
        return Long.MAX_VALUE;
    }

    /**
     * Clear all values in the histogram (non-atomic)
     */
    public void clear() {
        for (int i = 0; i < bucketCount; i++) {
            buckets[i].reset();
        }
    }
}
```
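A minimal usage sketch of the new class, assuming only the API shown above; the bucket placements and the percentile result follow from the boundary scheme `(-∞, 1), [1, 2), [2, 4), ...` documented in the Javadoc:

```java
// 18 buckets: (-inf, 1), [1, 2), [2, 4), ..., [2^15, 2^16), [2^16, +inf)
ExponentialBucketHistogram histogram = new ExponentialBucketHistogram(18);

histogram.addObservation(3);   // lands in bucket [2, 4)
histogram.addObservation(100); // lands in bucket [64, 128)

long[] counts = histogram.getSnapshot();  // length 18, one frequency per bucket
long p90 = histogram.getPercentile(0.9f); // 128: upper bound of the bucket holding
                                          // the slower of the two observations
```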

server/src/main/java/org/elasticsearch/common/network/HandlingTimeTracker.java

Lines changed: 6 additions & 44 deletions
```diff
@@ -9,58 +9,20 @@
 
 package org.elasticsearch.common.network;
 
-import java.util.concurrent.atomic.LongAdder;
+import org.elasticsearch.common.metrics.ExponentialBucketHistogram;
 
 /**
  * Tracks how long message handling takes on a transport thread as a histogram with fixed buckets.
  */
-public class HandlingTimeTracker {
+public class HandlingTimeTracker extends ExponentialBucketHistogram {
 
-    public static int[] getBucketUpperBounds() {
-        int[] bounds = new int[17];
-        for (int i = 0; i < bounds.length; i++) {
-            bounds[i] = 1 << i;
-        }
-        return bounds;
-    }
+    public static final int BUCKET_COUNT = 18;
 
-    private static int getBucket(long handlingTimeMillis) {
-        if (handlingTimeMillis <= 0) {
-            return 0;
-        } else if (LAST_BUCKET_LOWER_BOUND <= handlingTimeMillis) {
-            return BUCKET_COUNT - 1;
-        } else {
-            return Long.SIZE - Long.numberOfLeadingZeros(handlingTimeMillis);
-        }
+    public static int[] getBucketUpperBounds() {
+        return ExponentialBucketHistogram.getBucketUpperBounds(BUCKET_COUNT);
     }
 
-    public static final int BUCKET_COUNT = getBucketUpperBounds().length + 1;
-
-    private static final long LAST_BUCKET_LOWER_BOUND = getBucketUpperBounds()[BUCKET_COUNT - 2];
-
-    private final LongAdder[] buckets;
-
     public HandlingTimeTracker() {
-        buckets = new LongAdder[BUCKET_COUNT];
-        for (int i = 0; i < BUCKET_COUNT; i++) {
-            buckets[i] = new LongAdder();
-        }
+        super(BUCKET_COUNT);
     }
-
-    public void addHandlingTime(long handlingTimeMillis) {
-        buckets[getBucket(handlingTimeMillis)].increment();
-    }
-
-    /**
-     * @return An array of frequencies of handling times in buckets with upper bounds as returned by {@link #getBucketUpperBounds()}, plus
-     * an extra bucket for handling times longer than the longest upper bound.
-     */
-    public long[] getHistogram() {
-        final long[] histogram = new long[BUCKET_COUNT];
-        for (int i = 0; i < BUCKET_COUNT; i++) {
-            histogram[i] = buckets[i].longValue();
-        }
-        return histogram;
-    }
-
 }
```
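With this refactor the tracker inherits its bucket arithmetic from `ExponentialBucketHistogram`, which derives the bucket index from the position of the value's highest set bit. A worked example of that computation (not part of the diff, just the arithmetic it relies on):

```java
long handlingTimeMillis = 100;  // binary 1100100: highest set bit is bit 6
int bucket = Long.SIZE - Long.numberOfLeadingZeros(handlingTimeMillis); // 64 - 57 = 7
// Bucket 7 spans [2^6, 2^7) = [64, 128), which contains 100.
```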

server/src/main/java/org/elasticsearch/http/AbstractHttpServerTransport.java

Lines changed: 1 addition & 1 deletion
```diff
@@ -460,7 +460,7 @@ public void incomingRequest(final HttpRequest httpRequest, final HttpChannel htt
             handleIncomingRequest(httpRequest, trackingChannel, httpRequest.getInboundException());
         } finally {
             final long took = threadPool.rawRelativeTimeInMillis() - startTime;
-            networkService.getHandlingTimeTracker().addHandlingTime(took);
+            networkService.getHandlingTimeTracker().addObservation(took);
             final long logThreshold = slowLogThresholdMs;
             if (logThreshold > 0 && took > logThreshold) {
                 logger.warn(
```

server/src/main/java/org/elasticsearch/http/HttpRouteStatsTracker.java

Lines changed: 2 additions & 2 deletions
```diff
@@ -91,7 +91,7 @@ public void addResponseStats(long contentLength) {
     }
 
     public void addResponseTime(long timeMillis) {
-        responseTimeTracker.addHandlingTime(timeMillis);
+        responseTimeTracker.addObservation(timeMillis);
     }
 
     public HttpRouteStats getStats() {
@@ -102,7 +102,7 @@ public HttpRouteStats getStats() {
             responseStats.count().longValue(),
             responseStats.totalSize().longValue(),
             responseStats.getHistogram(),
-            responseTimeTracker.getHistogram()
+            responseTimeTracker.getSnapshot()
         );
     }
 }
```
