Commit bc53734

Merge branch 'main' into inference-index-fix
2 parents: e67e62d + 6f60880

66 files changed: +4549, -220 lines

docs/changelog/114601.yaml (new file, 6 additions)

+pr: 114601
+summary: Support semantic_text in object fields
+area: Vector Search
+type: bug
+issues:
+ - 114401

docs/changelog/115031.yaml (new file, 5 additions)

+pr: 115031
+summary: Bool query early termination should also consider `must_not` clauses
+area: Search
+type: enhancement
+issues: []

docs/reference/cluster/allocation-explain.asciidoc (7 additions, 4 deletions)

@@ -159,6 +159,7 @@ node.
 <5> The decider which led to the `no` decision for the node.
 <6> An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. In this example, a newly created index has <<indices-get-settings,an index setting>> that requires that it only be allocated to a node named `nonexistent_node`, which does not exist, so the index is unable to allocate.
 
+[[maximum-number-of-retries-exceeded]]
 ====== Maximum number of retries exceeded
 
 The following response contains an allocation explanation for an unassigned
@@ -195,17 +196,19 @@ primary shard that has reached the maximum number of allocation retry attempts.
       {
         "decider": "max_retry",
         "decision" : "NO",
-        "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
+        "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
       }
     ]
   }
 ]
 }
 ----
 // NOTCONSOLE
-
-If decider message indicates a transient allocation issue, use
-the <<cluster-reroute,cluster reroute>> API to retry allocation.
+When Elasticsearch is unable to allocate a shard, it will attempt to retry allocation up to
+the maximum number of retries allowed. After this, Elasticsearch will stop attempting to
+allocate the shard in order to prevent infinite retries which may impact cluster
+performance. Run the <<cluster-reroute,cluster reroute>> API to retry allocation, which
+will allocate the shard if the issue preventing allocation has been resolved.
 
 [[no-valid-shard-copy]]
 ====== No valid shard copy

docs/reference/snapshot-restore/register-repository.asciidoc (5 additions, 4 deletions)

@@ -248,10 +248,11 @@ that you have an archive copy of its contents that you can use to recreate the
 repository in its current state at a later date.
 
 You must ensure that {es} does not write to the repository while you are taking
-the backup of its contents. You can do this by unregistering it, or registering
-it with `readonly: true`, on all your clusters. If {es} writes any data to the
-repository during the backup then the contents of the backup may not be
-consistent and it may not be possible to recover any data from it in future.
+the backup of its contents. If {es} writes any data to the repository during
+the backup then the contents of the backup may not be consistent and it may not
+be possible to recover any data from it in future. Prevent writes to the
+repository by unregistering the repository from the cluster which has write
+access to it.
 
 Alternatively, if your repository supports it, you may take an atomic snapshot
 of the underlying filesystem and then take a backup of this filesystem

modules/dot-prefix-validation/src/main/java/org/elasticsearch/validation/DotPrefixValidator.java (21 additions, 4 deletions)

@@ -56,14 +56,19 @@ public abstract class DotPrefixValidator<RequestType> implements MappedActionFil
      *
      * .elastic-connectors-* is used by enterprise search
      * .ml-* is used by ML
+     * .slo-observability-* is used by Observability
      */
     private static Set<String> IGNORED_INDEX_NAMES = Set.of(
         ".elastic-connectors-v1",
         ".elastic-connectors-sync-jobs-v1",
         ".ml-state",
         ".ml-anomalies-unrelated"
     );
-    private static Set<Pattern> IGNORED_INDEX_PATTERNS = Set.of(Pattern.compile("\\.ml-state-\\d+"));
+    private static Set<Pattern> IGNORED_INDEX_PATTERNS = Set.of(
+        Pattern.compile("\\.ml-state-\\d+"),
+        Pattern.compile("\\.slo-observability\\.sli-v\\d+.*"),
+        Pattern.compile("\\.slo-observability\\.summary-v\\d+.*")
+    );
 
     DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(DotPrefixValidator.class);
 
@@ -99,10 +104,11 @@ void validateIndices(@Nullable Set<String> indices) {
             if (Strings.hasLength(index)) {
                 char c = getFirstChar(index);
                 if (c == '.') {
-                    if (IGNORED_INDEX_NAMES.contains(index)) {
+                    final String strippedName = stripDateMath(index);
+                    if (IGNORED_INDEX_NAMES.contains(strippedName)) {
                         return;
                     }
-                    if (IGNORED_INDEX_PATTERNS.stream().anyMatch(p -> p.matcher(index).matches())) {
+                    if (IGNORED_INDEX_PATTERNS.stream().anyMatch(p -> p.matcher(strippedName).matches())) {
                         return;
                     }
                     deprecationLogger.warn(
@@ -132,7 +138,18 @@ private static char getFirstChar(String index) {
         return c;
     }
 
-    private boolean isInternalRequest() {
+    private static String stripDateMath(String index) {
+        char c = index.charAt(0);
+        if (c == '<') {
+            assert index.charAt(index.length() - 1) == '>'
+                : "expected index name with date math to start with < and end with >, how did this pass request validation? " + index;
+            return index.substring(1, index.length() - 1);
+        } else {
+            return index;
+        }
+    }
+
+    boolean isInternalRequest() {
         final String actionOrigin = threadContext.getTransient(ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME);
         final boolean isSystemContext = threadContext.isSystemContext();
         final boolean isInternalOrigin = Optional.ofNullable(actionOrigin).map(Strings::hasText).orElse(false);
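The `stripDateMath` change in this file can be exercised in isolation. The sketch below is illustrative, not the real Elasticsearch class: it copies the ignore patterns from the diff into a standalone class (the name `DotPrefixCheck` and the `isIgnored` helper are hypothetical) to show why a date-math name such as `<.slo-observability.sli-v2.{2024-10-16}>` must have its angle-bracket wrapper removed before pattern matching:

```java
import java.util.Set;
import java.util.regex.Pattern;

public class DotPrefixCheck {
    // Patterns copied from the diff above; the surrounding class is a sketch.
    private static final Set<Pattern> IGNORED_INDEX_PATTERNS = Set.of(
        Pattern.compile("\\.ml-state-\\d+"),
        Pattern.compile("\\.slo-observability\\.sli-v\\d+.*"),
        Pattern.compile("\\.slo-observability\\.summary-v\\d+.*")
    );

    // Date-math index names arrive wrapped in angle brackets, e.g. <.name-{yyyy-MM}>.
    // Strip the wrapper so the inner name can be checked against the patterns.
    static String stripDateMath(String index) {
        if (index.charAt(0) == '<' && index.charAt(index.length() - 1) == '>') {
            return index.substring(1, index.length() - 1);
        }
        return index;
    }

    static boolean isIgnored(String index) {
        String stripped = stripDateMath(index);
        // matches() requires the pattern to cover the whole stripped name.
        return IGNORED_INDEX_PATTERNS.stream().anyMatch(p -> p.matcher(stripped).matches());
    }

    public static void main(String[] args) {
        System.out.println(isIgnored(".ml-state-21309"));                          // true
        System.out.println(isIgnored("<.slo-observability.sli-v2.{2024-10-16}>")); // true: wrapper stripped first
        System.out.println(isIgnored(".my-dotted-index"));                         // false: would be deprecation-warned
    }
}
```

Without the stripping step, the `<...>` wrapper would defeat `Pattern.matches` for every date-math name, which is exactly the bug the diff fixes by matching against `strippedName` instead of `index`.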
New test file (116 additions; file path not shown in this view — it defines org.elasticsearch.validation.DotPrefixValidatorTests):

/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.validation;

import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
import org.junit.BeforeClass;

import java.util.HashSet;
import java.util.Set;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class DotPrefixValidatorTests extends ESTestCase {
    private final OperatorValidator<?> opV = new OperatorValidator<>();
    private final NonOperatorValidator<?> nonOpV = new NonOperatorValidator<>();
    private static final Set<Setting<?>> settings;

    private static ClusterService clusterService;
    private static ClusterSettings clusterSettings;

    static {
        Set<Setting<?>> cSettings = new HashSet<>(ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
        cSettings.add(DotPrefixValidator.VALIDATE_DOT_PREFIXES);
        settings = cSettings;
    }

    @BeforeClass
    public static void beforeClass() {
        clusterService = mock(ClusterService.class);
        clusterSettings = new ClusterSettings(Settings.EMPTY, Sets.newHashSet(DotPrefixValidator.VALIDATE_DOT_PREFIXES));
        when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
        when(clusterService.getSettings()).thenReturn(Settings.EMPTY);
        when(clusterService.threadPool()).thenReturn(mock(ThreadPool.class));
    }

    public void testValidation() {
        nonOpV.validateIndices(Set.of("regular"));
        opV.validateIndices(Set.of("regular"));
        assertFails(Set.of(".regular"));
        opV.validateIndices(Set.of(".regular"));
        assertFails(Set.of("first", ".second"));
        assertFails(Set.of("<.regular-{MM-yy-dd}>"));

        // Test ignored names
        nonOpV.validateIndices(Set.of(".elastic-connectors-v1"));
        nonOpV.validateIndices(Set.of(".elastic-connectors-sync-jobs-v1"));
        nonOpV.validateIndices(Set.of(".ml-state"));
        nonOpV.validateIndices(Set.of(".ml-anomalies-unrelated"));

        // Test ignored patterns
        nonOpV.validateIndices(Set.of(".ml-state-21309"));
        nonOpV.validateIndices(Set.of(">.ml-state-21309>"));
        nonOpV.validateIndices(Set.of(".slo-observability.sli-v2"));
        nonOpV.validateIndices(Set.of(".slo-observability.sli-v2.3"));
        nonOpV.validateIndices(Set.of(".slo-observability.sli-v2.3-2024-01-01"));
        nonOpV.validateIndices(Set.of("<.slo-observability.sli-v3.3.{2024-10-16||/M{yyyy-MM-dd|UTC}}>"));
        nonOpV.validateIndices(Set.of(".slo-observability.summary-v2"));
        nonOpV.validateIndices(Set.of(".slo-observability.summary-v2.3"));
        nonOpV.validateIndices(Set.of(".slo-observability.summary-v2.3-2024-01-01"));
        nonOpV.validateIndices(Set.of("<.slo-observability.summary-v3.3.{2024-10-16||/M{yyyy-MM-dd|UTC}}>"));
    }

    private void assertFails(Set<String> indices) {
        nonOpV.validateIndices(indices);
        assertWarnings(
            "Index ["
                + indices.stream().filter(i -> i.startsWith(".") || i.startsWith("<.")).toList().getFirst()
                + "] name begins with a dot (.), which is deprecated, and will not be allowed in a future Elasticsearch version."
        );
    }

    private class NonOperatorValidator<R> extends DotPrefixValidator<R> {

        private NonOperatorValidator() {
            super(new ThreadContext(Settings.EMPTY), clusterService);
        }

        @Override
        protected Set<String> getIndicesFromRequest(Object request) {
            return Set.of();
        }

        @Override
        public String actionName() {
            return "";
        }

        @Override
        boolean isInternalRequest() {
            return false;
        }
    }

    private class OperatorValidator<R> extends NonOperatorValidator<R> {
        @Override
        boolean isInternalRequest() {
            return true;
        }
    }
}

modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/extras/MatchOnlyTextFieldMapper.java (2 additions, 1 deletion)

@@ -364,7 +364,8 @@ public BlockLoader blockLoader(BlockLoaderContext blContext) {
         SourceValueFetcher fetcher = SourceValueFetcher.toString(blContext.sourcePaths(name()));
         // MatchOnlyText never has norms, so we have to use the field names field
         BlockSourceReader.LeafIteratorLookup lookup = BlockSourceReader.lookupFromFieldNames(blContext.fieldNames(), name());
-        return new BlockSourceReader.BytesRefsBlockLoader(fetcher, lookup);
+        var sourceMode = blContext.indexSettings().getIndexMappingSourceMode();
+        return new BlockSourceReader.BytesRefsBlockLoader(fetcher, lookup, sourceMode);
     }
 
     @Override

modules/mapper-extras/src/main/java/org/elasticsearch/index/mapper/extras/ScaledFloatFieldMapper.java (2 additions, 1 deletion)

@@ -319,7 +319,8 @@ public BlockLoader blockLoader(BlockLoaderContext blContext) {
         BlockSourceReader.LeafIteratorLookup lookup = isStored() || isIndexed()
             ? BlockSourceReader.lookupFromFieldNames(blContext.fieldNames(), name())
             : BlockSourceReader.lookupMatchingAll();
-        return new BlockSourceReader.DoublesBlockLoader(valueFetcher, lookup);
+        var sourceMode = blContext.indexSettings().getIndexMappingSourceMode();
+        return new BlockSourceReader.DoublesBlockLoader(valueFetcher, lookup, sourceMode);
     }
 
     @Override

0 commit comments