Commit 59f84a9

Merge branch 'main' into eis-text-embedding-task-type
2 parents fc11815 + 876c456

File tree: 21 files changed, +397 −115 lines

docs/changelog/128854.yaml
Lines changed: 11 additions & 0 deletions

```diff
@@ -0,0 +1,11 @@
+pr: 128854
+summary: Mark token pruning for sparse vector as GA
+area: Machine Learning
+type: feature
+issues: []
+highlight:
+  title: Mark Token Pruning for Sparse Vector as GA
+  body: |-
+    Token pruning for sparse_vector queries has been live since 8.13 as tech preview.
+    As of 8.19.0 and 9.1.0, this is now generally available.
+  notable: true
```

docs/reference/query-languages/query-dsl/query-dsl-sparse-vector-query.md
Lines changed: 16 additions & 8 deletions

```diff
@@ -62,28 +62,36 @@ GET _search
 `query_vector`
 : (Optional, dictionary) A dictionary of token-weight pairs representing the precomputed query vector to search. Searching using this query vector will bypass additional inference. Only one of `inference_id` and `query_vector` is allowed.
 
-`prune`
-: (Optional, boolean) [preview] Whether to perform pruning, omitting the non-significant tokens from the query to improve query performance. If `prune` is true but the `pruning_config` is not specified, pruning will occur but default values will be used. Default: false.
+`prune` {applies_to}`stack: preview 9.0, ga 9.1`
+: (Optional, boolean) Whether to perform pruning, omitting the non-significant tokens from the query to improve query performance. If `prune` is true but the `pruning_config` is not specified, pruning will occur but default values will be used. Default: false.
 
-`pruning_config`
-: (Optional, object) [preview] Optional pruning configuration. If enabled, this will omit non-significant tokens from the query in order to improve query performance. This is only used if `prune` is set to `true`. If `prune` is set to `true` but `pruning_config` is not specified, default values will be used.
+`pruning_config` {applies_to}`stack: preview 9.0, ga 9.1`
+: (Optional, object) Optional pruning configuration. If enabled, this will omit non-significant tokens from the query in order to improve query performance. This is only used if `prune` is set to `true`. If `prune` is set to `true` but `pruning_config` is not specified, default values will be used.
 
 Parameters for `pruning_config` are:
 
 `tokens_freq_ratio_threshold`
-: (Optional, integer) [preview] Tokens whose frequency is more than `tokens_freq_ratio_threshold` times the average frequency of all tokens in the specified field are considered outliers and pruned. This value must between 1 and 100. Default: `5`.
+: (Optional, integer) Tokens whose frequency is more than `tokens_freq_ratio_threshold` times the average frequency of all tokens in the specified field are considered outliers and pruned. This value must between 1 and 100. Default: `5`.
 
 `tokens_weight_threshold`
-: (Optional, float) [preview] Tokens whose weight is less than `tokens_weight_threshold` are considered insignificant and pruned. This value must be between 0 and 1. Default: `0.4`.
+: (Optional, float) Tokens whose weight is less than `tokens_weight_threshold` are considered insignificant and pruned. This value must be between 0 and 1. Default: `0.4`.
 
 `only_score_pruned_tokens`
-: (Optional, boolean) [preview] If `true` we only input pruned tokens into scoring, and discard non-pruned tokens. It is strongly recommended to set this to `false` for the main query, but this can be set to `true` for a rescore query to get more relevant results. Default: `false`.
+: (Optional, boolean) If `true` we only input pruned tokens into scoring, and discard non-pruned tokens. It is strongly recommended to set this to `false` for the main query, but this can be set to `true` for a rescore query to get more relevant results. Default: `false`.
 
 ::::{note}
 The default values for `tokens_freq_ratio_threshold` and `tokens_weight_threshold` were chosen based on tests using ELSERv2 that provided the most optimal results.
 ::::
 
+When token pruning is applied, non-significant tokens will be pruned from the query.
+Non-significant tokens can be defined as tokens that meet both of the following criteria:
+* The token appears much more frequently than most tokens, indicating that it is a very common word and may not benefit the overall search results much.
+* The weight/score is so low that the token is likely not very relevant to the original term
 
+Both the token frequency threshold and weight threshold must show the token is non-significant in order for the token to be pruned.
+This ensures that:
+* The tokens that are kept are frequent enough and have significant scoring.
+* Very infrequent tokens that may not have as high of a score are removed.
 
 ## Example ELSER query [sparse-vector-query-example]
 
@@ -198,7 +206,7 @@ GET my-index/_search
 
 ## Example ELSER query with pruning configuration and rescore [sparse-vector-query-with-pruning-config-and-rescore-example]
 
-The following is an extension to the above example that adds a [preview] pruning configuration to the `sparse_vector` query. The pruning configuration identifies non-significant tokens to prune from the query in order to improve query performance.
+The following is an extension to the above example that adds a pruning configuration to the `sparse_vector` query. The pruning configuration identifies non-significant tokens to prune from the query in order to improve query performance.
 
 Token pruning happens at the shard level. While this should result in the same tokens being labeled as insignificant across shards, this is not guaranteed based on the composition of each shard. Therefore, if you are running `sparse_vector` with a `pruning_config` on a multi-shard index, we strongly recommend adding a [Rescore filtered search results](/reference/elasticsearch/rest-apis/filter-search-results.md#rescore) function with the tokens that were originally pruned from the query. This will help mitigate any shard-level inconsistency with pruned tokens and provide better relevance overall.
```
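The documented parameters can be combined in a single request. A minimal sketch of such a query (the index name `my-index`, field `ml.tokens`, and inference endpoint `my-elser-endpoint` are hypothetical, not taken from this commit; the threshold values are the documented defaults):

```json
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "ml.tokens",
      "inference_id": "my-elser-endpoint",
      "query": "What is token pruning?",
      "prune": true,
      "pruning_config": {
        "tokens_freq_ratio_threshold": 5,
        "tokens_weight_threshold": 0.4,
        "only_score_pruned_tokens": false
      }
    }
  }
}
```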

libs/entitlement/README.md
Lines changed: 1 addition & 1 deletion

```diff
@@ -35,7 +35,7 @@ NotEntitledException: component [(server)], module [org.apache.lucene.misc], cla
 
 ### How to add an Elasticsearch module/plugin policy
 
-A policy is defined in an `entitlements-policy.yaml` file within an Elasticsearch module/plugin under `src/main/plugin-metadata`. Policy files contain lists of entitlements that should be allowed, grouped by Java module name, which acts as the policy scope. For example, the `transport-netty4` Elasticsearch module's policy file contains an entitlement to accept `inbound_network` connections, limited to the `io.netty.transport` and `io.netty.common` Java modules.
+A policy is defined in an `entitlement-policy.yaml` file within an Elasticsearch module/plugin under `src/main/plugin-metadata`. Policy files contain lists of entitlements that should be allowed, grouped by Java module name, which acts as the policy scope. For example, the `transport-netty4` Elasticsearch module's policy file contains an entitlement to accept `inbound_network` connections, limited to the `io.netty.transport` and `io.netty.common` Java modules.
 
 Elasticsearch modules/plugins that are not yet modularized (i.e. do not have `module-info.java`) will need to use single `ALL-UNNAMED` scope. For example, the `reindex` Elasticsearch module's policy file contains a single `ALL-UNNAMED` scope, with an entitlement to perform `outbound_network`; all code in `reindex` will be able to connect to the network. It is not possible to use the `ALL-UNNAMED` scope for modularized modules/plugins.
```
4141

modules/data-streams/src/main/java/org/elasticsearch/datastreams/action/TransportUpdateDataStreamSettingsAction.java
Lines changed: 6 additions & 5 deletions

```diff
@@ -131,7 +131,7 @@ protected void masterOperation(
             new UpdateDataStreamSettingsAction.DataStreamSettingsResponse(
                 dataStreamName,
                 false,
-                e.getMessage(),
+                Strings.hasText(e.getMessage()) ? e.getMessage() : e.toString(),
                 EMPTY,
                 EMPTY,
                 UpdateDataStreamSettingsAction.DataStreamSettingsResponse.IndicesSettingsResult.EMPTY
@@ -222,9 +222,12 @@ private void updateSettingsOnIndices(
         Map<String, Object> settingsToApply = new HashMap<>();
         List<String> appliedToDataStreamOnly = new ArrayList<>();
         List<String> appliedToDataStreamAndBackingIndices = new ArrayList<>();
+        Settings effectiveSettings = dataStream.getEffectiveSettings(
+            clusterService.state().projectState(projectResolver.getProjectId()).metadata()
+        );
         for (String settingName : requestSettings.keySet()) {
             if (APPLY_TO_BACKING_INDICES.contains(settingName)) {
-                settingsToApply.put(settingName, requestSettings.get(settingName));
+                settingsToApply.put(settingName, effectiveSettings.get(settingName));
                 appliedToDataStreamAndBackingIndices.add(settingName);
             } else if (APPLY_TO_DATA_STREAM_ONLY.contains(settingName)) {
                 appliedToDataStreamOnly.add(settingName);
@@ -242,9 +245,7 @@ private void updateSettingsOnIndices(
                 true,
                 null,
                 settingsFilter.filter(dataStream.getSettings()),
-                settingsFilter.filter(
-                    dataStream.getEffectiveSettings(clusterService.state().projectState(projectResolver.getProjectId()).metadata())
-                ),
+                settingsFilter.filter(effectiveSettings),
                 new UpdateDataStreamSettingsAction.DataStreamSettingsResponse.IndicesSettingsResult(
                     appliedToDataStreamOnly,
                     appliedToDataStreamAndBackingIndices,
```
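The first hunk guards against exceptions whose `getMessage()` returns null or blank text, which would otherwise surface as an empty failure reason in the response. A standalone sketch of the same pattern (`describe` is a hypothetical helper, not Elasticsearch code; plain null/blank checks stand in for `Strings.hasText`):

```java
public class ErrorMessageFallback {
    // Same idea as the diff: prefer the exception message, but fall back to
    // toString() when the message is null or blank.
    static String describe(Exception e) {
        String msg = e.getMessage();
        return (msg != null && !msg.isBlank()) ? msg : e.toString();
    }

    public static void main(String[] args) {
        // Exceptions created without a message return null from getMessage().
        System.out.println(describe(new NullPointerException()));        // java.lang.NullPointerException
        System.out.println(describe(new IllegalStateException("boom"))); // boom
    }
}
```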

modules/data-streams/src/yamlRestTest/resources/rest-api-spec/test/data_stream/240_data_stream_settings.yml
Lines changed: 99 additions & 0 deletions

```diff
@@ -299,3 +299,102 @@ setup:
   - match: { .$idx0name.settings.index.lifecycle.name: "my-policy" }
   - match: { .$idx1name.settings.index.number_of_shards: "1" }
   - match: { .$idx1name.settings.index.lifecycle.name: "my-policy" }
+
+---
+"Test null out settings":
+  - requires:
+      cluster_features: [ "logs_stream" ]
+      reason: requires setting 'logs_stream' to get or set data stream settings
+  - do:
+      allowed_warnings:
+        - "index template [my-template] has index patterns [my-data-stream-*] matching patterns from existing older templates [global] with patterns (global => [*]); this template [my-template] will take precedence during new index creation"
+      indices.put_index_template:
+        name: my-template
+        body:
+          index_patterns: [ my-data-stream-* ]
+          data_stream: { }
+          template:
+            settings:
+              number_of_replicas: 0
+              lifecycle.name: my-policy
+
+  - do:
+      indices.create_data_stream:
+        name: my-data-stream-1
+
+  - do:
+      cluster.health:
+        index: "my-data-stream-1"
+        wait_for_status: green
+
+  - do:
+      indices.get_data_stream:
+        name: my-data-stream-1
+  - match: { data_streams.0.name: my-data-stream-1 }
+  - match: { data_streams.0.settings: {} }
+  - match: { data_streams.0.effective_settings: null }
+
+  - do:
+      indices.put_data_stream_settings:
+        name: my-data-stream-1
+        body:
+          index:
+            number_of_shards: 2
+            lifecycle:
+              name: my-new-policy
+              prefer_ilm: true
+  - match: { data_streams.0.name: my-data-stream-1 }
+  - match: { data_streams.0.applied_to_data_stream: true }
+  - match: { data_streams.0.index_settings_results.applied_to_data_stream_only: [index.number_of_shards]}
+  - length: { data_streams.0.index_settings_results.applied_to_data_stream_and_backing_indices: 2 }
+  - match: { data_streams.0.settings.index.number_of_shards: "2" }
+  - match: { data_streams.0.settings.index.lifecycle.name: "my-new-policy" }
+  - match: { data_streams.0.settings.index.lifecycle.prefer_ilm: "true" }
+  - match: { data_streams.0.effective_settings.index.number_of_shards: "2" }
+  - match: { data_streams.0.effective_settings.index.number_of_replicas: "0" }
+  - match: { data_streams.0.effective_settings.index.lifecycle.name: "my-new-policy" }
+  - match: { data_streams.0.effective_settings.index.lifecycle.prefer_ilm: "true" }
+
+  - do:
+      indices.put_data_stream_settings:
+        name: my-data-stream-1
+        body:
+          index:
+            number_of_shards: null
+            lifecycle:
+              name: null
+  - match: { data_streams.0.name: my-data-stream-1 }
+  - match: { data_streams.0.applied_to_data_stream: true }
+  - match: { data_streams.0.index_settings_results.applied_to_data_stream_only: [index.number_of_shards]}
+  - length: { data_streams.0.index_settings_results.applied_to_data_stream_and_backing_indices: 1 }
+  - match: { data_streams.0.settings.index.number_of_shards: null }
+  - match: { data_streams.0.settings.index.lifecycle.name: null }
+  - match: { data_streams.0.settings.index.lifecycle.prefer_ilm: "true" }
+  - match: { data_streams.0.effective_settings.index.number_of_shards: null }
+  - match: { data_streams.0.effective_settings.index.number_of_replicas: "0" }
+  - match: { data_streams.0.effective_settings.index.lifecycle.name: "my-policy" }
+  - match: { data_streams.0.effective_settings.index.lifecycle.prefer_ilm: "true" }
+
+  - do:
+      indices.get_data_stream_settings:
+        name: my-data-stream-1
+  - match: { data_streams.0.name: my-data-stream-1 }
+  - match: { data_streams.0.settings.index.lifecycle.name: null }
+  - match: { data_streams.0.settings.index.lifecycle.prefer_ilm: "true" }
+  - match: { data_streams.0.effective_settings.index.number_of_shards: null }
+  - match: { data_streams.0.effective_settings.index.number_of_replicas: "0" }
+  - match: { data_streams.0.effective_settings.index.lifecycle.name: "my-policy" }
+
+  - do:
+      indices.get_data_stream:
+        name: my-data-stream-1
+  - set: { data_streams.0.indices.0.index_name: idx0name }
+
+  - do:
+      indices.get_settings:
+        index: my-data-stream-1
+  - match: { .$idx0name.settings.index.number_of_shards: "1" }
+  - match: { .$idx0name.settings.index.lifecycle.name: "my-policy" }
+  - match: { .$idx0name.settings.index.lifecycle.prefer_ilm: "true" }
```

muted-tests.yml
Lines changed: 0 additions & 3 deletions

```diff
@@ -565,9 +565,6 @@ tests:
 - class: org.elasticsearch.index.mapper.vectors.DenseVectorFieldMapperTests
   method: testExistsQueryMinimalMapping
   issue: https://github.com/elastic/elasticsearch/issues/129911
-- class: org.elasticsearch.index.mapper.DynamicMappingIT
-  method: testDenseVectorDynamicMapping
-  issue: https://github.com/elastic/elasticsearch/issues/129928
 
 # Examples:
 #
```

server/src/internalClusterTest/java/org/elasticsearch/index/mapper/DynamicMappingIT.java
Lines changed: 31 additions & 18 deletions

```diff
@@ -920,19 +920,27 @@ public void testDenseVectorDynamicMapping() throws Exception {
 
         client().index(
             new IndexRequest("test").source("vector_int8", Randomness.get().doubles(BBQ_DIMS_DEFAULT_THRESHOLD - 1, 0.0, 5.0).toArray())
+                .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
         ).get();
         client().index(
             new IndexRequest("test").source("vector_bbq", Randomness.get().doubles(BBQ_DIMS_DEFAULT_THRESHOLD, 0.0, 5.0).toArray())
+                .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
         ).get();
-        Map<String, Object> mappings = indicesAdmin().prepareGetMappings(TEST_REQUEST_TIMEOUT, "test")
-            .get()
-            .mappings()
-            .get("test")
-            .sourceAsMap();
-        assertTrue(new WriteField("properties.vector_int8", () -> mappings).exists());
-        assertTrue(new WriteField("properties.vector_int8.index_options.type", () -> mappings).get(null).toString().equals("int8_hnsw"));
-        assertTrue(new WriteField("properties.vector_bbq", () -> mappings).exists());
-        assertTrue(new WriteField("properties.vector_bbq.index_options.type", () -> mappings).get(null).toString().equals("bbq_hnsw"));
+
+        assertBusy(() -> {
+            Map<String, Object> mappings = indicesAdmin().prepareGetMappings(TEST_REQUEST_TIMEOUT, "test")
+                .get()
+                .mappings()
+                .get("test")
+                .sourceAsMap();
+
+            assertTrue(new WriteField("properties.vector_int8", () -> mappings).exists());
+            assertTrue(
+                new WriteField("properties.vector_int8.index_options.type", () -> mappings).get(null).toString().equals("int8_hnsw")
+            );
+            assertTrue(new WriteField("properties.vector_bbq", () -> mappings).exists());
+            assertTrue(new WriteField("properties.vector_bbq.index_options.type", () -> mappings).get(null).toString().equals("bbq_hnsw"));
+        });
     }
 
     public void testBBQDynamicMappingWhenFirstIngestingDoc() throws Exception {
@@ -954,14 +962,19 @@ public void testBBQDynamicMappingWhenFirstIngestingDoc() throws Exception {
         assertTrue(new WriteField("properties.vector", () -> mappings).exists());
         assertFalse(new WriteField("properties.vector.index_options.type", () -> mappings).exists());
 
-        client().index(new IndexRequest("test").source("vector", Randomness.get().doubles(BBQ_DIMS_DEFAULT_THRESHOLD, 0.0, 5.0).toArray()))
-            .get();
-        Map<String, Object> updatedMappings = indicesAdmin().prepareGetMappings(TEST_REQUEST_TIMEOUT, "test")
-            .get()
-            .mappings()
-            .get("test")
-            .sourceAsMap();
-        assertTrue(new WriteField("properties.vector", () -> updatedMappings).exists());
-        assertTrue(new WriteField("properties.vector.index_options.type", () -> updatedMappings).get(null).toString().equals("bbq_hnsw"));
+        client().index(
+            new IndexRequest("test").source("vector", Randomness.get().doubles(BBQ_DIMS_DEFAULT_THRESHOLD, 0.0, 5.0).toArray())
+                .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
+        ).get();
+
+        assertBusy(() -> {
+            Map<String, Object> updatedMappings = indicesAdmin().prepareGetMappings(TEST_REQUEST_TIMEOUT, "test")
+                .get()
+                .mappings()
+                .get("test")
+                .sourceAsMap();
+            assertTrue(new WriteField("properties.vector", () -> updatedMappings).exists());
+            assertTrue(new WriteField("properties.vector.index_options.type", () -> updatedMappings).get("").toString().equals("bbq_hnsw"));
+        });
     }
 }
```
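Both tests now wrap their mapping assertions in `assertBusy`, which retries until the cluster state reflects the asynchronously applied dynamic mapping update. A self-contained sketch of that retry idiom (a hypothetical stand-in, not the actual `ESTestCase.assertBusy` implementation, which uses backoff and re-throws the last assertion failure):

```java
import java.util.function.BooleanSupplier;

public class AssertBusySketch {
    // Retry a condition until it holds or the timeout elapses, then fail.
    static void assertBusy(BooleanSupplier condition, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (condition.getAsBoolean() == false) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(10); // back off briefly between attempts
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms, mimicking an async mapping update landing.
        assertBusy(() -> System.currentTimeMillis() - start > 50, 1_000);
        System.out.println("ok");
    }
}
```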

server/src/main/java/org/elasticsearch/TransportVersions.java
Lines changed: 2 additions & 1 deletion

```diff
@@ -315,7 +315,8 @@ static TransportVersion def(int id) {
     public static final TransportVersion ML_INFERENCE_CUSTOM_SERVICE_INPUT_TYPE = def(9_105_0_00);
     public static final TransportVersion ML_INFERENCE_SAGEMAKER_ELASTIC = def(9_106_0_00);
     public static final TransportVersion SPARSE_VECTOR_FIELD_PRUNING_OPTIONS = def(9_107_0_00);
-    public static final TransportVersion ML_INFERENCE_ELASTIC_DENSE_TEXT_EMBEDDINGS_ADDED = def(9_108_00_0);
+    public static final TransportVersion CLUSTER_STATE_PROJECTS_SETTINGS = def(9_108_0_00);
+    public static final TransportVersion ML_INFERENCE_ELASTIC_DENSE_TEXT_EMBEDDINGS_ADDED = def(9_109_00_0);
 
     /*
      * STOP! READ THIS FIRST! No, really,
```
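Underscores in Java integer literals are ignored by the compiler, so the different groupings above denote the same kind of plain `int` id; what the merge preserves is that the newly inserted id compares greater than its predecessor and less than the renumbered one. A quick sketch of that ordering:

```java
public class TransportVersionIdOrdering {
    public static void main(String[] args) {
        // Underscores are stripped by the compiler: 9_108_0_00 is just 910800.
        System.out.println(9_108_0_00 == 910800); // true
        // The inserted id slots in between the existing ones, preserving ordering.
        System.out.println(9_107_0_00 < 9_108_0_00 && 9_108_0_00 < 9_109_00_0); // true
    }
}
```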

server/src/main/java/org/elasticsearch/cluster/ClusterModule.java
Lines changed: 2 additions & 0 deletions

```diff
@@ -28,6 +28,7 @@
 import org.elasticsearch.cluster.metadata.RepositoriesMetadata;
 import org.elasticsearch.cluster.metadata.StreamsMetadata;
 import org.elasticsearch.cluster.project.ProjectResolver;
+import org.elasticsearch.cluster.project.ProjectStateRegistry;
 import org.elasticsearch.cluster.routing.DelayedAllocationService;
 import org.elasticsearch.cluster.routing.ShardRouting;
 import org.elasticsearch.cluster.routing.ShardRoutingRoleStrategy;
@@ -292,6 +293,7 @@ public static List<Entry> getNamedWriteables() {
             RegisteredPolicySnapshots::new,
             RegisteredPolicySnapshots.RegisteredSnapshotsDiff::new
         );
+        registerClusterCustom(entries, ProjectStateRegistry.TYPE, ProjectStateRegistry::new, ProjectStateRegistry::readDiffFrom);
         // Secrets
         registerClusterCustom(entries, ClusterSecrets.TYPE, ClusterSecrets::new, ClusterSecrets::readDiffFrom);
         registerProjectCustom(entries, ProjectSecrets.TYPE, ProjectSecrets::new, ProjectSecrets::readDiffFrom);
```
