
Commit 9a59c04

Merge branch 'main' into feature/logsdb-ignore-dynamic-beyond-limit
2 parents: a440693 + e7897bd

71 files changed: 1,877 additions and 1,353 deletions

docs/changelog/113975.yaml

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+pr: 113975
+summary: JDK locale database change
+area: Mapping
+type: breaking
+issues: []
+breaking:
+  title: JDK locale database change
+  area: Mapping
+  details: |
+    {es} 8.16 changes the version of the JDK that is included from version 22 to version 23. This changes the locale database that is used by Elasticsearch from the COMPAT database to the CLDR database. This change can cause significant differences to the textual date formats accepted by Elasticsearch, and to calculated week-dates.
+
+    If you run {es} 8.16 on JDK version 22 or below, it will use the COMPAT locale database to match the behavior of 8.15. However, starting with {es} 9.0, {es} will use the CLDR database regardless of the JDK version it is run on.
+  impact: |
+    This affects you if you use custom date formats using textual or week-date field specifiers. If you use date fields or calculated week-dates that change between the COMPAT and CLDR databases, then this change will cause Elasticsearch to reject previously valid date fields as invalid data. You might need to modify your ingest or output integration code to account for the differences between these two JDK versions.
+
+    Starting in version 8.15.2, Elasticsearch will log deprecation warnings if you are using date format specifiers that might change on upgrading to JDK 23. These warnings are visible in Kibana.
+
+    For detailed guidance, refer to <<custom-date-format-locales,Differences in locale information between JDK versions>> and the https://ela.st/jdk-23-locales[Elastic blog].
+  notable: true
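The textual and week-date field specifiers this entry warns about appear in custom `format` strings on `date` fields. As an illustration (the index and field names below are hypothetical, not from this commit), a mapping like the following depends on the locale database, because `MMMM` parses textual month names:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "order_date": {
        "type": "date",
        "format": "dd MMMM yyyy"
      }
    }
  }
}
```

Whether a given document value parses successfully then depends on the exact month names supplied by the active locale database (COMPAT vs CLDR), which is the breaking behavior described above.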

docs/changelog/114665.yaml

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+pr: 114665
+summary: Fixing remote ENRICH by pushing the Enrich inside `FragmentExec`
+area: ES|QL
+type: bug
+issues:
+  - 105095

docs/changelog/115314.yaml

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+pr: 115314
+summary: Only aggregations require at least one shard request
+area: Search
+type: enhancement
+issues: []

docs/changelog/115594.yaml

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+pr: 115594
+summary: Update `BlobCacheBufferedIndexInput::readVLong` to correctly handle negative
+  long values
+area: Search
+type: bug
+issues: []
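For context on why this fix matters: a Lucene-style vLong stores a long in 7-bit groups with a continuation bit, so a negative value (sign bit set) always occupies the full ten bytes, and a reader that stops accumulating early or sign-extends too soon corrupts it. Below is a minimal Python sketch of this encoding scheme, not Elasticsearch's actual implementation, showing that negative values round-trip only when the reader handles all ten bytes:

```python
def write_vlong(value: int) -> bytes:
    """Encode a signed 64-bit int: 7 data bits per byte, high bit = continuation."""
    v = value & 0xFFFFFFFFFFFFFFFF  # reinterpret as unsigned 64-bit
    out = bytearray()
    while v & ~0x7F:
        out.append((v & 0x7F) | 0x80)  # low 7 bits, continuation bit set
        v >>= 7
    out.append(v)  # final byte, continuation bit clear
    return bytes(out)


def read_vlong(data: bytes) -> int:
    """Decode; a correct reader keeps accumulating past 9 bytes for negatives."""
    result, shift = 0, 0
    for b in data:
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            break
        shift += 7
    if result >= 1 << 63:  # fold back into the signed 64-bit range
        result -= 1 << 64
    return result


print(len(write_vlong(-1)))        # 10 -- negative longs always take 10 bytes
print(read_vlong(write_vlong(-1)))  # -1
```

A reader that assumes a value fits in at most nine bytes returns garbage for exactly the negative inputs this changelog entry describes.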

docs/reference/inference/inference-apis.asciidoc

Lines changed: 2 additions & 0 deletions

@@ -19,6 +19,7 @@ the following APIs to manage {infer} models and perform {infer}:
 * <<get-inference-api>>
 * <<post-inference-api>>
 * <<put-inference-api>>
+* <<stream-inference-api>>
 * <<update-inference-api>>

 [[inference-landscape]]
@@ -56,6 +57,7 @@ include::delete-inference.asciidoc[]
 include::get-inference.asciidoc[]
 include::post-inference.asciidoc[]
 include::put-inference.asciidoc[]
+include::stream-inference.asciidoc[]
 include::update-inference.asciidoc[]
 include::service-alibabacloud-ai-search.asciidoc[]
 include::service-amazon-bedrock.asciidoc[]
docs/reference/inference/stream-inference.asciidoc

Lines changed: 122 additions & 0 deletions

@@ -0,0 +1,122 @@
+[role="xpack"]
+[[stream-inference-api]]
+=== Stream inference API
+
+Streams a chat completion response.
+
+IMPORTANT: The {infer} APIs enable you to use certain services, such as built-in {ml} models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
+For built-in models and models uploaded through Eland, the {infer} APIs offer an alternative way to use and manage trained models.
+However, if you do not plan to use the {infer} APIs to use these models or if you want to use non-NLP models, use the <<ml-df-trained-models-apis>>.
+
+
+[discrete]
+[[stream-inference-api-request]]
+==== {api-request-title}
+
+`POST /_inference/<inference_id>/_stream`
+
+`POST /_inference/<task_type>/<inference_id>/_stream`
+
+
+[discrete]
+[[stream-inference-api-prereqs]]
+==== {api-prereq-title}
+
+* Requires the `monitor_inference` <<privileges-list-cluster,cluster privilege>>
+(the built-in `inference_admin` and `inference_user` roles grant this privilege)
+* You must use a client that supports streaming.
+
+
+[discrete]
+[[stream-inference-api-desc]]
+==== {api-description-title}
+
+The stream {infer} API enables real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation.
+It only works with the `completion` task type.
+
+
+[discrete]
+[[stream-inference-api-path-params]]
+==== {api-path-parms-title}
+
+`<inference_id>`::
+(Required, string)
+The unique identifier of the {infer} endpoint.
+
+
+`<task_type>`::
+(Optional, string)
+The type of {infer} task that the model performs.
+
+
+[discrete]
+[[stream-inference-api-request-body]]
+==== {api-request-body-title}
+
+`input`::
+(Required, string or array of strings)
+The text on which you want to perform the {infer} task.
+`input` can be a single string or an array.
++
+--
+[NOTE]
+====
+Inference endpoints for the `completion` task type currently only support a
+single string as input.
+====
+--
+
+
+[discrete]
+[[stream-inference-api-example]]
+==== {api-examples-title}
+
+The following example performs a completion on the example question with streaming.
+
+
+[source,console]
+------------------------------------------------------------
+POST _inference/completion/openai-completion/_stream
+{
+  "input": "What is Elastic?"
+}
+------------------------------------------------------------
+// TEST[skip:TBD]
+
+
+The API returns the following response:
+
+
+[source,txt]
+------------------------------------------------------------
+event: message
+data: {
+    "completion":[{
+        "delta":"Elastic"
+    }]
+}
+
+event: message
+data: {
+    "completion":[{
+        "delta":" is"
+    },
+    {
+        "delta":" a"
+    }
+]
+}
+
+event: message
+data: {
+    "completion":[{
+        "delta":" software"
+    },
+    {
+        "delta":" company"
+    }]
+}
+
+(...)
+------------------------------------------------------------
+// NOTCONSOLE
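The streamed body in the new docs uses the server-sent-events wire format: `event:`/`data:` pairs separated by blank lines, with the completion text arriving as `delta` fragments that the client concatenates. A hedged client-side sketch (assuming each `data:` line carries one complete JSON payload; the docs example pretty-prints the JSON across lines only for readability):

```python
import json


def completion_text(sse_body: str) -> str:
    """Concatenate the `delta` fragments from an SSE completion stream."""
    parts = []
    for line in sse_body.splitlines():
        if line.startswith("data:"):
            payload = json.loads(line[len("data:"):].strip())
            for chunk in payload.get("completion", []):
                parts.append(chunk.get("delta", ""))
    return "".join(parts)


stream = (
    'event: message\n'
    'data: {"completion":[{"delta":"Elastic"}]}\n'
    '\n'
    'event: message\n'
    'data: {"completion":[{"delta":" is"},{"delta":" a"}]}\n'
)
print(completion_text(stream))  # Elastic is a
```

A real client would read the HTTP response incrementally and emit each delta as it arrives rather than buffering the whole body, but the framing logic is the same.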

docs/reference/mapping/params/format.asciidoc

Lines changed: 2 additions & 2 deletions

@@ -34,13 +34,13 @@ down to the nearest day.
 Completely customizable date formats are supported. The syntax for these is explained in
 https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/time/format/DateTimeFormatter.html[DateTimeFormatter docs].

-Note that whilst the built-in formats for week dates use the ISO definition of weekyears,
+Note that while the built-in formats for week dates use the ISO definition of weekyears,
 custom formatters using the `Y`, `W`, or `w` field specifiers use the JDK locale definition
 of weekyears. This can result in different values between the built-in formats and custom formats
 for week dates.

 [[built-in-date-formats]]
-==== Built In Formats
+==== Built-in formats

 Most of the below formats have a `strict` companion format, which means that
 year, month and day parts of the month must use respectively 4, 2 and 2 digits
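The ISO week-date rules mentioned in this doc change assign each week to the year containing that week's Thursday, so the week-based year can differ from the calendar year around New Year. A small Python illustration of the ISO rules (using Python's `%G`/`%V` specifiers to stand in for the week-date distinction; this shows the ISO definition only, not the JDK's locale-dependent `Y`/`w` behavior):

```python
from datetime import date

# 2016-01-01 was a Friday; the week containing it has its Thursday on
# 2015-12-31, so under ISO rules it belongs to week 53 of week-year 2015.
d = date(2016, 1, 1)
print(tuple(d.isocalendar()))  # (2015, 53, 5) -> (week-year, week, weekday)

# %G is the ISO week-based year, %Y the calendar year. Mixing the calendar
# year with a week number yields a different value -- the same kind of
# mismatch the note above describes for built-in vs custom week-date formats.
print(d.strftime("%G-W%V"))    # 2015-W53
print(d.strftime("%Y-W%V"))    # 2016-W53
```

This is why a custom formatter that swaps a week-year specifier for a calendar-year one (or resolves weeks against a different locale's week rules) can produce or reject different dates than the built-in formats.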

modules/data-streams/src/internalClusterTest/java/org/elasticsearch/datastreams/TSDBIndexingIT.java

Lines changed: 1 addition & 1 deletion

@@ -412,7 +412,7 @@ public void testSkippingShards() throws Exception {
         assertResponse(client().search(searchRequest), searchResponse -> {
             ElasticsearchAssertions.assertNoSearchHits(searchResponse);
             assertThat(searchResponse.getTotalShards(), equalTo(2));
-            assertThat(searchResponse.getSkippedShards(), equalTo(1));
+            assertThat(searchResponse.getSkippedShards(), equalTo(2));
             assertThat(searchResponse.getSuccessfulShards(), equalTo(2));
         });
     }

muted-tests.yml

Lines changed: 6 additions & 3 deletions

@@ -23,9 +23,6 @@ tests:
 - class: org.elasticsearch.xpack.security.authz.store.NativePrivilegeStoreCacheTests
   method: testPopulationOfCacheWhenLoadingPrivilegesForAllApplications
   issue: https://github.com/elastic/elasticsearch/issues/110789
-- class: org.elasticsearch.xpack.searchablesnapshots.cache.common.CacheFileTests
-  method: testCacheFileCreatedAsSparseFile
-  issue: https://github.com/elastic/elasticsearch/issues/110801
 - class: org.elasticsearch.nativeaccess.VectorSystemPropertyTests
   method: testSystemPropertyDisabled
   issue: https://github.com/elastic/elasticsearch/issues/110949
@@ -276,6 +273,12 @@ tests:
 - class: org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT
   method: test {yaml=reference/esql/esql-across-clusters/line_197}
   issue: https://github.com/elastic/elasticsearch/issues/115575
+- class: org.elasticsearch.xpack.security.CoreWithSecurityClientYamlTestSuiteIT
+  method: test {yaml=cluster.stats/30_ccs_stats/cross-cluster search stats search}
+  issue: https://github.com/elastic/elasticsearch/issues/115600
+- class: org.elasticsearch.test.rest.ClientYamlTestSuiteIT
+  method: test {yaml=indices.create/10_basic/Create lookup index}
+  issue: https://github.com/elastic/elasticsearch/issues/115605

 # Examples:
 #

qa/multi-cluster-search/src/test/java/org/elasticsearch/search/CCSDuelIT.java

Lines changed: 3 additions & 1 deletion

@@ -43,6 +43,7 @@
 import org.elasticsearch.rest.action.search.RestSearchAction;
 import org.elasticsearch.script.Script;
 import org.elasticsearch.script.ScriptType;
+import org.elasticsearch.search.aggregations.AggregationBuilders;
 import org.elasticsearch.search.aggregations.BucketOrder;
 import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;
 import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
@@ -580,13 +581,14 @@ public void testSortByField() throws Exception {

     public void testSortByFieldOneClusterHasNoResults() throws Exception {
         assumeMultiClusterSetup();
-        // set to a value greater than the number of shards to avoid differences due to the skipping of shards
+        // setting aggs to avoid differences due to the skipping of shards when matching none
         SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
         boolean onlyRemote = randomBoolean();
         sourceBuilder.query(new TermQueryBuilder("_index", onlyRemote ? REMOTE_INDEX_NAME : INDEX_NAME));
         sourceBuilder.sort("type.keyword", SortOrder.ASC);
         sourceBuilder.sort("creationDate", SortOrder.DESC);
         sourceBuilder.sort("user.keyword", SortOrder.ASC);
+        sourceBuilder.aggregation(AggregationBuilders.max("max").field("creationDate"));
         CheckedConsumer<ObjectPath, IOException> responseChecker = response -> {
             assertHits(response);
             int size = response.evaluateArraySize("hits.hits");
