Commit 076f623

Merge b769059 into openjdk23-bundle
2 parents: 4c58073 + b769059

98 files changed (+2990, −1593 lines)


docs/changelog/112933.yaml

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+pr: 112933
+summary: "Allow incubating Panama Vector in simdvec, and add vectorized `ipByteBin`"
+area: Search
+type: enhancement
+issues: []

docs/changelog/113251.yaml

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+pr: 113251
+summary: Span term query to convert to match no docs when unmapped field is targeted
+area: Search
+type: bug
+issues: []

docs/changelog/113900.yaml

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+pr: 113900
+summary: Fix BWC for file-settings based role mappings
+area: Authentication
+type: bug
+issues: []

docs/changelog/114177.yaml

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+pr: 114177
+summary: "Make `randomInstantBetween` always return value in range [minInstant, `maxInstant]`"
+area: Infra/Metrics
+type: bug
+issues: []

docs/reference/connector/apis/create-connector-api.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -116,7 +116,7 @@ PUT _connector/my-connector
   "name": "My Connector",
   "description": "My Connector to sync data to Elastic index from Google Drive",
   "service_type": "google_drive",
-  "language": "english"
+  "language": "en"
 }
 ----
 
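The one-line change above swaps the full language name for the short language code. For context, a minimal sketch of the corrected docs example (any field not visible in the hunk, such as `index_name`, is illustrative):

[source,console]
----
PUT _connector/my-connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "description": "My Connector to sync data to Elastic index from Google Drive",
  "service_type": "google_drive",
  "language": "en"
}
----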

docs/reference/connector/docs/connectors-zoom.asciidoc

Lines changed: 24 additions & 16 deletions

@@ -63,18 +63,22 @@ To connect to Zoom you need to https://developers.zoom.us/docs/internal-apps/s2s
 6. Click on the "Create" button to create the app registration.
 7. After the registration is complete, you will be redirected to the app's overview page. Take note of the "App Credentials" value, as you'll need it later.
 8. Navigate to the "Scopes" section and click on the "Add Scopes" button.
-9. The following scopes need to be added to the app.
+9. The following granular scopes need to be added to the app.
 +
 [source,bash]
 ----
-user:read:admin
-meeting:read:admin
-chat_channel:read:admin
-recording:read:admin
-chat_message:read:admin
-report:read:admin
+user:read:list_users:admin
+meeting:read:list_meetings:admin
+meeting:read:list_past_participants:admin
+cloud_recording:read:list_user_recordings:admin
+team_chat:read:list_user_channels:admin
+team_chat:read:list_user_messages:admin
 ----
-
+[NOTE]
+====
+The connector requires a minimum scope of `user:read:list_users:admin` to ingest data into Elasticsearch.
+====
++
 10. Click on the "Done" button to add the selected scopes to your app.
 11. Navigate to the "Activation" section and input the necessary information to activate the app.
 

@@ -220,18 +224,22 @@ To connect to Zoom you need to https://developers.zoom.us/docs/internal-apps/s2s
 6. Click on the "Create" button to create the app registration.
 7. After the registration is complete, you will be redirected to the app's overview page. Take note of the "App Credentials" value, as you'll need it later.
 8. Navigate to the "Scopes" section and click on the "Add Scopes" button.
-9. The following scopes need to be added to the app.
+9. The following granular scopes need to be added to the app.
 +
 [source,bash]
 ----
-user:read:admin
-meeting:read:admin
-chat_channel:read:admin
-recording:read:admin
-chat_message:read:admin
-report:read:admin
+user:read:list_users:admin
+meeting:read:list_meetings:admin
+meeting:read:list_past_participants:admin
+cloud_recording:read:list_user_recordings:admin
+team_chat:read:list_user_channels:admin
+team_chat:read:list_user_messages:admin
 ----
-
+[NOTE]
+====
+The connector requires a minimum scope of `user:read:list_users:admin` to ingest data into Elasticsearch.
+====
++
 10. Click on the "Done" button to add the selected scopes to your app.
 11. Navigate to the "Activation" section and input the necessary information to activate the app.
 

docs/reference/ingest/processors/inference.asciidoc

Lines changed: 81 additions & 0 deletions

@@ -169,6 +169,18 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]

@@ -224,6 +236,18 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]

@@ -304,6 +328,23 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`span`::::
+(Optional, integer)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
+
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]

@@ -363,6 +404,18 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]

@@ -424,6 +477,22 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`span`::::
+(Optional, integer)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-span]
+
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]

@@ -515,6 +584,18 @@ include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenizatio
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate]
 =======
 
+`deberta_v2`::::
+(Optional, object)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-deberta-v2]
++
+.Properties of deberta_v2
+[%collapsible%open]
+=======
+`truncate`::::
+(Optional, string)
+include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-truncate-deberta-v2]
+=======
+
 `roberta`::::
 (Optional, object)
 include::{es-ref-dir}/ml/ml-shared.asciidoc[tag=inference-config-nlp-tokenization-roberta]
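The hunks above add a `deberta_v2` tokenization object next to the existing `bert`, `roberta`, and `mpnet` options for each task type, with `span` documented only for the task types whose hunks include it. A hypothetical ingest pipeline sketch showing where the new object sits (pipeline name, model ID, and task type are illustrative, not taken from this commit):

[source,console]
----
PUT _ingest/pipeline/deberta-v2-classification
{
  "processors": [
    {
      "inference": {
        "model_id": "my-deberta-v2-model",
        "inference_config": {
          "text_classification": {
            "tokenization": {
              "deberta_v2": {
                "truncate": "balanced"
              }
            }
          }
        }
      }
    }
  ]
}
----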

docs/reference/mapping/runtime.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -821,8 +821,6 @@ address.
 [[lookup-runtime-fields]]
 ==== Retrieve fields from related indices
 
-experimental[]
-
 The <<search-fields,`fields`>> parameter on the `_search` API can also be used to retrieve fields from
 the related indices via runtime fields with a type of `lookup`.
 
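Removing `experimental[]` drops the experimental flag from the lookup runtime fields section. For reference, a sketch of the feature that section documents, using the documented `lookup` runtime field parameters (index and field names are illustrative):

[source,console]
----
POST logs/_search
{
  "runtime_mappings": {
    "ip_location": {
      "type": "lookup",
      "target_index": "ip_location",
      "input_field": "host_ip",
      "target_field": "ip",
      "fetch_fields": ["country", "city"]
    }
  },
  "fields": ["@timestamp", "message", "ip_location"]
}
----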

docs/reference/mapping/types/date.asciidoc

Lines changed: 1 addition & 2 deletions

@@ -125,8 +125,7 @@ The following parameters are accepted by `date` fields:
 `locale`::
 
 The locale to use when parsing dates since months do not have the same names
-and/or abbreviations in all languages. The default is the
-https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html#ROOT[`ROOT` locale].
+and/or abbreviations in all languages. The default is ENGLISH.
 
 <<ignore-malformed,`ignore_malformed`>>::
 
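The documented default `locale` therefore becomes ENGLISH instead of the JDK `ROOT` locale. A minimal mapping sketch showing where the parameter sits (index name, field name, and format are illustrative):

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "published": {
        "type": "date",
        "format": "dd MMMM yyyy",
        "locale": "fr"
      }
    }
  }
}
----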

docs/reference/ml/ml-shared.asciidoc

Lines changed: 27 additions & 0 deletions

@@ -988,6 +988,7 @@ values are
 +
 --
 * `bert`: Use for BERT-style models
+* `deberta_v2`: Use for DeBERTa v2 and v3-style models
 * `mpnet`: Use for MPNet-style models
 * `roberta`: Use for RoBERTa-style and BART-style models
 * experimental:[] `xlm_roberta`: Use for XLMRoBERTa-style models

@@ -1037,6 +1038,19 @@ sequence. Therefore, do not use `second` in this case.
 
 end::inference-config-nlp-tokenization-truncate[]
 
+tag::inference-config-nlp-tokenization-truncate-deberta-v2[]
+Indicates how tokens are truncated when they exceed `max_sequence_length`.
+The default value is `first`.
++
+--
+* `balanced`: One or both of the first and second sequences may be truncated so as to balance the tokens included from both sequences.
+* `none`: No truncation occurs; the inference request receives an error.
+* `first`: Only the first sequence is truncated.
+* `second`: Only the second sequence is truncated. If there is just one sequence, that sequence is truncated.
+--
+
+end::inference-config-nlp-tokenization-truncate-deberta-v2[]
+
 tag::inference-config-nlp-tokenization-bert-with-special-tokens[]
 Tokenize with special tokens. The tokens typically included in BERT-style tokenization are:
 +

@@ -1050,10 +1064,23 @@ tag::inference-config-nlp-tokenization-bert-ja-with-special-tokens[]
 Tokenize with special tokens if `true`.
 end::inference-config-nlp-tokenization-bert-ja-with-special-tokens[]
 
+tag::inference-config-nlp-tokenization-deberta-v2[]
+DeBERTa-style tokenization is to be performed with the enclosed settings.
+end::inference-config-nlp-tokenization-deberta-v2[]
+
 tag::inference-config-nlp-tokenization-max-sequence-length[]
 Specifies the maximum number of tokens allowed to be output by the tokenizer.
 end::inference-config-nlp-tokenization-max-sequence-length[]
 
+tag::inference-config-nlp-tokenization-deberta-v2-with-special-tokens[]
+Tokenize with special tokens. The tokens typically included in DeBERTa-style tokenization are:
++
+--
+* `[CLS]`: The first token of the sequence being classified.
+* `[SEP]`: Indicates sequence separation and sequence end.
+--
+end::inference-config-nlp-tokenization-deberta-v2-with-special-tokens[]
+
 tag::inference-config-nlp-tokenization-roberta[]
 RoBERTa-style tokenization is to be performed with the enclosed settings.
 end::inference-config-nlp-tokenization-roberta[]
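To make the new `balanced` option concrete, a hypothetical inference call that overrides tokenization at request time (model ID, task type, and input text are illustrative; the `_infer` endpoint and per-request `inference_config` override are assumed from the existing trained model APIs, not from this commit):

[source,console]
----
POST _ml/trained_models/my-deberta-v2-model/_infer
{
  "docs": [
    { "text_field": "Elasticsearch is a distributed search and analytics engine." }
  ],
  "inference_config": {
    "text_classification": {
      "tokenization": {
        "deberta_v2": {
          "truncate": "balanced"
        }
      }
    }
  }
}
----

Per the added tag, `balanced` spreads truncation across both sequences when two are supplied, while `second` falls back to truncating the only sequence when just one is present.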
