diff --git a/docs/reference/api-reference.md b/docs/reference/api-reference.md index 399c2a81c..179415f8c 100644 --- a/docs/reference/api-reference.md +++ b/docs/reference/api-reference.md @@ -121,6 +121,9 @@ Imagine a `_bulk?refresh=wait_for` request with three documents in it that happe The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the `_bulk` request at all. +You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests. +Refer to the linked documentation for step-by-step instructions using the index settings API. + [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-bulk) ```ts @@ -1242,7 +1245,7 @@ client.openPointInTime({ index, keep_alive }) - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`preference` (Optional, string)**: The node or shard the operation should be performed on. By default, it is random. - **`routing` (Optional, string)**: A custom value that is used to route operations to a specific shard. -- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`. +- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`. - **`allow_partial_search_results` (Optional, boolean)**: Indicates whether the point in time tolerates unavailable shards or shard failures when initially creating the PIT. If `false`, creating a point in time request when a shard is missing or unavailable will throw an exception. If `true`, the point in time will contain all the shards that are available at the time of the request. - **`max_concurrent_shard_requests` (Optional, number)**: Maximum number of concurrent shard requests that each sub-search request executes per node. @@ -1339,147 +1342,7 @@ In this case, the response includes a count of the version conflicts that were e Note that the handling of other error types is unaffected by the `conflicts` property. Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than `max_docs` until it has successfully indexed `max_docs` documents into the target or it has gone through every document in the source query. -NOTE: The reindex API makes no effort to handle ID collisions. -The last document written will "win" but the order isn't usually predictable so it is not a good idea to rely on this behavior. -Instead, make sure that IDs are unique by using a script. - -**Running reindex asynchronously** - -If the request contains `wait_for_completion=false`, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. 
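As a minimal sketch of this pattern (assuming a configured client and hypothetical index names), you could launch the reindex asynchronously and then poll the task:

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Launch the reindex without waiting for completion; Elasticsearch
// responds with a task ID instead of the final result.
const { task } = await client.reindex({
  source: { index: 'my-index' },
  dest: { index: 'my-index-reindexed' },
  wait_for_completion: false
})

// Poll the tasks API for progress; the same ID also works with
// client.tasks.cancel() to stop the operation.
const status = await client.tasks.get({ task_id: String(task) })
console.log(status.completed, status.task.status)
```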
-Elasticsearch creates a record of this task as a document at `_tasks/`. - -**Reindex from multiple sources** - -If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. -That way you can resume the process if there are any errors by removing the partially completed source and starting over. -It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel. - -For example, you can use a bash script like this: - -``` -for index in i1 i2 i3 i4 i5; do - curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{ - "source": { - "index": "'$index'" - }, - "dest": { - "index": "'$index'-reindexed" - } - }' -done -``` - -**Throttling** - -Set `requests_per_second` to any positive decimal number (`1.4`, `6`, `1000`, for example) to throttle the rate at which reindex issues batches of index operations. -Requests are throttled by padding each batch with a wait time. -To turn off throttling, set `requests_per_second` to `-1`. - -The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding. -The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. -By default the batch size is `1000`, so if `requests_per_second` is set to `500`: - -``` -target_time = 1000 / 500 per second = 2 seconds -wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds -``` - -Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. -This is "bursty" instead of "smooth". - -**Slicing** - -Reindex supports sliced scroll to parallelize the reindexing process. -This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts. - -NOTE: Reindexing from remote clusters does not support manual or automatic slicing. - -You can slice a reindex request manually by providing a slice ID and total number of slices to each request. -You can also let reindex automatically parallelize by using sliced scroll to slice on `_id`. -The `slices` parameter specifies the number of slices to use. - -Adding `slices` to the reindex request just automates the manual process, creating sub-requests which means it has some quirks: - -* You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with slices. -* Fetching the status of the task for the request with `slices` only contains the status of completed slices. -* These sub-requests are individually addressable for things like cancellation and rethrottling. -* Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally. -* Canceling the request with `slices` will cancel each sub-request. -* Due to the nature of `slices`, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. -* Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. 
Combine that with the previous point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being reindexed. -* Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time. - -If slicing automatically, setting `slices` to `auto` will choose a reasonable number for most indices. -If slicing manually or otherwise tuning automatic slicing, use the following guidelines. - -Query performance is most efficient when the number of slices is equal to the number of shards in the index. -If that number is large (for example, `500`), choose a lower number as too many slices will hurt performance. -Setting slices higher than the number of shards generally does not improve efficiency and adds overhead. - -Indexing performance scales linearly across available resources with the number of slices. - -Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources. - -**Modify documents during reindexing** - -Like `_update_by_query`, reindex operations support a script that modifies the document. -Unlike `_update_by_query`, the script is allowed to modify the document's metadata. - -Just as in `_update_by_query`, you can set `ctx.op` to change the operation that is run on the destination. -For example, set `ctx.op` to `noop` if your script decides that the document doesn’t have to be indexed in the destination. This "no operation" will be reported in the `noop` counter in the response body. -Set `ctx.op` to `delete` if your script decides that the document must be deleted from the destination. -The deletion will be reported in the `deleted` counter in the response body. -Setting `ctx.op` to anything else will return an error, as will setting any other field in `ctx`. - -Think of the possibilities! Just be careful; you are able to change: - -* `_id` -* `_index` -* `_version` -* `_routing` - -Setting `_version` to `null` or clearing it from the `ctx` map is just like not sending the version in an indexing request. -It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API. - -**Reindex from remote** - -Reindex supports reindexing from a remote Elasticsearch cluster. -The `host` parameter must contain a scheme, host, port, and optional path. -The `username` and `password` parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication. -Be sure to use HTTPS when using basic authentication or the password will be sent in plain text. -There are a range of settings available to configure the behavior of the HTTPS connection. - -When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key. -Remote hosts must be explicitly allowed with the `reindex.remote.whitelist` setting. -It can be set to a comma delimited list of allowed remote host and port combinations. -Scheme is ignored; only the host and port are used. -For example: - -``` -reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"] -``` - -The list of allowed hosts must be configured on any nodes that will coordinate the reindex. -This feature should work with remote clusters of any version of Elasticsearch. 
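As an illustrative sketch (the host, credentials, and index names below are placeholders), a reindex-from-remote request through the client might look like this:

```ts
// `client` is an instance of @elastic/elasticsearch Client.
// Assumes https://otherhost:9200 is already listed in
// reindex.remote.whitelist on the coordinating nodes.
const response = await client.reindex({
  source: {
    remote: {
      host: 'https://otherhost:9200', // scheme, host, and port are required
      username: 'elastic',            // optional; sent as basic authentication
      password: 'changeme'            // use HTTPS so this is not sent in plain text
    },
    index: 'source-index',
    size: 100 // smaller batches help when remote documents are very large
  },
  dest: { index: 'dest-index' }
})
console.log(response.took, response.total)
```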
-This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version. - -WARNING: Elasticsearch does not support forward compatibility across major versions. -For example, you cannot reindex from a 7.x cluster into a 6.x cluster. - -To enable queries sent to older versions of Elasticsearch, the `query` parameter is sent directly to the remote host without validation or modification. - -NOTE: Reindexing from remote clusters does not support manual or automatic slicing. - -Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb. -If the remote index includes very large documents you'll need to use a smaller batch size. -It is also possible to set the socket read timeout on the remote connection with the `socket_timeout` field and the connection timeout with the `connect_timeout` field. -Both default to 30 seconds. - -**Configuring SSL parameters** - -Reindex from remote supports configurable SSL settings. -These must be specified in the `elasticsearch.yml` file, with the exception of the secure settings, which you add in the Elasticsearch keystore. -It is not possible to configure SSL in the body of the reindex request. +Refer to the linked documentation for examples of how to reindex documents. [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-reindex) @@ -1886,7 +1749,7 @@ client.searchShards({ ... }) - **`index` (Optional, string \| string[])**: A list of data streams, indices, and aliases to search. It supports wildcards (`*`). To search all data streams and indices, omit this parameter or use `*` or `_all`. - **`allow_no_indices` (Optional, boolean)**: If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`. -- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`. +- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`local` (Optional, boolean)**: If `true`, the request retrieves information from the local node only. - **`master_timeout` (Optional, string \| -1 \| 0)**: The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. IT can also be set to `-1` to indicate that the request should never timeout. @@ -1913,7 +1776,7 @@ client.searchTemplate({ ... 
}) - **`source` (Optional, string \| { aggregations, collapse, explain, ext, from, highlight, track_total_hits, indices_boost, docvalue_fields, knn, rank, min_score, post_filter, profile, query, rescore, retriever, script_fields, search_after, size, slice, sort, _source, fields, suggest, terminate_after, timeout, track_scores, version, seq_no_primary_term, stored_fields, pit, runtime_mappings, stats })**: An inline search template. Supports the same parameters as the search API's request body. It also supports Mustache variables. If no `id` is specified, this parameter is required. - **`allow_no_indices` (Optional, boolean)**: If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`. - **`ccs_minimize_roundtrips` (Optional, boolean)**: If `true`, network round-trips are minimized for cross-cluster search requests. -- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`. +- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. - **`ignore_throttled` (Optional, boolean)**: If `true`, specified concrete, expanded, or aliased indices are not included in the response when throttled. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`preference` (Optional, string)**: The node or shard the operation should be performed on. It is random by default. @@ -1992,6 +1855,7 @@ The information is only retrieved for the shard the requested document resides i The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use `routing` only to hit a particular shard. +Refer to the linked documentation for detailed examples of how to use this API. [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-termvectors) @@ -2177,7 +2041,7 @@ client.updateByQuery({ index }) - **`analyze_wildcard` (Optional, boolean)**: If `true`, wildcard and prefix queries are analyzed. This parameter can be used only when the `q` query string parameter is specified. - **`default_operator` (Optional, Enum("and" \| "or"))**: The default operator for query string query: `AND` or `OR`. This parameter can be used only when the `q` query string parameter is specified. - **`df` (Optional, string)**: The field to use as default where no field prefix is given in the query string. This parameter can be used only when the `q` query string parameter is specified. 
-- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`. +- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`. - **`from` (Optional, number)**: Skips the specified number of documents. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`lenient` (Optional, boolean)**: If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the `q` query string parameter is specified. @@ -3537,6 +3401,7 @@ Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise. +Refer to the linked documentation for examples of how to troubleshoot allocation issues using this API. [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-cluster-allocation-explain) @@ -3849,8 +3714,8 @@ client.cluster.putSettings({ ... }) ### Arguments [_arguments_cluster.put_settings] #### Request (object) [_request_cluster.put_settings] -- **`persistent` (Optional, Record)** -- **`transient` (Optional, Record)** +- **`persistent` (Optional, Record)**: The settings that persist after the cluster restarts. +- **`transient` (Optional, Record)**: The settings that do not persist after the cluster restarts. - **`flat_settings` (Optional, boolean)**: Return settings in flat format (default: false) - **`master_timeout` (Optional, string \| -1 \| 0)**: Explicit operation timeout for connection to master node - **`timeout` (Optional, string \| -1 \| 0)**: Explicit operation timeout @@ -4777,17 +4642,17 @@ count. By default, the request waits for 1 second for the query results. If the query completes during this period, results are returned Otherwise, a query ID is returned that can later be used to retrieve the results. -- **`delimiter` (Optional, string)**: The character to use between values within a CSV row. -It is valid only for the CSV format. -- **`drop_null_columns` (Optional, boolean)**: Indicates whether columns that are entirely `null` will be removed from the `columns` and `values` portion of the results. -If `true`, the response will include an extra section under the name `all_columns` which has the name of all the columns. -- **`format` (Optional, Enum("csv" \| "json" \| "tsv" \| "txt" \| "yaml" \| "cbor" \| "smile" \| "arrow"))**: A short version of the Accept header, for example `json` or `yaml`. 
- **`keep_alive` (Optional, string \| -1 \| 0)**: The period for which the query and its results are stored in the cluster. The default period is five days. When this period expires, the query and its results are deleted, even if the query is still ongoing. If the `keep_on_completion` parameter is false, Elasticsearch only stores async queries that do not complete within the period set by the `wait_for_completion_timeout` parameter, regardless of this value. - **`keep_on_completion` (Optional, boolean)**: Indicates whether the query and its results are stored in the cluster. If false, the query and its results are stored in the cluster only if the request does not complete during the period set by the `wait_for_completion_timeout` parameter. +- **`delimiter` (Optional, string)**: The character to use between values within a CSV row. +It is valid only for the CSV format. +- **`drop_null_columns` (Optional, boolean)**: Indicates whether columns that are entirely `null` will be removed from the `columns` and `values` portion of the results. +If `true`, the response will include an extra section under the name `all_columns` which has the name of all the columns. +- **`format` (Optional, Enum("csv" \| "json" \| "tsv" \| "txt" \| "yaml" \| "cbor" \| "smile" \| "arrow"))**: A short version of the Accept header, for example `json` or `yaml`. ## client.esql.asyncQueryDelete [_esql.async_query_delete] Delete an async ES|QL query. @@ -5464,7 +5329,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`fielddata` (Optional, boolean)**: If `true`, clears the fields cache. Use the `fields` parameter to clear the cache of specific fields only. - **`fields` (Optional, string \| string[])**: List of field names used to limit the `fielddata` parameter. @@ -5574,7 +5438,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -5725,7 +5588,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. 
-Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -5903,7 +5765,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`flat_settings` (Optional, boolean)**: If `true`, returns settings in flat format. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`include_defaults` (Optional, boolean)**: If `true`, return all default settings in the response. @@ -5931,7 +5792,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, requests that include a missing data stream or index in the target indices or data streams return an error. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6054,7 +5914,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`force` (Optional, boolean)**: If `true`, the request forces a flush even if there are no changes to commit to the index. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`wait_if_ongoing` (Optional, boolean)**: If `true`, the flush operation blocks until execution when another flush operation is running. @@ -6186,7 +6045,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. 
-Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6210,7 +6068,6 @@ Supports wildcards (`*`). To target all data streams, omit this parameter or use `*` or `_all`. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of data stream that wildcard patterns can match. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`include_defaults` (Optional, boolean)**: If `true`, return all default settings in the response. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6273,7 +6130,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`include_defaults` (Optional, boolean)**: If `true`, return all default settings in the response. - **`local` (Optional, boolean)**: If `true`, the request retrieves information from the local node only. @@ -6318,7 +6174,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`local` (Optional, boolean)**: If `true`, the request retrieves information from the local node only. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. @@ -6501,7 +6356,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. 
- **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6596,7 +6450,6 @@ When empty, every document in this data stream will be stored indefinitely. that's disabled (enabled: `false`) will have no effect on the data stream. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of data stream that wildcard patterns can match. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `hidden`, `open`, `closed`, `none`. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6679,33 +6532,17 @@ If no response is received before the timeout expires, the request fails and ret ## client.indices.putMapping [_indices.put_mapping] Update field mappings. Add new fields to an existing data stream or index. -You can also use this API to change the search settings of existing fields and add new properties to existing object fields. -For data streams, these changes are applied to all backing indices by default. - -**Add multi-fields to an existing field** - -Multi-fields let you index the same field in different ways. -You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. -WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. -You can populate the new multi-field with the update by query API. - -**Change supported mapping parameters for an existing field** +You can use the update mapping API to: -The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. -For example, you can use the update mapping API to update the `ignore_above` parameter. +- Add a new field to an existing index +- Update mappings for multiple indices in a single request +- Add new properties to an object field +- Enable multi-fields for an existing field +- Update supported mapping parameters +- Change a field's mapping using reindexing +- Rename a field using a field alias -**Change the mapping of an existing field** - -Except for supported mapping parameters, you can't change the mapping or field type of an existing field. -Changing an existing field could invalidate data that's already indexed. - -If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. -If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index. - -**Rename a field** - -Renaming a field would invalidate data already indexed under the old field name. -Instead, add an alias field to create an alternate field name. +Learn how to use the update mapping API with practical examples in the [Update mapping API examples](https://www.elastic.co/docs//manage-data/data-store/mapping/update-mappings-examples) guide. [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-mapping) @@ -6741,7 +6578,6 @@ This behavior applies even if the request targets other open indices. 
- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. @@ -6758,7 +6594,9 @@ To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. - There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: +For performance optimization during bulk indexing, you can disable the refresh interval. +Refer to [disable refresh interval](https://www.elastic.co/docs/deploy-manage/production-guidance/optimize-performance/indexing-speed#disable-refresh-interval) for an example. +There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: ``` { @@ -6802,6 +6640,7 @@ Then roll over the data stream to apply the new analyzer to the stream's write i This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it. +Refer to [updating analyzers on existing indices](https://www.elastic.co/docs/manage-data/data-store/text-analysis/specify-an-analyzer#update-analyzers-on-existing-indices) for step-by-step examples. [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-settings) @@ -6961,7 +6800,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. ## client.indices.reloadSearchAnalyzers [_indices.reload_search_analyzers] @@ -7065,7 +6903,6 @@ options to the `_resolve/cluster` API endpoint that takes no index expression. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. 
-Valid values are: `all`, `open`, `closed`, `hidden`, `none`. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the `_resolve/cluster` API endpoint that takes no index expression. - **`ignore_throttled` (Optional, boolean)**: If true, concrete, expanded, or aliased indices are ignored when frozen. @@ -7102,7 +6939,6 @@ Resources on remote clusters can be specified using the ``:`` syn - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`allow_no_indices` (Optional, boolean)**: If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. @@ -7204,7 +7040,6 @@ This behavior applies even if the request targets other open indices. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. ## client.indices.shardStores [_indices.shard_stores] @@ -7496,7 +7331,6 @@ This parameter can only be used when the `q` query string parameter is specified - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. -Valid values are: `all`, `open`, `closed`, `hidden`, `none`. - **`explain` (Optional, boolean)**: If `true`, the response returns detailed information if an error has occurred. - **`ignore_unavailable` (Optional, boolean)**: If `false`, the request returns an error if it targets a missing or closed index. - **`lenient` (Optional, boolean)**: If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. @@ -7618,6 +7452,24 @@ IMPORTANT: The inference APIs enable you to use certain services, such as built- For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. +The following integrations are available through the inference API. 
You can find the available task types next to the integration name: +* AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) +* Amazon Bedrock (`completion`, `text_embedding`) +* Anthropic (`completion`) +* Azure AI Studio (`completion`, `text_embedding`) +* Azure OpenAI (`completion`, `text_embedding`) +* Cohere (`completion`, `rerank`, `text_embedding`) +* Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) +* ELSER (`sparse_embedding`) +* Google AI Studio (`completion`, `text_embedding`) +* Google Vertex AI (`rerank`, `text_embedding`) +* Hugging Face (`text_embedding`) +* Mistral (`text_embedding`) +* OpenAI (`chat_completion`, `completion`, `text_embedding`) +* VoyageAI (`text_embedding`, `rerank`) +* Watsonx inference integration (`text_embedding`) +* JinaAI (`text_embedding`, `rerank`) + [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put) ```ts @@ -7628,7 +7480,7 @@ client.inference.put({ inference_id }) #### Request (object) [_request_inference.put] - **`inference_id` (string)**: The inference Id -- **`task_type` (Optional, Enum("sparse_embedding" \| "text_embedding" \| "rerank" \| "completion" \| "chat_completion"))**: The task type +- **`task_type` (Optional, Enum("sparse_embedding" \| "text_embedding" \| "rerank" \| "completion" \| "chat_completion"))**: The task type. Refer to the integration list in the API description for the available task types. - **`inference_config` (Optional, { chunking_settings, service, service_settings, task_settings })** ## client.inference.putAlibabacloud [_inference.put_alibabacloud] @@ -7656,7 +7508,7 @@ These settings are specific to the task type you specified. ## client.inference.putAmazonbedrock [_inference.put_amazonbedrock] Create an Amazon Bedrock inference endpoint. -Creates an inference endpoint to perform an inference task with the `amazonbedrock` service. +Create an inference endpoint to perform an inference task with the `amazonbedrock` service. >info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys. @@ -9835,7 +9687,7 @@ Create a datafeed. Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (`frequency`). -If you are concerned about delayed data, you can add a delay (`query_delay') at each interval. +If you are concerned about delayed data, you can add a delay (`query_delay`) at each interval. By default, the datafeed uses the following query: `{"match_all": {"boost": 1}}`. When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had @@ -9953,13 +9805,7 @@ client.ml.putJob({ job_id, analysis_config, data_description }) - **`allow_no_indices` (Optional, boolean)**: If `true`, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the `_all` string or when no indices are specified. 
- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines
-whether wildcard expressions match hidden data streams. Supports a list of values. Valid values are:
-
-* `all`: Match any data stream or index, including hidden ones.
-* `closed`: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
-* `hidden`: Match hidden data streams and hidden indices. Must be combined with `open`, `closed`, or both.
-* `none`: Wildcard patterns are not accepted.
-* `open`: Match open, non-hidden indices. Also matches any non-hidden data stream.
+whether wildcard expressions match hidden data streams. Supports a list of values.
- **`ignore_throttled` (Optional, boolean)**: If `true`, concrete, expanded or aliased indices are ignored when frozen.
- **`ignore_unavailable` (Optional, boolean)**: If `true`, unavailable indices (missing or closed) are ignored.

@@ -10418,13 +10264,7 @@ The maximum value is the value of `index.max_result_window`.
- **`allow_no_indices` (Optional, boolean)**: If `true`, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the `_all` string or when no indices are specified.
- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard patterns can match. If the request can target data streams, this argument determines
-whether wildcard expressions match hidden data streams. Supports a list of values. Valid values are:
-
-* `all`: Match any data stream or index, including hidden ones.
-* `closed`: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
-* `hidden`: Match hidden data streams and hidden indices. Must be combined with `open`, `closed`, or both.
-* `none`: Wildcard patterns are not accepted.
-* `open`: Match open, non-hidden indices. Also matches any non-hidden data stream.
+whether wildcard expressions match hidden data streams. Supports a list of values.
- **`ignore_throttled` (Optional, boolean)**: If `true`, concrete, expanded or aliased indices are ignored when frozen.
- **`ignore_unavailable` (Optional, boolean)**: If `true`, unavailable indices (missing or closed) are ignored.

@@ -15148,6 +14988,7 @@ The reason for this behavior is to prevent overwriting the watch status from a w

Acknowledging an action throttles further executions of that action until its `ack.state` is reset to `awaits_successful_execution`.
This happens when the condition of the watch is not met (the condition evaluates to false).
+For a demonstration of how throttling works in practice and how it can be configured for individual actions within a watch, refer to the external documentation.
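By way of illustration, acknowledging a single action on a watch might look like the following sketch (the watch and action IDs are hypothetical):

```ts
// `client` is an instance of @elastic/elasticsearch Client.
// Acknowledge only the 'email_admin' action of this watch; further
// executions of that action are throttled until its ack.state returns
// to 'awaits_successful_execution'.
const response = await client.watcher.ackWatch({
  watch_id: 'cluster_health_watch',
  action_id: 'email_admin'
})
console.log(response.status.actions)
```

Omitting `action_id` acknowledges all of the watch's actions at once.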
[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-watcher-ack-watch) diff --git a/src/api/api/bulk.ts b/src/api/api/bulk.ts index 55da8bdcd..192e5c505 100644 --- a/src/api/api/bulk.ts +++ b/src/api/api/bulk.ts @@ -54,7 +54,7 @@ const acceptedParams: Record (this: That, params: T.BulkRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/cluster.ts b/src/api/api/cluster.ts index 2ce1d1eca..a587a6c92 100644 --- a/src/api/api/cluster.ts +++ b/src/api/api/cluster.ts @@ -220,7 +220,7 @@ export default class Cluster { } /** - * Explain the shard allocations. Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise. + * Explain the shard allocations. Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise. Refer to the linked documentation for examples of how to troubleshoot allocation issues using this API. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-cluster-allocation-explain | Elasticsearch API documentation} */ async allocationExplain (this: That, params?: T.ClusterAllocationExplainRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/esql.ts b/src/api/api/esql.ts index 09aa54957..f2dd81d62 100644 --- a/src/api/api/esql.ts +++ b/src/api/api/esql.ts @@ -46,15 +46,14 @@ export default class Esql { 'query', 'tables', 'include_ccs_metadata', - 'wait_for_completion_timeout' + 'wait_for_completion_timeout', + 'keep_alive', + 'keep_on_completion' ], query: [ 'delimiter', 'drop_null_columns', - 'format', - 'keep_alive', - 'keep_on_completion', - 'wait_for_completion_timeout' + 'format' ] }, 'esql.async_query_delete': { diff --git a/src/api/api/indices.ts b/src/api/api/indices.ts index d393b0d32..7d174f272 100644 --- a/src/api/api/indices.ts +++ b/src/api/api/indices.ts @@ -3108,7 +3108,7 @@ export default class Indices { } /** - * Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default. **Add multi-fields to an existing field** Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API. 
**Change supported mapping parameters for an existing field** The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the `ignore_above` parameter. **Change the mapping of an existing field** Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed. If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index. **Rename a field** Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name. + * Update field mappings. Add new fields to an existing data stream or index. You can use the update mapping API to: - Add a new field to an existing index - Update mappings for multiple indices in a single request - Add new properties to an object field - Enable multi-fields for an existing field - Update supported mapping parameters - Change a field's mapping using reindexing - Rename a field using a field alias Learn how to use the update mapping API with practical examples in the [Update mapping API examples](https://www.elastic.co/docs//manage-data/data-store/mapping/update-mappings-examples) guide. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-mapping | Elasticsearch API documentation} */ async putMapping (this: That, params: T.IndicesPutMappingRequest, options?: TransportRequestOptionsWithOutMeta): Promise @@ -3165,7 +3165,7 @@ export default class Indices { } /** - * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: ``` { "number_of_replicas": 1 } ``` Or you can use an `index` setting object: ``` { "index": { "number_of_replicas": 1 } } ``` Or you can use dot annotation: ``` { "index.number_of_replicas": 1 } ``` Or you can embed any of the aforementioned options in a `settings` object. For example: ``` { "settings": { "index": { "number_of_replicas": 1 } } } ``` NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it. 
+ * Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default. To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the `preserve_existing` parameter to `true`. For performance optimization during bulk indexing, you can disable the refresh interval. Refer to [disable refresh interval](https://www.elastic.co/docs/deploy-manage/production-guidance/optimize-performance/indexing-speed#disable-refresh-interval) for an example. There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example: ``` { "number_of_replicas": 1 } ``` Or you can use an `index` setting object: ``` { "index": { "number_of_replicas": 1 } } ``` Or you can use dot annotation: ``` { "index.number_of_replicas": 1 } ``` Or you can embed any of the aforementioned options in a `settings` object. For example: ``` { "settings": { "index": { "number_of_replicas": 1 } } } ``` NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it. Refer to [updating analyzers on existing indices](https://www.elastic.co/docs/manage-data/data-store/text-analysis/specify-an-analyzer#update-analyzers-on-existing-indices) for step-by-step examples. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-settings | Elasticsearch API documentation} */ async putSettings (this: That, params: T.IndicesPutSettingsRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/inference.ts b/src/api/api/inference.ts index 4d2c76536..d1963f322 100644 --- a/src/api/api/inference.ts +++ b/src/api/api/inference.ts @@ -643,7 +643,7 @@ export default class Inference { } /** - * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. + * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. 
For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`text_embedding`) * Mistral (`text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`) * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put | Elasticsearch API documentation} */ async put (this: That, params: T.InferencePutRequest, options?: TransportRequestOptionsWithOutMeta): Promise @@ -756,7 +756,7 @@ export default class Inference { } /** - * Create an Amazon Bedrock inference endpoint. Creates an inference endpoint to perform an inference task with the `amazonbedrock` service. >info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys. + * Create an Amazon Bedrock inference endpoint. Create an inference endpoint to perform an inference task with the `amazonbedrock` service. >info > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put-amazonbedrock | Elasticsearch API documentation} */ async putAmazonbedrock (this: That, params: T.InferencePutAmazonbedrockRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/ml.ts b/src/api/api/ml.ts index d4ef76e64..a1d13bebe 100644 --- a/src/api/api/ml.ts +++ b/src/api/api/ml.ts @@ -3540,7 +3540,7 @@ export default class Ml { } /** - * Create a datafeed. Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (`frequency`). If you are concerned about delayed data, you can add a delay (`query_delay') at each interval. By default, the datafeed uses the following query: `{"match_all": {"boost": 1}}`. 
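As a hedged sketch of the endpoint creation described above, assuming the `elasticsearch` service, an existing `client` instance, and an illustrative inference ID (the required service settings vary by integration):

```ts
// Create a text_embedding endpoint backed by the built-in E5 model.
await client.inference.put({
  task_type: 'text_embedding',
  inference_id: 'my-e5-endpoint',
  inference_config: {
    service: 'elasticsearch',
    service_settings: {
      model_id: '.multilingual-e5-small',
      num_allocations: 1,
      num_threads: 1
    }
  }
})
```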
When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the `.ml-config` index. Do not give users `write` privileges on the `.ml-config` index. + * Create a datafeed. Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (`frequency`). If you are concerned about delayed data, you can add a delay (`query_delay`) at each interval. By default, the datafeed uses the following query: `{"match_all": {"boost": 1}}`. When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the `.ml-config` index. Do not give users `write` privileges on the `.ml-config` index. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-ml-put-datafeed | Elasticsearch API documentation} */ async putDatafeed (this: That, params: T.MlPutDatafeedRequest, options?: TransportRequestOptionsWithOutMeta): Promise @@ -4826,7 +4826,7 @@ export default class Ml { /** * Validate an anomaly detection job. - * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/ | Elasticsearch API documentation} + * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch | Elasticsearch API documentation} */ async validateDetector (this: That, params: T.MlValidateDetectorRequest, options?: TransportRequestOptionsWithOutMeta): Promise async validateDetector (this: That, params: T.MlValidateDetectorRequest, options?: TransportRequestOptionsWithMeta): Promise> diff --git a/src/api/api/monitoring.ts b/src/api/api/monitoring.ts index d6114727e..8974e0c87 100644 --- a/src/api/api/monitoring.ts +++ b/src/api/api/monitoring.ts @@ -53,7 +53,7 @@ export default class Monitoring { /** * Send monitoring data. This API is used by the monitoring features to send monitoring data. - * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/ | Elasticsearch API documentation} + * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch | Elasticsearch API documentation} */ async bulk (this: That, params: T.MonitoringBulkRequest, options?: TransportRequestOptionsWithOutMeta): Promise async bulk (this: That, params: T.MonitoringBulkRequest, options?: TransportRequestOptionsWithMeta): Promise> diff --git a/src/api/api/reindex.ts b/src/api/api/reindex.ts index ccda1c795..02c0075a4 100644 --- a/src/api/api/reindex.ts +++ b/src/api/api/reindex.ts @@ -53,7 +53,7 @@ const acceptedParams: Record`. **Reindex from multiple sources** If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. That way you can resume the process if there are any errors by removing the partially completed source and starting over. 
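A minimal sketch of the datafeed creation described above (the job ID, datafeed ID, and index pattern are illustrative), showing the `frequency` and `query_delay` parameters mentioned in the comment:

```ts
await client.ml.putDatafeed({
  datafeed_id: 'datafeed-my-job',
  job_id: 'my-job',
  indices: ['my-metrics-*'],
  query: { match_all: { boost: 1 } }, // the documented default query
  frequency: '150s',  // how often the query runs
  query_delay: '60s'  // buffer for delayed data
})
```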
It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel. For example, you can use a bash script like this: ``` for index in i1 i2 i3 i4 i5; do curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{ "source": { "index": "'$index'" }, "dest": { "index": "'$index'-reindexed" } }' done ``` **Throttling** Set `requests_per_second` to any positive decimal number (`1.4`, `6`, `1000`, for example) to throttle the rate at which reindex issues batches of index operations. Requests are throttled by padding each batch with a wait time. To turn off throttling, set `requests_per_second` to `-1`. The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`: ``` target_time = 1000 / 500 per second = 2 seconds wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds ``` Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth". **Slicing** Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts. NOTE: Reindexing from remote clusters does not support manual or automatic slicing. You can slice a reindex request manually by providing a slice ID and total number of slices to each request. You can also let reindex automatically parallelize by using sliced scroll to slice on `_id`. The `slices` parameter specifies the number of slices to use. Adding `slices` to the reindex request just automates the manual process, creating sub-requests which means it has some quirks: * You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with slices. * Fetching the status of the task for the request with `slices` only contains the status of completed slices. * These sub-requests are individually addressable for things like cancellation and rethrottling. * Rethrottling the request with `slices` will rethrottle the unfinished sub-request proportionally. * Canceling the request with `slices` will cancel each sub-request. * Due to the nature of `slices`, each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution. * Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. Combine that with the previous point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being reindexed. * Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time. If slicing automatically, setting `slices` to `auto` will choose a reasonable number for most indices. If slicing manually or otherwise tuning automatic slicing, use the following guidelines. Query performance is most efficient when the number of slices is equal to the number of shards in the index. 
If that number is large (for example, `500`), choose a lower number as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead. Indexing performance scales linearly across available resources with the number of slices. Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources. **Modify documents during reindexing** Like `_update_by_query`, reindex operations support a script that modifies the document. Unlike `_update_by_query`, the script is allowed to modify the document's metadata. Just as in `_update_by_query`, you can set `ctx.op` to change the operation that is run on the destination. For example, set `ctx.op` to `noop` if your script decides that the document doesn’t have to be indexed in the destination. This "no operation" will be reported in the `noop` counter in the response body. Set `ctx.op` to `delete` if your script decides that the document must be deleted from the destination. The deletion will be reported in the `deleted` counter in the response body. Setting `ctx.op` to anything else will return an error, as will setting any other field in `ctx`. Think of the possibilities! Just be careful; you are able to change: * `_id` * `_index` * `_version` * `_routing` Setting `_version` to `null` or clearing it from the `ctx` map is just like not sending the version in an indexing request. It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API. **Reindex from remote** Reindex supports reindexing from a remote Elasticsearch cluster. The `host` parameter must contain a scheme, host, port, and optional path. The `username` and `password` parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication. Be sure to use HTTPS when using basic authentication or the password will be sent in plain text. There are a range of settings available to configure the behavior of the HTTPS connection. When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key. Remote hosts must be explicitly allowed with the `reindex.remote.whitelist` setting. It can be set to a comma delimited list of allowed remote host and port combinations. Scheme is ignored; only the host and port are used. For example: ``` reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*"] ``` The list of allowed hosts must be configured on any nodes that will coordinate the reindex. This feature should work with remote clusters of any version of Elasticsearch. This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version. WARNING: Elasticsearch does not support forward compatibility across major versions. For example, you cannot reindex from a 7.x cluster into a 6.x cluster. To enable queries sent to older versions of Elasticsearch, the `query` parameter is sent directly to the remote host without validation or modification. NOTE: Reindexing from remote clusters does not support manual or automatic slicing. Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb. If the remote index includes very large documents you'll need to use a smaller batch size. 
It is also possible to set the socket read timeout on the remote connection with the `socket_timeout` field and the connection timeout with the `connect_timeout` field. Both default to 30 seconds. **Configuring SSL parameters** Reindex from remote supports configurable SSL settings. These must be specified in the `elasticsearch.yml` file, with the exception of the secure settings, which you add in the Elasticsearch keystore. It is not possible to configure SSL in the body of the reindex request. + * Reindex documents. Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself. IMPORTANT: Reindex requires `_source` to be enabled for all documents in the source. The destination should be configured as needed before calling the reindex API. Reindex does not copy the settings from the source or its associated template. Mappings, shard counts, and replicas, for example, must be configured ahead of time. If the Elasticsearch security features are enabled, you must have the following security privileges: * The `read` index privilege for the source data stream, index, or alias. * The `write` index privilege for the destination data stream, index, or index alias. * To automatically create a data stream or index with a reindex API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege for the destination data stream, index, or alias. * If reindexing from a remote cluster, the `source.remote.user` must have the `monitor` cluster privilege and the `read` index privilege for the source data stream, index, or alias. If reindexing from a remote cluster, you must explicitly allow the remote host in the `reindex.remote.whitelist` setting. Automatic data stream creation requires a matching index template with data stream enabled. The `dest` element can be configured like the index API to control optimistic concurrency control. Omitting `version_type` or setting it to `internal` causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID. Setting `version_type` to `external` causes Elasticsearch to preserve the `version` from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source. Setting `op_type` to `create` causes the reindex API to create only missing documents in the destination. All existing documents will cause a version conflict. IMPORTANT: Because data streams are append-only, any reindex request to a destination data stream must have an `op_type` of `create`. A reindex can only add new documents to a destination data stream. It cannot update existing documents in a destination data stream. By default, version conflicts abort the reindex process. To continue reindexing if there are conflicts, set the `conflicts` request body property to `proceed`. In this case, the response includes a count of the version conflicts that were encountered. Note that the handling of other error types is unaffected by the `conflicts` property.
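For illustration, a sketch of the conflict handling just described, assuming an existing `client` instance and illustrative index names: preserve source versions with `version_type: 'external'` and count conflicts rather than aborting with `conflicts: 'proceed'`.

```ts
const response = await client.reindex({
  source: { index: 'my-source' },
  dest: {
    index: 'my-dest',
    version_type: 'external' // preserve the source `version`
  },
  conflicts: 'proceed' // count version conflicts instead of aborting
})
console.log(response.version_conflicts)
```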
Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than `max_docs` until it has successfully indexed `max_docs` documents into the target or it has gone through every document in the source query. Refer to the linked documentation for examples of how to reindex documents. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-reindex | Elasticsearch API documentation} */ export default async function ReindexApi (this: That, params: T.ReindexRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/termvectors.ts b/src/api/api/termvectors.ts index 0e3205a86..68ac7d3cf 100644 --- a/src/api/api/termvectors.ts +++ b/src/api/api/termvectors.ts @@ -65,7 +65,7 @@ const acceptedParams: Record warn > Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16. **Behaviour** The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use `routing` only to hit a particular shard. + * Get term vector information. Get information and statistics about terms in the fields of a particular document. You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the `fields` parameter or by adding the fields to the request body. For example: ``` GET /my-index-000001/_termvectors/1?fields=message ``` Fields can be specified using wildcards, similar to the multi match query. Term vectors are real-time by default, not near real-time. This can be changed by setting the `realtime` parameter to `false`. You can request three types of values: _term information_, _term statistics_, and _field statistics_. By default, all term information and field statistics are returned for all fields but term statistics are excluded. **Term information** * term frequency in the field (always returned) * term positions (`positions: true`) * start and end offsets (`offsets: true`) * term payloads (`payloads: true`), as base64 encoded bytes If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors could be computed for documents not even existing in the index, but instead provided by the user. > warn > Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16. **Behaviour** The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context.
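A minimal sketch of such a term vectors request with this client (the index, document ID, and field name are illustrative); `term_statistics` must be requested explicitly because it is excluded by default:

```ts
const tv = await client.termvectors({
  index: 'my-index-000001',
  id: '1',
  fields: ['message'],
  term_statistics: true, // excluded by default
  positions: true,
  offsets: true
})
```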
By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use `routing` only to hit a particular shard. Refer to the linked documentation for detailed examples of how to use this API. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-termvectors | Elasticsearch API documentation} */ export default async function TermvectorsApi (this: That, params: T.TermvectorsRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/api/watcher.ts b/src/api/api/watcher.ts index a4278f78f..3047475a7 100644 --- a/src/api/api/watcher.ts +++ b/src/api/api/watcher.ts @@ -166,7 +166,7 @@ export default class Watcher { } /** - * Acknowledge a watch. Acknowledging a watch enables you to manually throttle the execution of the watch's actions. The acknowledgement state of an action is stored in the `status.actions..ack.state` structure. IMPORTANT: If the specified watch is currently being executed, this API will return an error The reason for this behavior is to prevent overwriting the watch status from a watch execution. Acknowledging an action throttles further executions of that action until its `ack.state` is reset to `awaits_successful_execution`. This happens when the condition of the watch is not met (the condition evaluates to false). + * Acknowledge a watch. Acknowledging a watch enables you to manually throttle the execution of the watch's actions. The acknowledgement state of an action is stored in the `status.actions..ack.state` structure. IMPORTANT: If the specified watch is currently being executed, this API will return an error. The reason for this behavior is to prevent overwriting the watch status from a watch execution. Acknowledging an action throttles further executions of that action until its `ack.state` is reset to `awaits_successful_execution`. This happens when the condition of the watch is not met (the condition evaluates to false). To demonstrate how throttling works in practice and how it can be configured for individual actions within a watch, refer to the external documentation. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-watcher-ack-watch | Elasticsearch API documentation} */ async ackWatch (this: That, params: T.WatcherAckWatchRequest, options?: TransportRequestOptionsWithOutMeta): Promise diff --git a/src/api/types.ts b/src/api/types.ts index 7ecf44d42..689de1986 100644 --- a/src/api/types.ts +++ b/src/api/types.ts @@ -1510,7 +1510,7 @@ export interface OpenPointInTimeRequest extends RequestBase { routing?: Routing /** The type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * It supports comma-separated values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * It supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** Indicates whether the point in time tolerates unavailable shards or shard failures when initially creating the PIT. * If `false`, creating a point in time request when a shard is missing or unavailable will throw an exception. * If `true`, the point in time will contain all the shards that are available at the time of the request. @@ -3195,8 +3195,7 @@ export interface SearchShardsRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
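A short sketch of the acknowledgement flow described above (the watch and action IDs are illustrative):

```ts
// Throttle further executions of one action until its `ack.state`
// resets to `awaits_successful_execution`.
await client.watcher.ackWatch({
  watch_id: 'my-watch',
  action_id: 'email_admin'
})
```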
- * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -3256,8 +3255,7 @@ export interface SearchTemplateRequest extends RequestBase { ccs_minimize_roundtrips?: boolean /** The type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, specified concrete, expanded, or aliased indices are not included in the response when throttled. */ ignore_throttled?: boolean @@ -3554,8 +3552,7 @@ export interface UpdateByQueryRequest extends RequestBase { df?: string /** The type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * It supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * It supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** Skips the specified number of documents. */ from?: long @@ -15704,8 +15701,11 @@ export interface ClusterGetSettingsRequest extends RequestBase { } export interface ClusterGetSettingsResponse { + /** The settings that persist after the cluster restarts. */ persistent: Record + /** The settings that do not persist after the cluster restarts. */ transient: Record + /** The default setting values. */ defaults?: Record } @@ -15919,7 +15919,9 @@ export interface ClusterPutSettingsRequest extends RequestBase { master_timeout?: Duration /** Explicit operation timeout */ timeout?: Duration + /** The settings that persist after the cluster restarts. */ persistent?: Record + /** The settings that do not persist after the cluster restarts. */ transient?: Record /** All values in `body` will be added to the request body. */ body?: string | { [key: string]: any } & { flat_settings?: never, master_timeout?: never, timeout?: never, persistent?: never, transient?: never } @@ -17643,6 +17645,7 @@ export interface EsqlEsqlClusterDetails { indices: string took?: DurationValue _shards?: EsqlEsqlShardInfo + failures?: EsqlEsqlShardFailure[] } export interface EsqlEsqlClusterInfo { @@ -17679,8 +17682,8 @@ export interface EsqlEsqlResult { } export interface EsqlEsqlShardFailure { - shard: Id - index: IndexName + shard: integer + index: IndexName | null node?: NodeId reason: ErrorCause } @@ -17690,7 +17693,6 @@ export interface EsqlEsqlShardInfo { successful?: integer skipped?: integer failed?: integer - failures?: EsqlEsqlShardFailure[] } export interface EsqlTableValuesContainer { @@ -17717,14 +17719,6 @@ export interface EsqlAsyncQueryRequest extends RequestBase { drop_null_columns?: boolean /** A short version of the Accept header, for example `json` or `yaml`. */ format?: EsqlEsqlFormat - /** The period for which the query and its results are stored in the cluster. - * The default period is five days. - * When this period expires, the query and its results are deleted, even if the query is still ongoing. 
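Given the `persistent`/`transient` distinction documented in the cluster settings hunk above, a brief sketch (the setting shown is illustrative):

```ts
// Persistent settings survive a full cluster restart; transient ones do not.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': '50mb' }
})
```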
- * If the `keep_on_completion` parameter is false, Elasticsearch only stores async queries that do not complete within the period set by the `wait_for_completion_timeout` parameter, regardless of this value. */ - keep_alive?: Duration - /** Indicates whether the query and its results are stored in the cluster. - * If false, the query and its results are stored in the cluster only if the request does not complete during the period set by the `wait_for_completion_timeout` parameter. */ - keep_on_completion?: boolean /** By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results. */ columnar?: boolean /** Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on. */ @@ -17751,10 +17745,18 @@ export interface EsqlAsyncQueryRequest extends RequestBase { * If the query completes during this period, results are returned * Otherwise, a query ID is returned that can later be used to retrieve the results. */ wait_for_completion_timeout?: Duration + /** The period for which the query and its results are stored in the cluster. + * The default period is five days. + * When this period expires, the query and its results are deleted, even if the query is still ongoing. + * If the `keep_on_completion` parameter is false, Elasticsearch only stores async queries that do not complete within the period set by the `wait_for_completion_timeout` parameter, regardless of this value. */ + keep_alive?: Duration + /** Indicates whether the query and its results are stored in the cluster. + * If false, the query and its results are stored in the cluster only if the request does not complete during the period set by the `wait_for_completion_timeout` parameter. */ + keep_on_completion?: boolean /** All values in `body` will be added to the request body. */ - body?: string | { [key: string]: any } & { delimiter?: never, drop_null_columns?: never, format?: never, keep_alive?: never, keep_on_completion?: never, columnar?: never, filter?: never, locale?: never, params?: never, profile?: never, query?: never, tables?: never, include_ccs_metadata?: never, wait_for_completion_timeout?: never } + body?: string | { [key: string]: any } & { delimiter?: never, drop_null_columns?: never, format?: never, columnar?: never, filter?: never, locale?: never, params?: never, profile?: never, query?: never, tables?: never, include_ccs_metadata?: never, wait_for_completion_timeout?: never, keep_alive?: never, keep_on_completion?: never } /** All values in `querystring` will be added to the request querystring. 
*/ - querystring?: { [key: string]: any } & { delimiter?: never, drop_null_columns?: never, format?: never, keep_alive?: never, keep_on_completion?: never, columnar?: never, filter?: never, locale?: never, params?: never, profile?: never, query?: never, tables?: never, include_ccs_metadata?: never, wait_for_completion_timeout?: never } + querystring?: { [key: string]: any } & { delimiter?: never, drop_null_columns?: never, format?: never, columnar?: never, filter?: never, locale?: never, params?: never, profile?: never, query?: never, tables?: never, include_ccs_metadata?: never, wait_for_completion_timeout?: never, keep_alive?: never, keep_on_completion?: never } } export type EsqlAsyncQueryResponse = EsqlAsyncEsqlResult @@ -18776,7 +18778,8 @@ export interface IndicesIndexSettingsKeys { max_shingle_diff?: integer blocks?: IndicesIndexSettingBlocks max_refresh_listeners?: integer - /** Settings to define analyzers, tokenizers, token filters and character filters. */ + /** Settings to define analyzers, tokenizers, token filters and character filters. + * Refer to the linked documentation for step-by-step examples of updating analyzers on existing indices. */ analyze?: IndicesSettingsAnalyze highlight?: IndicesSettingsHighlight max_terms_count?: integer @@ -19349,8 +19352,7 @@ export interface IndicesClearCacheRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, clears the fields cache. * Use the `fields` parameter to clear the cache of specific fields only. */ @@ -19418,8 +19420,7 @@ export interface IndicesCloseRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -19592,8 +19593,7 @@ export interface IndicesDeleteRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -19747,8 +19747,7 @@ export interface IndicesExistsRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. 
- * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, returns settings in flat format. */ flat_settings?: boolean @@ -19777,8 +19776,7 @@ export interface IndicesExistsAliasRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, requests that include a missing data stream or index in the target indices or data streams return an error. */ ignore_unavailable?: boolean @@ -19935,8 +19933,7 @@ export interface IndicesFlushRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, the request forces a flush even if there are no changes to commit to the index. */ force?: boolean @@ -20038,8 +20035,7 @@ export interface IndicesGetAliasRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20065,8 +20061,7 @@ export interface IndicesGetDataLifecycleRequest extends RequestBase { * To target all data streams, omit this parameter or use `*` or `_all`. */ name: DataStreamNames /** Type of data stream that wildcard patterns can match. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, return all default settings in the response. */ include_defaults?: boolean @@ -20146,8 +20141,7 @@ export interface IndicesGetFieldMappingRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20208,8 +20202,7 @@ export interface IndicesGetMappingRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. 
* If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20397,8 +20390,7 @@ export interface IndicesOpenRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20482,8 +20474,7 @@ export interface IndicesPutDataLifecycleRequest extends RequestBase { * To target all data streams use `*` or `_all`. */ name: DataStreamNames /** Type of data stream that wildcard patterns can match. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `hidden`, `open`, `closed`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** Period to wait for a connection to the master node. If no response is * received before the timeout expires, the request fails and returns an @@ -20587,8 +20578,7 @@ export interface IndicesPutMappingRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20843,8 +20833,7 @@ export interface IndicesRefreshRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -20902,7 +20891,6 @@ export interface IndicesResolveClusterRequest extends RequestBase { /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. * NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index * options to the `_resolve/cluster` API endpoint that takes no index expression. 
*/ expand_wildcards?: ExpandWildcards @@ -20952,8 +20940,7 @@ export interface IndicesResolveIndexRequest extends RequestBase { name: Names /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -21075,8 +21062,7 @@ export interface IndicesSegmentsRequest extends RequestBase { allow_no_indices?: boolean /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `false`, the request returns an error if it targets a missing or closed index. */ ignore_unavailable?: boolean @@ -21630,8 +21616,7 @@ export interface IndicesValidateQueryRequest extends RequestBase { df?: string /** Type of index that wildcard patterns can match. * If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. - * Supports comma-separated values, such as `open,hidden`. - * Valid values are: `all`, `open`, `closed`, `hidden`, `none`. */ + * Supports comma-separated values, such as `open,hidden`. */ expand_wildcards?: ExpandWildcards /** If `true`, the response returns detailed information if an error has occurred. */ explain?: boolean @@ -22349,13 +22334,47 @@ export type InferenceJinaAITaskType = 'rerank' | 'text_embedding' export type InferenceJinaAITextEmbeddingTask = 'classification' | 'clustering' | 'ingest' | 'search' export interface InferenceMessage { - /** The content of the message. */ + /** The content of the message. + * + * String example: + * ``` + * { + * "content": "Some string" + * } + * ``` + * + * Object example: + * ``` + * { + * "content": [ + * { + * "text": "Some text", + * "type": "text" + * } + * ] + * } + * ``` */ content?: InferenceMessageContent - /** The role of the message author. */ + /** The role of the message author. Valid values are `user`, `assistant`, `system`, and `tool`. */ role: string - /** The tool call that this message is responding to. */ + /** Only for `tool` role messages. The tool call that this message is responding to. */ tool_call_id?: Id - /** The tool calls generated by the model. */ + /** Only for `assistant` role messages. The tool calls generated by the model. If it's specified, the `content` field is optional. + * Example: + * ``` + * { + * "tool_calls": [ + * { + * "id": "call_KcAjWtAww20AihPHphUh46Gd", + * "type": "function", + * "function": { + * "name": "get_current_weather", + * "arguments": "{\"location\":\"Boston, MA\"}" + * } + * } + * ] + * } + * ``` */ tool_calls?: InferenceToolCall[] } @@ -22430,7 +22449,25 @@ export interface InferenceRankedDocument { } export interface InferenceRateLimitSetting { - /** The number of requests allowed per minute. */ + /** The number of requests allowed per minute. 
+ * By default, the number of requests allowed per minute is set by each service as follows: + * + * * `alibabacloud-ai-search` service: `1000` + * * `anthropic` service: `50` + * * `azureaistudio` service: `240` + * * `azureopenai` service and task type `text_embedding`: `1440` + * * `azureopenai` service and task type `completion`: `120` + * * `cohere` service: `10000` + * * `elastic` service and task type `chat_completion`: `240` + * * `googleaistudio` service: `360` + * * `googlevertexai` service: `30000` + * * `hugging_face` service: `3000` + * * `jinaai` service: `2000` + * * `mistral` service: `240` + * * `openai` service and task type `text_embedding`: `3000` + * * `openai` service and task type `completion`: `500` + * * `voyageai` service: `2000` + * * `watsonxai` service: `120` */ requests_per_minute?: integer } @@ -22447,9 +22484,46 @@ export interface InferenceRequestChatCompletion { stop?: string[] /** The sampling temperature to use. */ temperature?: float - /** Controls which tool is called by the model. */ + /** Controls which tool is called by the model. + * String representation: One of `auto`, `none`, or `required`. `auto` allows the model to choose between calling tools and generating a message. `none` causes the model to not call any tools. `required` forces the model to call one or more tools. + * Example (object representation): + * ``` + * { + * "tool_choice": { + * "type": "function", + * "function": { + * "name": "get_current_weather" + * } + * } + * } + * ``` */ tool_choice?: InferenceCompletionToolType - /** A list of tools that the model can call. */ + /** A list of tools that the model can call. + * Example: + * ``` + * { + * "tools": [ + * { + * "type": "function", + * "function": { + * "name": "get_price_of_item", + * "description": "Get the current price of an item", + * "parameters": { + * "type": "object", + * "properties": { + * "item": { + * "id": "12345" + * }, + * "unit": { + * "type": "currency" + * } + * } + * } + * } + * } + * ] + * } + * ``` */ tools?: InferenceCompletionTool[] /** Nucleus sampling, an alternative to sampling with temperature. */ top_p?: float @@ -22698,7 +22772,7 @@ export interface InferenceInferenceRequest extends RequestBase { export type InferenceInferenceResponse = InferenceInferenceResult export interface InferencePutRequest extends RequestBase { - /** The task type */ + /** The task type. Refer to the integration list in the API description for the available task types. */ task_type?: InferenceTaskType /** The inference Id */ inference_id: Id @@ -28417,13 +28491,7 @@ export interface MlPutJobRequest extends RequestBase { * `_all` string or when no indices are specified. */ allow_no_indices?: boolean /** Type of index that wildcard patterns can match. If the request can target data streams, this argument determines - * whether wildcard expressions match hidden data streams. Supports comma-separated values. Valid values are: - * - * * `all`: Match any data stream or index, including hidden ones. - * * `closed`: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed. - * * `hidden`: Match hidden data streams and hidden indices. Must be combined with `open`, `closed`, or both. - * * `none`: Wildcard patterns are not accepted. - * * `open`: Match open, non-hidden indices. Also matches any non-hidden data stream. */ + * whether wildcard expressions match hidden data streams. Supports comma-separated values.
*/ expand_wildcards?: ExpandWildcards /** If `true`, concrete, expanded or aliased indices are ignored when frozen. */ ignore_throttled?: boolean @@ -28954,13 +29022,7 @@ export interface MlUpdateDatafeedRequest extends RequestBase { * `_all` string or when no indices are specified. */ allow_no_indices?: boolean /** Type of index that wildcard patterns can match. If the request can target data streams, this argument determines - * whether wildcard expressions match hidden data streams. Supports comma-separated values. Valid values are: - * - * * `all`: Match any data stream or index, including hidden ones. - * * `closed`: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed. - * * `hidden`: Match hidden data streams and hidden indices. Must be combined with `open`, `closed`, or both. - * * `none`: Wildcard patterns are not accepted. - * * `open`: Match open, non-hidden indices. Also matches any non-hidden data stream. */ + * whether wildcard expressions match hidden data streams. Supports comma-separated values. */ expand_wildcards?: ExpandWildcards /** If `true`, concrete, expanded or aliased indices are ignored when frozen. */ ignore_throttled?: boolean @@ -33835,6 +33897,14 @@ export interface SlmSnapshotLifecycle { stats: SlmStatistics } +export interface SlmSnapshotPolicyStats { + policy: string + snapshots_taken: long + snapshots_failed: long + snapshots_deleted: long + snapshot_deletion_failures: long +} + export interface SlmStatistics { retention_deletion_time?: Duration retention_deletion_time_millis?: DurationValue @@ -33945,7 +34015,7 @@ export interface SlmGetStatsResponse { total_snapshot_deletion_failures: long total_snapshots_failed: long total_snapshots_taken: long - policy_stats: string[] + policy_stats: SlmSnapshotPolicyStats[] } export interface SlmGetStatusRequest extends RequestBase {
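Reflecting the corrected `policy_stats` type above (a structured array rather than a list of strings), a usage sketch assuming an existing `client` instance:

```ts
const stats = await client.slm.getStats()
for (const p of stats.policy_stats) {
  console.log(`${p.policy}: ${p.snapshots_taken} taken, ${p.snapshots_failed} failed`)
}
```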