diff --git a/docs/reference/api-reference.md b/docs/reference/api-reference.md index 0b3b01cfc..7cf7fac38 100644 --- a/docs/reference/api-reference.md +++ b/docs/reference/api-reference.md @@ -1506,7 +1506,7 @@ client.search({ ... }) #### Request (object) [_request_search] - **`index` (Optional, string \| string[])**: A list of data streams, indices, and aliases to search. It supports wildcards (`*`). To search all data streams and indices, omit this parameter or use `*` or `_all`. -- **`aggregations` (Optional, Record)**: Defines the aggregations that are run as part of the search request. +- **`aggregations` (Optional, Record)**: Defines the aggregations that are run as part of the search request. - **`collapse` (Optional, { field, inner_hits, max_concurrent_group_searches, collapse })**: Collapses search results the values of the specified field. - **`explain` (Optional, boolean)**: If `true`, the request returns detailed information about score computation as part of a hit. - **`ext` (Optional, Record)**: Configuration of search extensions defined by Elasticsearch plugins. @@ -1678,7 +1678,7 @@ client.searchMvt({ index, field, zoom, x, y }) - **`zoom` (number)**: Zoom level for the vector tile to search - **`x` (number)**: X coordinate for the vector tile to search - **`y` (number)**: Y coordinate for the vector tile to search -- **`aggs` (Optional, Record)**: Sub-aggregations for the geotile_grid. It supports the following aggregation types: - `avg` - `boxplot` - `cardinality` - `extended stats` - `max` - `median absolute deviation` - `min` - `percentile` - `percentile-rank` - `stats` - `sum` - `value count` The aggregation names can't start with `_mvt_`. The `_mvt_` prefix is reserved for internal aggregations. +- **`aggs` (Optional, Record)**: Sub-aggregations for the geotile_grid. 
It supports the following aggregation types: - `avg` - `boxplot` - `cardinality` - `extended stats` - `max` - `median absolute deviation` - `min` - `percentile` - `percentile-rank` - `stats` - `sum` - `value count` The aggregation names can't start with `_mvt_`. The `_mvt_` prefix is reserved for internal aggregations. - **`buffer` (Optional, number)**: The size, in pixels, of a clipping buffer outside the tile. This allows renderers to avoid outline artifacts from geometries that extend past the extent of the tile. - **`exact_bounds` (Optional, boolean)**: If `false`, the meta layer's feature is the bounding box of the tile. If `true`, the meta layer's feature is a bounding box resulting from a `geo_bounds` aggregation. The aggregation runs on values that intersect the `//` tile with `wrap_longitude` set to `false`. The resulting bounding box may be larger than the vector tile. - **`extent` (Optional, number)**: The size, in pixels, of a side of the tile. Vector tiles are square with equal sides. @@ -2146,7 +2146,7 @@ client.asyncSearch.submit({ ... }) #### Request (object) [_request_async_search.submit] - **`index` (Optional, string \| string[])**: A list of index names to search; use `_all` or empty string to perform the operation on all indices -- **`aggregations` (Optional, Record)** +- **`aggregations` (Optional, Record)** - **`collapse` (Optional, { field, inner_hits, max_concurrent_group_searches, collapse })** - **`explain` (Optional, boolean)**: If true, returns detailed information about score computation as part of a hit. - **`ext` (Optional, Record)**: Configuration of search extensions defined by Elasticsearch plugins. @@ -2511,7 +2511,8 @@ client.cat.mlDataFrameAnalytics({ ... }) #### Request (object) [_request_cat.ml_data_frame_analytics] - **`id` (Optional, string)**: The ID of the data frame analytics to fetch -- **`allow_no_match` (Optional, boolean)**: Whether to ignore if a wildcard expression matches no configs. 
(This includes `_all` string or when no configs have been specified) +- **`allow_no_match` (Optional, boolean)**: Whether to ignore if a wildcard expression matches no configs. +(This includes the `_all` string or when no configs have been specified.) - **`h` (Optional, Enum("assignment_explanation" \| "create_time" \| "description" \| "dest_index" \| "failure_reason" \| "id" \| "model_memory_limit" \| "node.address" \| "node.ephemeral_id" \| "node.id" \| "node.name" \| "progress" \| "source_index" \| "state" \| "type" \| "version") \| Enum("assignment_explanation" \| "create_time" \| "description" \| "dest_index" \| "failure_reason" \| "id" \| "model_memory_limit" \| "node.address" \| "node.ephemeral_id" \| "node.id" \| "node.name" \| "progress" \| "source_index" \| "state" \| "type" \| "version")[])**: List of column names to display. - **`s` (Optional, Enum("assignment_explanation" \| "create_time" \| "description" \| "dest_index" \| "failure_reason" \| "id" \| "model_memory_limit" \| "node.address" \| "node.ephemeral_id" \| "node.id" \| "node.name" \| "progress" \| "source_index" \| "state" \| "type" \| "version") \| Enum("assignment_explanation" \| "create_time" \| "description" \| "dest_index" \| "failure_reason" \| "id" \| "model_memory_limit" \| "node.address" \| "node.ephemeral_id" \| "node.id" \| "node.name" \| "progress" \| "source_index" \| "state" \| "type" \| "version")[])**: List of column names or column aliases used to sort the response. @@ -3762,7 +3763,7 @@ client.connector.delete({ connector_id }) #### Request (object) [_request_connector.delete] - **`connector_id` (string)**: The unique identifier of the connector to be deleted -- **`delete_sync_jobs` (Optional, boolean)**: A flag indicating if associated sync jobs should be also removed. Defaults to false. +- **`delete_sync_jobs` (Optional, boolean)**: A flag indicating if associated sync jobs should also be removed. 
- **`hard` (Optional, boolean)**: A flag indicating if the connector should be hard deleted. ## client.connector.get [_connector.get] @@ -3796,7 +3797,7 @@ client.connector.list({ ... }) ### Arguments [_arguments_connector.list] #### Request (object) [_request_connector.list] -- **`from` (Optional, number)**: Starting offset (default: 0) +- **`from` (Optional, number)**: Starting offset - **`size` (Optional, number)**: Specifies a max number of results to get - **`index_name` (Optional, string \| string[])**: A list of connector index names to fetch connector documents for - **`connector_name` (Optional, string \| string[])**: A list of connector names to fetch connector documents for @@ -3971,7 +3972,7 @@ client.connector.syncJobList({ ... }) ### Arguments [_arguments_connector.sync_job_list] #### Request (object) [_request_connector.sync_job_list] -- **`from` (Optional, number)**: Starting offset (default: 0) +- **`from` (Optional, number)**: Starting offset - **`size` (Optional, number)**: Specifies a max number of results to get - **`status` (Optional, Enum("canceling" \| "canceled" \| "completed" \| "error" \| "in_progress" \| "pending" \| "suspended"))**: A sync job status to fetch connector sync jobs for - **`connector_id` (Optional, string)**: A connector id to fetch connector sync jobs for @@ -4830,7 +4831,7 @@ client.fleet.search({ index }) #### Request (object) [_request_fleet.search] - **`index` (string \| string)**: A single target to search. If the target is an index alias, it must resolve to a single index. -- **`aggregations` (Optional, Record)** +- **`aggregations` (Optional, Record)** - **`collapse` (Optional, { field, inner_hits, max_concurrent_group_searches, collapse })** - **`explain` (Optional, boolean)**: If true, returns detailed information about score computation as part of a hit. - **`ext` (Optional, Record)**: Configuration of search extensions defined by Elasticsearch plugins. 
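The `aggregations` option documented above for `client.search`, `client.asyncSearch.submit`, and `client.fleet.search` accepts the same object shape. A minimal sketch of an aggregation-only request body follows; the index and field names are illustrative placeholders, not values from this change:

```ts
// Aggregation-only search request body, as accepted by client.search({ ... }).
// Index and field names below are illustrative placeholders.
const request = {
  index: 'logs-*', // wildcards are supported; omit (or use `_all`) to search everything
  size: 0,         // skip hits and return only aggregation results
  aggregations: {
    avg_latency: { avg: { field: 'latency_ms' } },     // single-value metric agg
    by_status: { terms: { field: 'status', size: 5 } } // multi-bucket terms agg
  }
}
```

With a real client instance this body would be passed as `client.search(request)`; the same shape works for `client.asyncSearch.submit`.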
@@ -6219,6 +6220,26 @@ client.indices.getMigrateReindexStatus({ index }) #### Request (object) [_request_indices.get_migrate_reindex_status] - **`index` (string \| string[])**: The index or data stream name. +## client.indices.getSample [_indices.get_sample] +Get a random sample of ingested data. + +[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-sample) + +```ts +client.indices.getSample() +``` + + +## client.indices.getSampleStats [_indices.get_sample_stats] +Get stats about a random sample of ingested data. + +[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-sample) + +```ts +client.indices.getSampleStats() +``` + + ## client.indices.getSettings [_indices.get_settings] Get index settings. Get setting information for one or more indices. @@ -7437,7 +7458,7 @@ such as `open,hidden`. - **`groups` (Optional, string \| string[])**: List of search groups to include in the search statistics. - **`include_segment_file_sizes` (Optional, boolean)**: If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). - **`include_unloaded_segments` (Optional, boolean)**: If true, the response includes information from segments that are not loaded into memory. -- **`level` (Optional, Enum("cluster" \| "indices" \| "shards"))**: Indicates whether statistics are aggregated at the cluster, index, or shard level. +- **`level` (Optional, Enum("cluster" \| "indices" \| "shards"))**: Indicates whether statistics are aggregated at the cluster, indices, or shards level. ## client.indices.updateAliases [_indices.update_aliases] Create or update an alias. @@ -10103,7 +10124,7 @@ client.ml.putDatafeed({ datafeed_id }) - **`datafeed_id` (string)**: A numerical character string that uniquely identifies the datafeed. 
This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -- **`aggregations` (Optional, Record)**: If set, the datafeed performs aggregation searches. +- **`aggregations` (Optional, Record)**: If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. - **`chunking_config` (Optional, { mode, time_span })**: Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. @@ -10617,7 +10638,7 @@ client.ml.updateDatafeed({ datafeed_id }) - **`datafeed_id` (string)**: A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -- **`aggregations` (Optional, Record)**: If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only +- **`aggregations` (Optional, Record)**: If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. - **`chunking_config` (Optional, { mode, time_span })**: Datafeeds might search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of @@ -10940,7 +10961,7 @@ client.nodes.stats({ ... }) - **`fields` (Optional, string \| string[])**: List or wildcard expressions of fields to include in the statistics. - **`groups` (Optional, boolean)**: List of search groups to include in the search statistics. 
- **`include_segment_file_sizes` (Optional, boolean)**: If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). -- **`level` (Optional, Enum("node" \| "indices" \| "shards"))**: Indicates whether statistics are aggregated at the cluster, index, or shard level. +- **`level` (Optional, Enum("node" \| "indices" \| "shards"))**: Indicates whether statistics are aggregated at the node, indices, or shards level. - **`timeout` (Optional, string \| -1 \| 0)**: Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. - **`types` (Optional, string[])**: A list of document types for the indexing index metric. - **`include_unloaded_segments` (Optional, boolean)**: If `true`, the response includes information from segments that are not loaded into memory. @@ -10963,13 +10984,6 @@ A list of the following options: `_all`, `rest_actions`. - **`timeout` (Optional, string \| -1 \| 0)**: Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -## client.project.tags [_project.tags] -Return tags defined for the project -```ts -client.project.tags() -``` - - ## client.queryRules.deleteRule [_query_rules.delete_rule] Delete a query rule. Delete a query rule within a query ruleset. @@ -11292,7 +11306,7 @@ This parameter has the following rules: * Multiple non-rollup indices may be specified. * Only one rollup index may be specified. If more than one are supplied, an exception occurs. * Wildcard expressions (`*`) may be used. If they match more than one rollup index, an exception occurs. However, you can use an expression to match multiple non-rollup indices or data streams. -- **`aggregations` (Optional, Record)**: Specifies aggregations. +- **`aggregations` (Optional, Record)**: Specifies aggregations. 
- **`query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type })**: Specifies a DSL query that is subject to some limitations. - **`size` (Optional, number)**: Must be zero if set, as rollups work on pre-aggregated data. - **`rest_total_hits_as_int` (Optional, boolean)**: Indicates whether hits.total should be rendered as an integer or an object in the rest search response @@ -11923,6 +11937,9 @@ By default, API keys never expire. - **`metadata` (Optional, Record)**: Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with `_` are reserved for system usage. +- **`certificate_identity` (Optional, string)**: The certificate identity to associate with this API key. +This field is used to restrict the API key to connections authenticated by a specific TLS certificate. +The value should match the certificate's distinguished name (DN) pattern. ## client.security.createServiceToken [_security.create_service_token] Create a service account token. @@ -13292,6 +13309,12 @@ By default, API keys never expire. This property can be omitted to leave the val It supports nested data structure. Within the metadata object, keys beginning with `_` are reserved for system usage. 
When specified, this information fully replaces metadata previously associated with the API key. +- **`certificate_identity` (Optional, string)**: The certificate identity to associate with this API key. +This field is used to restrict the API key to connections authenticated by a specific TLS certificate. +The value should match the certificate's distinguished name (DN) pattern. +When specified, this fully replaces any previously assigned certificate identity. +To clear an existing certificate identity, explicitly set this field to `null`. +When omitted, the existing certificate identity remains unchanged. ## client.security.updateSettings [_security.update_settings] Update security index settings. @@ -15140,7 +15163,7 @@ indexing. The minimum value is 1s and the maximum is 1h. These objects define the group by fields and the aggregation to reduce the data. - **`source` (Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings })**: The source of the data for the transform. -- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended })**: Defines optional transform settings. +- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, use_point_in_time, unattended })**: Defines optional transform settings. - **`sync` (Optional, { time })**: Defines the properties transforms require to run continuously. - **`retention_policy` (Optional, { time })**: Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. @@ -15196,7 +15219,7 @@ The minimum value is `1s` and the maximum is `1h`. and the aggregation to reduce the data. - **`retention_policy` (Optional, { time })**: Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. 
-- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended })**: Defines optional transform settings. +- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, use_point_in_time, unattended })**: Defines optional transform settings. - **`sync` (Optional, { time })**: Defines the properties transforms require to run continuously. - **`defer_validation` (Optional, boolean)**: When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source @@ -15367,7 +15390,7 @@ the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. - **`_meta` (Optional, Record)**: Defines optional transform metadata. - **`source` (Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings })**: The source of the data for the transform. -- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended })**: Defines optional transform settings. +- **`settings` (Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, use_point_in_time, unattended })**: Defines optional transform settings. - **`sync` (Optional, { time })**: Defines the properties transforms require to run continuously. - **`retention_policy` (Optional, { time } \| null)**: Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. 
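The `certificate_identity` field added to the create and update API key bodies above can be sketched as follows; the distinguished name is an illustrative placeholder, not a value from this change:

```ts
// Request bodies for security.createApiKey / security.updateApiKey using
// the new certificate_identity field. The DN below is a placeholder.
const createBody = {
  name: 'mtls-restricted-key',
  // Restrict the key to connections authenticated by a TLS certificate
  // whose distinguished name matches this pattern.
  certificate_identity: 'CN=client.example.com,OU=ingest'
}

// On update, setting the field to null clears an existing identity,
// while omitting it leaves the identity unchanged.
const updateBody = { certificate_identity: null }
```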
diff --git a/src/api/api/indices.ts b/src/api/api/indices.ts index 849d9b4a1..14b50054e 100644 --- a/src/api/api/indices.ts +++ b/src/api/api/indices.ts @@ -493,6 +493,20 @@ export default class Indices { body: [], query: [] }, + 'indices.get_sample': { + path: [ + 'index' + ], + body: [], + query: [] + }, + 'indices.get_sample_stats': { + path: [ + 'index' + ], + body: [], + query: [] + }, 'indices.get_settings': { path: [ 'index', @@ -3124,6 +3138,102 @@ export default class Indices { return await this.transport.request({ path, method, querystring, body, meta }, options) } + /** + * Get random sample of ingested data + * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-sample | Elasticsearch API documentation} + */ + async getSample (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise + async getSample (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise> + async getSample (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise + async getSample (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise { + const { + path: acceptedPath + } = this[kAcceptedParams]['indices.get_sample'] + + const userQuery = params?.querystring + const querystring: Record = userQuery != null ? { ...userQuery } : {} + + let body: Record | string | undefined + const userBody = params?.body + if (userBody != null) { + if (typeof userBody === 'string') { + body = userBody + } else { + body = { ...userBody } + } + } + + params = params ?? 
{} + for (const key in params) { + if (acceptedPath.includes(key)) { + continue + } else if (key !== 'body' && key !== 'querystring') { + querystring[key] = params[key] + } + } + + const method = 'GET' + const path = `/${encodeURIComponent(params.index.toString())}/_sample` + const meta: TransportRequestMetadata = { + name: 'indices.get_sample', + pathParts: { + index: params.index + }, + acceptedParams: [ + 'index' + ] + } + return await this.transport.request({ path, method, querystring, body, meta }, options) + } + + /** + * Get stats about a random sample of ingested data + * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-sample | Elasticsearch API documentation} + */ + async getSampleStats (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise + async getSampleStats (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise> + async getSampleStats (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise + async getSampleStats (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise { + const { + path: acceptedPath + } = this[kAcceptedParams]['indices.get_sample_stats'] + + const userQuery = params?.querystring + const querystring: Record = userQuery != null ? { ...userQuery } : {} + + let body: Record | string | undefined + const userBody = params?.body + if (userBody != null) { + if (typeof userBody === 'string') { + body = userBody + } else { + body = { ...userBody } + } + } + + params = params ?? 
{} + for (const key in params) { + if (acceptedPath.includes(key)) { + continue + } else if (key !== 'body' && key !== 'querystring') { + querystring[key] = params[key] + } + } + + const method = 'GET' + const path = `/${encodeURIComponent(params.index.toString())}/_sample/stats` + const meta: TransportRequestMetadata = { + name: 'indices.get_sample_stats', + pathParts: { + index: params.index + }, + acceptedParams: [ + 'index' + ] + } + return await this.transport.request({ path, method, querystring, body, meta }, options) + } + /** * Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices. * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings | Elasticsearch API documentation} diff --git a/src/api/api/project.ts b/src/api/api/project.ts index e8717a7a4..7818be504 100644 --- a/src/api/api/project.ts +++ b/src/api/api/project.ts @@ -43,7 +43,8 @@ export default class Project { } /** - * Return tags defined for the project + * Get tags. Get the tags that are defined for the project. 
+ * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch-serverless/operation/operation-project-tags | Elasticsearch API documentation} */ async tags (this: That, params?: T.ProjectTagsRequest, options?: TransportRequestOptionsWithOutMeta): Promise async tags (this: That, params?: T.ProjectTagsRequest, options?: TransportRequestOptionsWithMeta): Promise> diff --git a/src/api/api/security.ts b/src/api/api/security.ts index ba10fe760..bd7ee00fa 100644 --- a/src/api/api/security.ts +++ b/src/api/api/security.ts @@ -148,7 +148,8 @@ export default class Security { 'access', 'expiration', 'metadata', - 'name' + 'name', + 'certificate_identity' ], query: [] }, @@ -636,7 +637,8 @@ export default class Security { body: [ 'access', 'expiration', - 'metadata' + 'metadata', + 'certificate_identity' ], query: [] }, @@ -1382,7 +1384,8 @@ export default class Security { 'access', 'expiration', 'metadata', - 'name' + 'name', + 'certificate_identity' ] } return await this.transport.request({ path, method, querystring, body, meta }, options) @@ -4198,7 +4201,8 @@ export default class Security { 'id', 'access', 'expiration', - 'metadata' + 'metadata', + 'certificate_identity' ] } return await this.transport.request({ path, method, querystring, body, meta }, options) diff --git a/src/api/types.ts b/src/api/types.ts index f0b85f962..53eae0a0e 100644 --- a/src/api/types.ts +++ b/src/api/types.ts @@ -1752,10 +1752,13 @@ export interface ReindexRemoteSource { /** The URL for the remote instance of Elasticsearch that you want to index from. * This information is required when you're indexing from remote. */ host: Host - /** The username to use for authentication with the remote host. */ + /** The username to use for authentication with the remote host (required when using basic auth). */ username?: Username - /** The password to use for authentication with the remote host. */ + /** The password to use for authentication with the remote host (required when using basic auth). 
*/ password?: Password + /** The API key to use for authentication with the remote host (as an alternative to basic auth when the remote cluster is in Elastic Cloud). + * (It is not permitted to set this and also to set an `Authorization` header via `headers`.) */ + api_key?: string /** The remote socket read timeout. */ socket_timeout?: Duration } @@ -1873,7 +1876,7 @@ export interface ReindexSource { sort?: Sort /** If `true`, reindex all source fields. * Set it to a list to reindex select fields. */ - _source?: Fields + _source?: SearchSourceConfig runtime_mappings?: MappingRuntimeFields } @@ -3823,6 +3826,11 @@ export type ByteSize = long | string export type Bytes = 'b' | 'kb' | 'mb' | 'gb' | 'tb' | 'pb' +export interface CartesianPoint { + x: double + y: double +} + export type CategoryId = string export interface ChunkRescorer { @@ -4920,6 +4928,11 @@ export type uint = number export type ulong = number +export interface AggregationsAbstractChangePoint { + p_value: double + change_point: integer +} + export interface AggregationsAdjacencyMatrixAggregate extends AggregationsMultiBucketAggregateBase { } @@ -4937,7 +4950,7 @@ export interface AggregationsAdjacencyMatrixBucketKeys extends AggregationsMulti export type AggregationsAdjacencyMatrixBucket = AggregationsAdjacencyMatrixBucketKeys & { [property: string]: AggregationsAggregate | string | long } -export type AggregationsAggregate = AggregationsCardinalityAggregate | AggregationsHdrPercentilesAggregate | AggregationsHdrPercentileRanksAggregate | AggregationsTDigestPercentilesAggregate | AggregationsTDigestPercentileRanksAggregate | AggregationsPercentilesBucketAggregate | AggregationsMedianAbsoluteDeviationAggregate | AggregationsMinAggregate | AggregationsMaxAggregate | AggregationsSumAggregate | AggregationsAvgAggregate | AggregationsWeightedAvgAggregate | AggregationsValueCountAggregate | AggregationsSimpleValueAggregate | AggregationsDerivativeAggregate | AggregationsBucketMetricValueAggregate | 
AggregationsStatsAggregate | AggregationsStatsBucketAggregate | AggregationsExtendedStatsAggregate | AggregationsExtendedStatsBucketAggregate | AggregationsGeoBoundsAggregate | AggregationsGeoCentroidAggregate | AggregationsHistogramAggregate | AggregationsDateHistogramAggregate | AggregationsAutoDateHistogramAggregate | AggregationsVariableWidthHistogramAggregate | AggregationsStringTermsAggregate | AggregationsLongTermsAggregate | AggregationsDoubleTermsAggregate | AggregationsUnmappedTermsAggregate | AggregationsLongRareTermsAggregate | AggregationsStringRareTermsAggregate | AggregationsUnmappedRareTermsAggregate | AggregationsMultiTermsAggregate | AggregationsMissingAggregate | AggregationsNestedAggregate | AggregationsReverseNestedAggregate | AggregationsGlobalAggregate | AggregationsFilterAggregate | AggregationsChildrenAggregate | AggregationsParentAggregate | AggregationsSamplerAggregate | AggregationsUnmappedSamplerAggregate | AggregationsGeoHashGridAggregate | AggregationsGeoTileGridAggregate | AggregationsGeoHexGridAggregate | AggregationsRangeAggregate | AggregationsDateRangeAggregate | AggregationsGeoDistanceAggregate | AggregationsIpRangeAggregate | AggregationsIpPrefixAggregate | AggregationsFiltersAggregate | AggregationsAdjacencyMatrixAggregate | AggregationsSignificantLongTermsAggregate | AggregationsSignificantStringTermsAggregate | AggregationsUnmappedSignificantTermsAggregate | AggregationsCompositeAggregate | AggregationsFrequentItemSetsAggregate | AggregationsTimeSeriesAggregate | AggregationsScriptedMetricAggregate | AggregationsTopHitsAggregate | AggregationsInferenceAggregate | AggregationsStringStatsAggregate | AggregationsBoxPlotAggregate | AggregationsTopMetricsAggregate | AggregationsTTestAggregate | AggregationsRateAggregate | AggregationsCumulativeCardinalityAggregate | AggregationsMatrixStatsAggregate | AggregationsGeoLineAggregate +export type AggregationsAggregate = AggregationsCardinalityAggregate | 
AggregationsHdrPercentilesAggregate | AggregationsHdrPercentileRanksAggregate | AggregationsTDigestPercentilesAggregate | AggregationsTDigestPercentileRanksAggregate | AggregationsPercentilesBucketAggregate | AggregationsMedianAbsoluteDeviationAggregate | AggregationsMinAggregate | AggregationsMaxAggregate | AggregationsSumAggregate | AggregationsAvgAggregate | AggregationsWeightedAvgAggregate | AggregationsValueCountAggregate | AggregationsSimpleValueAggregate | AggregationsDerivativeAggregate | AggregationsBucketMetricValueAggregate | AggregationsChangePointAggregate | AggregationsStatsAggregate | AggregationsStatsBucketAggregate | AggregationsExtendedStatsAggregate | AggregationsExtendedStatsBucketAggregate | AggregationsCartesianBoundsAggregate | AggregationsCartesianCentroidAggregate | AggregationsGeoBoundsAggregate | AggregationsGeoCentroidAggregate | AggregationsHistogramAggregate | AggregationsDateHistogramAggregate | AggregationsAutoDateHistogramAggregate | AggregationsVariableWidthHistogramAggregate | AggregationsStringTermsAggregate | AggregationsLongTermsAggregate | AggregationsDoubleTermsAggregate | AggregationsUnmappedTermsAggregate | AggregationsLongRareTermsAggregate | AggregationsStringRareTermsAggregate | AggregationsUnmappedRareTermsAggregate | AggregationsMultiTermsAggregate | AggregationsMissingAggregate | AggregationsNestedAggregate | AggregationsReverseNestedAggregate | AggregationsGlobalAggregate | AggregationsFilterAggregate | AggregationsChildrenAggregate | AggregationsParentAggregate | AggregationsSamplerAggregate | AggregationsUnmappedSamplerAggregate | AggregationsGeoHashGridAggregate | AggregationsGeoTileGridAggregate | AggregationsGeoHexGridAggregate | AggregationsRangeAggregate | AggregationsDateRangeAggregate | AggregationsGeoDistanceAggregate | AggregationsIpRangeAggregate | AggregationsIpPrefixAggregate | AggregationsFiltersAggregate | AggregationsAdjacencyMatrixAggregate | AggregationsSignificantLongTermsAggregate | AggregationsSignificantStringTermsAggregate | AggregationsUnmappedSignificantTermsAggregate | AggregationsCompositeAggregate | AggregationsFrequentItemSetsAggregate | AggregationsTimeSeriesAggregate | AggregationsScriptedMetricAggregate | AggregationsTopHitsAggregate | AggregationsInferenceAggregate | AggregationsStringStatsAggregate | AggregationsBoxPlotAggregate | AggregationsTopMetricsAggregate | AggregationsTTestAggregate | AggregationsRateAggregate | AggregationsCumulativeCardinalityAggregate | AggregationsMatrixStatsAggregate | AggregationsGeoLineAggregate
 
 export interface AggregationsAggregateBase {
   meta?: Metadata
@@ -4984,9 +4997,19 @@ export interface AggregationsAggregationContainer {
   bucket_correlation?: AggregationsBucketCorrelationAggregation
   /** A single-value metrics aggregation that calculates an approximate count of distinct values. */
   cardinality?: AggregationsCardinalityAggregation
+  /** A metric aggregation that computes the spatial bounding box containing all values for a Point or Shape field. */
+  cartesian_bounds?: AggregationsCartesianBoundsAggregation
+  /** A metric aggregation that computes the weighted centroid from all coordinate values for point and shape fields. */
+  cartesian_centroid?: AggregationsCartesianCentroidAggregation
   /** A multi-bucket aggregation that groups semi-structured text into buckets.
    * @experimental */
   categorize_text?: AggregationsCategorizeTextAggregation
+  /** A sibling pipeline that detects spikes, dips, and change points in a metric.
+   * Given a distribution of values provided by the sibling multi-bucket aggregation,
+   * this aggregation indicates the bucket of any spike or dip and/or the bucket at which
+   * the largest change in the distribution of values occurred, if it is statistically significant.
+   * There must be at least 22 bucketed values. Fewer than 1,000 is preferred. */
+  change_point?: AggregationsChangePointAggregation
   /** A single bucket aggregation that selects child documents that have the specified type, as defined in a `join` field. */
   children?: AggregationsChildrenAggregation
   /** A multi-bucket aggregation that creates composite buckets from different sources.
@@ -5012,6 +5035,9 @@ export interface AggregationsAggregationContainer {
   extended_stats_bucket?: AggregationsExtendedStatsBucketAggregation
   /** A bucket aggregation which finds frequent item sets, a form of association rules mining that identifies items that often occur together. */
   frequent_item_sets?: AggregationsFrequentItemSetsAggregation
+  /** A bucket aggregation which finds frequent item sets, a form of association rules mining that identifies items that often occur together.
+   * @alias frequent_item_sets */
+  frequent_items?: AggregationsFrequentItemSetsAggregation
   /** A single bucket aggregation that narrows the set of documents to those that match a query. */
   filter?: QueryDslQueryContainer
   /** A multi-bucket aggregation where each bucket contains the documents that match a query. */
@@ -5313,6 +5339,21 @@ export interface AggregationsCardinalityAggregation extends AggregationsMetricAg
 
 export type AggregationsCardinalityExecutionMode = 'global_ordinals' | 'segment_ordinals' | 'direct' | 'save_memory_heuristic' | 'save_time_heuristic'
 
+export interface AggregationsCartesianBoundsAggregate extends AggregationsAggregateBase {
+  bounds?: TopLeftBottomRightGeoBounds
+}
+
+export interface AggregationsCartesianBoundsAggregation extends AggregationsMetricAggregationBase {
+}
+
+export interface AggregationsCartesianCentroidAggregate extends AggregationsAggregateBase {
+  count: long
+  location?: CartesianPoint
+}
+
+export interface AggregationsCartesianCentroidAggregation extends AggregationsMetricAggregationBase {
+}
+
 export interface AggregationsCategorizeTextAggregation {
   /** The semi-structured text field to categorize. */
   field: Field
@@ -5351,6 +5392,31 @@ export interface AggregationsCategorizeTextAggregation {
 
 export type AggregationsCategorizeTextAnalyzer = string | AggregationsCustomCategorizeTextAnalyzer
 
+export interface AggregationsChangePointAggregate extends AggregationsAggregateBase {
+  type: AggregationsChangeType
+  bucket?: AggregationsChangePointBucket
+}
+
+export interface AggregationsChangePointAggregation extends AggregationsPipelineAggregationBase {
+}
+
+export interface AggregationsChangePointBucketKeys extends AggregationsMultiBucketBase {
+  key: FieldValue
+}
+export type AggregationsChangePointBucket = AggregationsChangePointBucketKeys
+& { [property: string]: AggregationsAggregate | FieldValue | long }
+
+export interface AggregationsChangeType {
+  dip?: AggregationsDip
+  distribution_change?: AggregationsDistributionChange
+  indeterminable?: AggregationsIndeterminable
+  non_stationary?: AggregationsNonStationary
+  spike?: AggregationsSpike
+  stationary?: AggregationsStationary
+  step_change?: AggregationsStepChange
+  trend_change?: AggregationsTrendChange
+}
+
 export interface AggregationsChiSquareHeuristic {
   /** Set to `false` if you defined a custom background filter that represents a different set of documents that you want to compare to. */
   background_is_superset: boolean
@@ -5532,6 +5598,12 @@ export interface AggregationsDerivativeAggregate extends AggregationsSingleMetri
 export interface AggregationsDerivativeAggregation extends AggregationsPipelineAggregationBase {
 }
 
+export interface AggregationsDip extends AggregationsAbstractChangePoint {
+}
+
+export interface AggregationsDistributionChange extends AggregationsAbstractChangePoint {
+}
+
 export interface AggregationsDiversifiedSamplerAggregation extends AggregationsBucketAggregationBase {
   /** The type of value used for de-duplication. */
   execution_hint?: AggregationsSamplerAggregationExecutionHint
@@ -5899,6 +5971,10 @@ export interface AggregationsHoltWintersMovingAverageAggregation extends Aggrega
 
 export type AggregationsHoltWintersType = 'add' | 'mult'
 
+export interface AggregationsIndeterminable {
+  reason: string
+}
+
 export interface AggregationsInferenceAggregateKeys extends AggregationsAggregateBase {
   value?: FieldValue
   feature_importance?: AggregationsInferenceFeatureImportance[]
@@ -6197,6 +6273,12 @@ export interface AggregationsNestedAggregation extends AggregationsBucketAggrega
   path?: Field
 }
 
+export interface AggregationsNonStationary {
+  p_value: double
+  r_value: double
+  trend: string
+}
+
 export interface AggregationsNormalizeAggregation extends AggregationsPipelineAggregationBase {
   /** The specific method to apply. */
   method?: AggregationsNormalizeMethod
@@ -6530,6 +6612,9 @@ export interface AggregationsSingleMetricAggregateBase extends AggregationsAggre
   value_as_string?: string
 }
 
+export interface AggregationsSpike extends AggregationsAbstractChangePoint {
+}
+
 export interface AggregationsStandardDeviationBounds {
   upper: double | null
   lower: double | null
@@ -6548,6 +6633,9 @@ export interface AggregationsStandardDeviationBoundsAsString {
   lower_sampling: string
 }
 
+export interface AggregationsStationary {
+}
+
 export interface AggregationsStatsAggregate extends AggregationsAggregateBase {
   count: long
   min: double | null
@@ -6569,6 +6657,9 @@ export interface AggregationsStatsBucketAggregate extends AggregationsStatsAggre
 
 export interface AggregationsStatsBucketAggregation extends AggregationsPipelineAggregationBase {
 }
 
+export interface AggregationsStepChange extends AggregationsAbstractChangePoint {
+}
+
 export interface AggregationsStringRareTermsAggregate extends AggregationsMultiBucketAggregateBase {
 }
 
@@ -6790,6 +6881,12 @@ export interface AggregationsTopMetricsValue {
   field: Field
 }
 
+export interface AggregationsTrendChange {
+  p_value: double
+  r_value: double
+  change_point: integer
+}
+
 export interface AggregationsUnmappedRareTermsAggregate extends AggregationsMultiBucketAggregateBase {
 }
 
@@ -8227,8 +8324,12 @@ export interface MappingDenseVectorIndexOptions {
   m?: integer
   /** The type of kNN algorithm to use. */
   type: MappingDenseVectorIndexOptionsType
-  /** The rescore vector options. This is only applicable to `bbq_hnsw`, `int4_hnsw`, `int8_hnsw`, `bbq_flat`, `int4_flat`, and `int8_flat` index types. */
+  /** The rescore vector options. This is only applicable to `bbq_disk`, `bbq_hnsw`, `int4_hnsw`, `int8_hnsw`, `bbq_flat`, `int4_flat`, and `int8_flat` index types. */
   rescore_vector?: MappingDenseVectorIndexOptionsRescoreVector
+  /** `true` if vector rescoring should be done on-disk.
+   *
+   * Only applicable to `bbq_hnsw`. */
+  on_disk_rescore?: boolean
 }
 
 export interface MappingDenseVectorIndexOptionsRescoreVector {
@@ -8239,7 +8340,7 @@ export interface MappingDenseVectorIndexOptionsRescoreVector {
   oversample: float
 }
 
-export type MappingDenseVectorIndexOptionsType = 'bbq_flat' | 'bbq_hnsw' | 'flat' | 'hnsw' | 'int4_flat' | 'int4_hnsw' | 'int8_flat' | 'int8_hnsw'
+export type MappingDenseVectorIndexOptionsType = 'bbq_flat' | 'bbq_hnsw' | 'bbq_disk' | 'flat' | 'hnsw' | 'int4_flat' | 'int4_hnsw' | 'int8_flat' | 'int8_hnsw'
 
 export interface MappingDenseVectorProperty extends MappingPropertyBase {
   type: 'dense_vector'
@@ -8663,7 +8764,7 @@ export interface MappingSemanticTextProperty {
   /** Settings for chunking text into smaller passages. If specified, these will override the
    * chunking settings sent in the inference endpoint associated with inference_id. If chunking settings are updated,
    * they will not be applied to existing documents until they are reindexed. */
-  chunking_settings?: MappingChunkingSettings
+  chunking_settings?: MappingChunkingSettings | null
   /** Multi-fields allow the same string value to be indexed in multiple ways for different purposes, such as one
    * field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers. */
   fields?: Record
@@ -11995,7 +12096,8 @@ export interface CatMlDataFrameAnalyticsDataFrameAnalyticsRecord {
 export interface CatMlDataFrameAnalyticsRequest extends CatCatRequestBase {
   /** The ID of the data frame analytics to fetch */
   id?: Id
-  /** Whether to ignore if a wildcard expression matches no configs. (This includes `_all` string or when no configs have been specified) */
+  /** Whether to ignore if a wildcard expression matches no configs.
+   * (This includes the `_all` string or when no configs have been specified.) */
   allow_no_match?: boolean
   /** Comma-separated list of column names to display. */
   h?: CatCatDfaColumns
@@ -16866,6 +16968,18 @@ export interface ClusterStatsDenseVectorStats {
   off_heap?: ClusterStatsDenseVectorOffHeapStats
 }
 
+export interface ClusterStatsExtendedRetrieversSearchUsage {
+  text_similarity_reranker?: ClusterStatsExtendedTextSimilarityRetrieverUsage
+}
+
+export interface ClusterStatsExtendedSearchUsage {
+  retrievers?: ClusterStatsExtendedRetrieversSearchUsage
+}
+
+export interface ClusterStatsExtendedTextSimilarityRetrieverUsage {
+  chunk_rescorer?: long
+}
+
 export interface ClusterStatsFieldTypes {
   /** The name for the field type in selected nodes. */
   name: Name
@@ -17061,6 +17175,7 @@ export interface ClusterStatsSearchUsageStats {
   rescorers: Record
   sections: Record
   retrievers: Record
+  extended: ClusterStatsExtendedSearchUsage
 }
 
 export type ClusterStatsShardState = 'INIT' | 'SUCCESS' | 'FAILED' | 'ABORTED' | 'MISSING' | 'WAITING' | 'QUEUED' | 'PAUSED_FOR_NODE_REMOVAL'
@@ -17362,7 +17477,7 @@ export interface ConnectorCheckInResponse {
 export interface ConnectorDeleteRequest extends RequestBase {
   /** The unique identifier of the connector to be deleted */
   connector_id: Id
-  /** A flag indicating if associated sync jobs should be also removed. Defaults to false. */
+  /** A flag indicating if associated sync jobs should also be removed. */
   delete_sync_jobs?: boolean
   /** A flag indicating if the connector should be hard deleted. */
   hard?: boolean
@@ -17413,7 +17528,7 @@ export interface ConnectorLastSyncResponse {
 }
 
 export interface ConnectorListRequest extends RequestBase {
-  /** Starting offset (default: 0) */
+  /** Starting offset */
  from?: integer
   /** Specifies a max number of results to get */
   size?: integer
@@ -17555,7 +17670,7 @@ export interface ConnectorSyncJobGetRequest extends RequestBase {
 
 export type ConnectorSyncJobGetResponse = ConnectorConnectorSyncJob
 
 export interface ConnectorSyncJobListRequest extends RequestBase {
-  /** Starting offset (default: 0) */
+  /** Starting offset */
   from?: integer
   /** Specifies a max number of results to get */
   size?: integer
@@ -22359,7 +22474,7 @@ export interface IndicesStatsRequest extends RequestBase {
   include_segment_file_sizes?: boolean
   /** If true, the response includes information from segments that are not loaded into memory. */
   include_unloaded_segments?: boolean
-  /** Indicates whether statistics are aggregated at the cluster, index, or shard level. */
+  /** Indicates whether statistics are aggregated at the cluster, indices, or shards level. */
   level?: Level
   /** All values in `body` will be added to the request body. */
   body?: string | { [key: string]: any } & { metric?: never, index?: never, completion_fields?: never, expand_wildcards?: never, fielddata_fields?: never, fields?: never, forbid_closed_indices?: never, groups?: never, include_segment_file_sizes?: never, include_unloaded_segments?: never, level?: never }
@@ -32283,7 +32398,7 @@ export interface NodesStatsRequest extends RequestBase {
   groups?: boolean
   /** If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). */
   include_segment_file_sizes?: boolean
-  /** Indicates whether statistics are aggregated at the cluster, index, or shard level. */
+  /** Indicates whether statistics are aggregated at the node, indices, or shards level. */
   level?: NodeStatsLevel
   /** Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. */
   timeout?: Duration
@@ -33185,6 +33300,11 @@ export interface SecurityApiKey {
   * At least one of them must be specified.
   * When specified, the new access assignment fully replaces the previously assigned access. */
   access?: SecurityAccess
+  /** The certificate identity associated with a cross-cluster API key.
+   * Restricts the API key to connections authenticated by a specific TLS certificate.
+   * Only applicable to cross-cluster API keys.
+   * @remarks This property is not supported on Elastic Cloud Serverless. */
+  certificate_identity?: string
   /** The profile uid for the API key owner principal, if requested and if it exists */
   profile_uid?: string
   /** Sorting values when using the `sort` parameter with the `security.query_api_keys` API. */
@@ -33805,10 +33925,14 @@ export interface SecurityCreateCrossClusterApiKeyRequest extends RequestBase {
   metadata?: Metadata
   /** Specifies the name for this API key. */
   name: Name
+  /** The certificate identity to associate with this API key.
+   * This field is used to restrict the API key to connections authenticated by a specific TLS certificate.
+   * The value should match the certificate's distinguished name (DN) pattern. */
+  certificate_identity?: string
   /** All values in `body` will be added to the request body. */
-  body?: string | { [key: string]: any } & { access?: never, expiration?: never, metadata?: never, name?: never }
+  body?: string | { [key: string]: any } & { access?: never, expiration?: never, metadata?: never, name?: never, certificate_identity?: never }
   /** All values in `querystring` will be added to the request querystring. */
-  querystring?: { [key: string]: any } & { access?: never, expiration?: never, metadata?: never, name?: never }
+  querystring?: { [key: string]: any } & { access?: never, expiration?: never, metadata?: never, name?: never, certificate_identity?: never }
 }
 
 export interface SecurityCreateCrossClusterApiKeyResponse {
@@ -35334,10 +35458,17 @@ export interface SecurityUpdateCrossClusterApiKeyRequest extends RequestBase {
   * Within the metadata object, keys beginning with `_` are reserved for system usage.
   * When specified, this information fully replaces metadata previously associated with the API key. */
   metadata?: Metadata
+  /** The certificate identity to associate with this API key.
+   * This field is used to restrict the API key to connections authenticated by a specific TLS certificate.
+   * The value should match the certificate's distinguished name (DN) pattern.
+   * When specified, this fully replaces any previously assigned certificate identity.
+   * To clear an existing certificate identity, explicitly set this field to `null`.
+   * When omitted, the existing certificate identity remains unchanged. */
+  certificate_identity?: string
  /** All values in `body` will be added to the request body. */
-  body?: string | { [key: string]: any } & { id?: never, access?: never, expiration?: never, metadata?: never }
+  body?: string | { [key: string]: any } & { id?: never, access?: never, expiration?: never, metadata?: never, certificate_identity?: never }
   /** All values in `querystring` will be added to the request querystring. */
-  querystring?: { [key: string]: any } & { id?: never, access?: never, expiration?: never, metadata?: never }
+  querystring?: { [key: string]: any } & { id?: never, access?: never, expiration?: never, metadata?: never, certificate_identity?: never }
 }
 
 export interface SecurityUpdateCrossClusterApiKeyResponse {
@@ -38000,6 +38131,12 @@ export interface TransformSettings {
   * exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is `10` and the
   * maximum is `65,536`. */
   max_page_search_size?: integer
+  /** Specifies whether the transform checkpoint will use the Point in Time API while searching over the source index.
+   * In general, Point in Time is an optimization that reduces pressure on the source index by reducing the number
+   * of refreshes and merges, but it can be expensive if a large number of Point in Times are opened and closed for a
+   * given index. The benefits and impact depend on the data being searched, the ingest rate into the source index, and
+   * the number of other consumers searching the same source index. */
+  use_point_in_time?: boolean
  /** If `true`, the transform runs in unattended mode. In unattended mode, the transform retries indefinitely in case
   * of an error which means the transform never fails. Setting the number of retries other than infinite fails in
   * validation. */
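The `change_point` types in the diff describe a sibling pipeline aggregation: it sits next to a multi-bucket aggregation and addresses one of its metrics through `buckets_path`. A minimal request-body sketch (the index layout, field names, and aggregation names here are hypothetical, not from the diff):

```typescript
// Sketch only: a change_point sibling pipeline pointing at a metric produced
// inside a date_histogram. Field and aggregation names are hypothetical.
// Per the doc comment above, at least 22 buckets are required.
const body = {
  size: 0,
  aggregations: {
    over_time: {
      date_histogram: { field: '@timestamp', fixed_interval: '1h' },
      aggregations: {
        avg_cpu: { avg: { field: 'system.cpu.pct' } }
      }
    },
    cpu_change: {
      // buckets_path (from AggregationsPipelineAggregationBase) addresses
      // the sibling multi-bucket aggregation's metric.
      change_point: { buckets_path: 'over_time>avg_cpu' }
    }
  }
}
```

In the response, `cpu_change` deserializes to `AggregationsChangePointAggregate`: its `type` carries one of the keys of `AggregationsChangeType` (`spike`, `dip`, `step_change`, `trend_change`, `distribution_change`, `non_stationary`, `stationary`, or `indeterminable`), and `bucket`, when present, identifies where the change occurred.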
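The new `cartesian_bounds` and `cartesian_centroid` entries are metric aggregations, so like other metrics they take a `field`. A hedged sketch (the `location` field name is an assumption for illustration):

```typescript
// Sketch only: cartesian bounds and centroid over a hypothetical point field.
const body = {
  size: 0,
  aggregations: {
    viewport: { cartesian_bounds: { field: 'location' } },
    centroid: { cartesian_centroid: { field: 'location' } }
  }
}
```

Per the `Aggregate` types above, `viewport` would come back as `AggregationsCartesianBoundsAggregate` (a `bounds` box) and `centroid` as `AggregationsCartesianCentroidAggregate` (`count` plus an optional `location`).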
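The `MappingDenseVectorIndexOptions` changes add a `bbq_disk` type value and an `on_disk_rescore` flag that, per its doc comment, applies only to `bbq_hnsw`. A mapping sketch (the field name, `dims`, and `oversample` values are arbitrary examples, not defaults):

```typescript
// Sketch only: dense_vector mapping using the new index options.
const mappings = {
  properties: {
    embedding: {
      type: 'dense_vector',
      dims: 384,
      index_options: {
        type: 'bbq_hnsw',
        // rescore_vector is now also documented for bbq_disk
        rescore_vector: { oversample: 3.0 },
        // on_disk_rescore is only applicable to bbq_hnsw
        on_disk_rescore: true
      }
    }
  }
}
```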
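Widening `chunking_settings` to `MappingChunkingSettings | null` lets a mapping update send an explicit `null`; presumably this clears a field-level chunking override so the settings of the associated inference endpoint apply again (the diff itself does not state the server-side effect). A sketch, where `my-endpoint` is a hypothetical inference endpoint id:

```typescript
// Sketch only: a semantic_text property with chunking_settings explicitly
// set to null. The inference endpoint id is hypothetical.
const properties = {
  body_text: {
    type: 'semantic_text',
    inference_id: 'my-endpoint',
    chunking_settings: null
  }
}
```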
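The `certificate_identity` field added to the cross-cluster API key requests ties the key to a TLS certificate DN pattern. Request-body sketches (the key name, index pattern, id, and DN below are made up):

```typescript
// Sketch only: creating a cross-cluster API key bound to a certificate DN.
const createRequest = {
  name: 'remote-search-key',
  access: { search: [{ names: ['logs-*'] }] },
  certificate_identity: 'CN=remote-node,OU=search,O=example'
}

// Per the update doc comment above, an explicit null clears an existing
// certificate identity, while omitting the field leaves it unchanged.
const updateRequest = {
  id: 'my-api-key-id', // hypothetical API key id
  certificate_identity: null
}
```

Note the `SecurityApiKey.certificate_identity` doc comment: the property is only applicable to cross-cluster API keys and is not supported on Elastic Cloud Serverless.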
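Finally, `TransformSettings.use_point_in_time` opts a transform's checkpoint searches into the Point in Time API. A settings sketch (the page size value is an arbitrary example within the documented 10–65,536 range):

```typescript
// Sketch only: transform settings enabling Point in Time for checkpoints.
const settings = {
  max_page_search_size: 500,
  use_point_in_time: true
}
```

As the doc comment notes, this trades fewer refreshes and merges on the source index against the cost of opening and closing many Point in Times, so whether it helps depends on ingest rate and other consumers of the same index.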