diff --git a/elasticsearch/_async/client/__init__.py b/elasticsearch/_async/client/__init__.py index 802ec316f..0347cd37f 100644 --- a/elasticsearch/_async/client/__init__.py +++ b/elasticsearch/_async/client/__init__.py @@ -644,9 +644,12 @@ async def bulk( ] = None, ) -> ObjectApiResponse[t.Any]: """ - Bulk index or delete documents. Performs multiple indexing or delete operations - in a single API call. This reduces overhead and can greatly increase indexing - speed. + .. raw:: html + +
Bulk index or delete documents. + Performs multiple indexing or delete operations in a single API call. + This reduces overhead and can greatly increase indexing speed.
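The bulk body alternates action lines with document source lines in NDJSON. A minimal sketch of assembling such a body in plain Python (the index name and documents are illustrative, not part of the client):

```python
import json

def build_bulk_body(index, docs):
    """Assemble an NDJSON bulk body: one action line per document,
    immediately followed by the document source itself."""
    lines = []
    for doc_id, source in docs.items():
        lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
        lines.append(json.dumps(source))
    # The body must end with a newline character.
    return "\n".join(lines) + "\n"

body = build_bulk_body("my-index", {"1": {"title": "a"}, "2": {"title": "b"}})
```

Each pair of lines is one operation; a delete action would omit the source line.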
+ `Clear a scrolling search.
+Clear the search context and results for a scrolling search.
+ `Close a point in time.
+A point in time must be opened explicitly before being used in search requests.
+ The keep_alive parameter tells Elasticsearch how long it should persist.
+ A point in time is automatically closed when the keep_alive period has elapsed.
+ However, keeping points in time has a cost; close them as soon as they are no longer required for search requests.
Count search results. + Get the number of documents matching a query.
+ `Index a document. + Adds a JSON document to the specified data stream or index and makes it searchable. + If the target is an index and the document already exists, the request updates the document and increments its version.
+ `Delete a document. + Removes a JSON document from the specified index.
+ `Delete documents. + Deletes documents that match the specified query.
+ `Throttle a delete by query operation.
+Change the number of requests per second for a particular delete by query operation. + Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.

+ `Delete a script or search template. + Deletes a stored script or search template.
+ `Check a document. + Checks if a specified document exists.
+ `Check for a document source.
+ Checks if a document's _source is stored.
Explain a document match result. + Returns information about why a specific document matches, or doesn’t match, a query.
+ `Get the field capabilities.
+Get information about the capabilities of fields among multiple indices.
+For data streams, the API returns field capabilities among the stream’s backing indices.
+ It returns runtime fields like any other field.
+ For example, a runtime field with a type of keyword is returned the same as any other field that belongs to the keyword family.
Get a document by its ID. + Retrieves the document with the specified ID from an index.
+ `Get a script or search template. + Retrieves a stored script or search template.
+ `Get script contexts.
+Get a list of supported script contexts and their methods.
+ `Get script languages.
+Get a list of available script types, languages, and contexts.
+ `Get a document's source. + Returns the source of a document.
+ `Get the cluster health. + Get a report with the health status of an Elasticsearch cluster. + The report contains a list of indicators that compose Elasticsearch functionality.
+Each indicator has a health status of: green, unknown, yellow, or red. + The indicator will provide an explanation and metadata describing the reason for its current health status.
+The cluster’s status is controlled by the worst indicator status.
+In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. + Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.
+Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. + The root cause and remediation steps are encapsulated in a diagnosis. + A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.
+NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. + When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
+ `Index a document. + Adds a JSON document to the specified data stream or index and makes it searchable. + If the target is an index and the document already exists, the request updates the document and increments its version.
+ `Get cluster info. + Returns basic information about the cluster.
+ `Run a knn search.
+NOTE: The kNN search API has been replaced by the knn option in the search API.
Perform a k-nearest neighbor (kNN) search on a dense_vector field and return the matching documents. + Given a query vector, the API finds the k closest vectors and returns those documents as search hits.
+Elasticsearch uses the HNSW algorithm to support efficient kNN search. + Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. + This means the results returned are not always the true k closest neighbors.
+The kNN search API supports restricting the search using a filter. + The search will return the top k documents that also match the filter query.
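Since HNSW is approximate, it helps to know what the exact answer would be. A brute-force sketch of exact k-nearest neighbors in plain Python (the vectors and IDs are made up; this illustrates the ranking HNSW approximates, not the client API):

```python
import math

def exact_knn(query, vectors, k):
    """Brute-force exact kNN by Euclidean distance. HNSW trades
    some accuracy in this ranking for much faster search."""
    scored = sorted(vectors.items(), key=lambda item: math.dist(query, item[1]))
    return [doc_id for doc_id, _ in scored[:k]]

hits = exact_knn([0.0, 0.0], {"a": [1.0, 0.0], "b": [3.0, 4.0], "c": [0.5, 0.5]}, k=2)
```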
+ `Get multiple documents.
+Get multiple JSON documents by ID from one or more indices. + If you specify an index in the request URI, you only need to specify the document IDs in the request body. + To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.
+ `Run multiple searches.
+The format of the request is similar to the bulk API format and makes use of the newline delimited JSON (NDJSON) format. + The structure is as follows:
+header\\n
+ body\\n
+ header\\n
+ body\\n
+
+ This structure is specifically optimized to reduce parsing if a specific search ends up redirected to another node.
+IMPORTANT: The final line of data must end with a newline character \\n.
+ Each newline character may be preceded by a carriage return \\r.
+ When sending requests to this endpoint the Content-Type header should be set to application/x-ndjson.
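The header/body structure above can be sketched in plain Python (the indices and queries are illustrative):

```python
import json

def build_msearch_body(searches):
    """searches: list of (header, body) pairs. Returns an NDJSON string
    whose final line ends with a newline character, as required."""
    lines = []
    for header, body in searches:
        lines.append(json.dumps(header))
        lines.append(json.dumps(body))
    return "\n".join(lines) + "\n"

body = build_msearch_body([
    ({"index": "logs"}, {"query": {"match_all": {}}}),
    ({"index": "metrics"}, {"query": {"term": {"status": "ok"}}}),
])
```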
Run multiple templated searches.
+ `Get multiple term vectors.
+You can specify existing documents by index and ID or provide artificial documents in the body of the request.
+ You can specify the index in the request body or request URI.
+ The response contains a docs array with all the fetched termvectors.
+ Each element has the structure provided by the termvectors API.
Open a point in time.
+A search request by default runs against the most recent visible data of the target indices,
+ which is called the point in time. An Elasticsearch pit (point in time) is a lightweight view into the
+ state of the data as it existed when initiated. In some cases, it’s preferred to perform multiple
+ search requests using the same point in time. For example, if refreshes happen between
+ search_after requests, then the results of those requests might not be consistent, because changes happening
+ between searches are only visible to the more recent point in time.
A point in time must be opened explicitly before being used in search requests.
+ The keep_alive parameter tells Elasticsearch how long it should persist.
Create or update a script or search template. + Creates or updates a stored script or search template.
+ `Evaluate ranked search results.
+Evaluate the quality of ranked search results over a set of typical search queries.
+ `Reindex documents. + Copies documents from a source to a destination. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.
+ `Throttle a reindex operation.
+Change the number of requests per second for a particular reindex operation.
+ `Render a search template.
+Render a search template as a search request body.
+ `Run a script. + Runs a script and returns a result.
+ `Run a scrolling search.
+IMPORTANT: The scroll API is no longer recommended for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the search_after parameter with a point in time (PIT).
The scroll API gets large sets of results from a single scrolling search request.
+ To get the necessary scroll ID, submit a search API request that includes an argument for the scroll query parameter.
+ The scroll parameter indicates how long Elasticsearch should retain the search context for the request.
+ The search response returns a scroll ID in the _scroll_id response body parameter.
+ You can then use the scroll ID with the scroll API to retrieve the next batch of results for the request.
+ If the Elasticsearch security features are enabled, the access to the results of a specific scroll ID is restricted to the user or API key that submitted the search.
You can also use the scroll API to specify a new scroll parameter that extends or shortens the retention period for the search context.
+IMPORTANT: Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests.
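The request/response cycle described above has the shape of a drain loop. A sketch in plain Python, with an in-memory stub standing in for the client's scroll call (the stub and its page data are hypothetical, not the real client):

```python
def scroll_all(initial_page, fetch_next):
    """Drain a scrolling search: keep requesting the next batch with
    the scroll ID until an empty page comes back."""
    hits, scroll_id = initial_page
    collected = list(hits)
    while hits:
        hits, scroll_id = fetch_next(scroll_id)
        collected.extend(hits)
    return collected

# In-memory stand-in for a three-page scroll.
pages = {"s0": ([3, 4], "s1"), "s1": ([5], "s2"), "s2": ([], "s2")}
docs = scroll_all(([1, 2], "s0"), lambda sid: pages[sid])
```

With the real client, `fetch_next` would call the scroll API with the `_scroll_id` from the previous response.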
+ `Run a search.
+Get search hits that match the query defined in the request.
+ You can provide search queries using the q query string parameter or the request body.
+ If both are specified, only the query parameter is used.
Search a vector tile.
+Search a vector tile for geospatial values.
+ `Get the search shards.
+Get the indices and shards that a search request would be run against. + This information can be useful for working out issues or planning optimizations with routing and shard preferences. + When filtered aliases are used, the filter is returned as part of the indices section.
+ `Run a search with a search template.
+ `Get terms in an index.
+Discover terms that match a partial string in an index. + This "terms enum" API is designed for low-latency look-ups used in auto-complete scenarios.
+If the complete property in the response is false, the returned terms set may be incomplete and should be treated as approximate.
+ This can occur due to a few reasons, such as a request timeout or a node error.
NOTE: The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
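Conceptually, the lookup is a prefix match over an index's sorted term dictionary. An illustrative sketch in plain Python (the term set is made up; the real API works on Lucene's term dictionaries):

```python
def terms_enum(terms, prefix, size=10):
    """Return up to `size` index terms matching a partial string,
    the kind of low-latency lookup used for auto-complete."""
    return sorted(t for t in terms if t.startswith(prefix))[:size]

suggestions = terms_enum({"kibana", "kubernetes", "kafka", "logstash"}, "k")
```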
+ `Get term vector information.
+Get information and statistics about terms in the fields of a particular document.
+ `Update a document. + Updates a document by running a script or passing a partial document.
+ `Update documents. + Updates documents that match the specified query. + If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.
+ `Throttle an update by query operation.
+Change the number of requests per second for a particular update by query operation. + Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
+ `Delete an async search.
+If the asynchronous search is still running, it is cancelled.
+ Otherwise, the saved search results are deleted.
+ If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to: the authenticated user that submitted the original search request; users that have the cancel_task cluster privilege.
Get async search results.
+Retrieve the results of a previously submitted asynchronous search request. + If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.
+ `Get the async search status.
+Get the status of a previously submitted async search request given its identifier, without retrieving search results.
+ If the Elasticsearch security features are enabled, use of this API is restricted to the monitoring_user role.
Run an async search.
+When the primary sort of the results is an indexed field, shards get sorted based on the minimum and maximum value that they hold for that field. Partial results become available following the sort criteria that were requested.
+Warning: Asynchronous search does not support scroll or search requests that include only the suggest section.
+By default, Elasticsearch does not allow you to store an async search response larger than 10MB, and an attempt to do this results in an error.
+ The maximum allowed size for a stored async search response can be set by changing the search.max_async_search_response_size cluster level setting.
Delete an autoscaling policy.
+NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
+ `Get the autoscaling capacity.
+NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
+This API gets the current autoscaling capacity based on the configured autoscaling policy. + It will return information to size the cluster appropriately to the current workload.
+The required_capacity is calculated as the maximum of the required_capacity result of all individual deciders that are enabled for the policy.
The operator should verify that the current_nodes match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information.
The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. + This information is provided for diagnosis only. + Do not use this information to make autoscaling decisions.
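The `required_capacity` rule above is a per-dimension maximum over the enabled deciders. A sketch in plain Python (the decider names and byte values are illustrative):

```python
def required_capacity(deciders):
    """required_capacity is the maximum of the required_capacity
    results of all individual deciders enabled for the policy."""
    return {
        "storage": max(d["storage"] for d in deciders.values()),
        "memory": max(d["memory"] for d in deciders.values()),
    }

cap = required_capacity({
    "reactive_storage": {"storage": 2_000, "memory": 500},
    "proactive_storage": {"storage": 1_500, "memory": 800},
})
```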
+ `Get an autoscaling policy.
+NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
+ `Create or update an autoscaling policy.
+NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
+ `Get aliases. + Retrieves the cluster’s index aliases, including filter and routing information. + The API does not return data stream aliases.
+CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
+ `Provides a snapshot of the number of shards allocated to each data node and their disk space. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
+ `Get component templates. + Returns information about component templates in a cluster. + Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
+CAT APIs are only intended for human consumption using the command line or Kibana console. + They are not intended for use by applications. For application consumption, use the get component template API.
+ `Get a document count. + Provides quick access to a document count for a data stream, an index, or an entire cluster. + The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.
+CAT APIs are only intended for human consumption using the command line or Kibana console. + They are not intended for use by applications. For application consumption, use the count API.
+ `Returns the amount of heap memory currently used by the field data cache on every data node in the cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. + They are not intended for use by applications. For application consumption, use the nodes stats API.
+ `Returns the health status of a cluster, similar to the cluster health API.
+ IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console.
+ They are not intended for use by applications. For application consumption, use the cluster health API.
+ This API is often used to check malfunctioning clusters.
+ To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats:
+ HH:MM:SS, which is human-readable but includes no date information;
+ Unix epoch time, which is machine-sortable and includes date information.
+ The latter format is useful for cluster recoveries that take multiple days.
+ You can use the cat health API to verify cluster health across multiple nodes.
+ You also can use the API to track the recovery of a large cluster over a longer period of time.
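The two timestamp formats can be derived from the same instant. A sketch in plain Python (the epoch value is arbitrary; times are taken in UTC):

```python
from datetime import datetime, timezone

def health_timestamps(epoch_seconds):
    """Return the two formats cat health reports: Unix epoch time
    (machine-sortable, carries the date) and HH:MM:SS
    (human-readable, but without date information)."""
    ts = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return epoch_seconds, ts.strftime("%H:%M:%S")

epoch, hhmmss = health_timestamps(1_700_000_000)
```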
Get CAT help. + Returns help for the CAT APIs.
+ `Get index information. + Returns high-level information about indices in a cluster, including backing indices for data streams.
+Use this request to get the following information for each index in a cluster:
+These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. + To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
+CAT APIs are only intended for human consumption using the command line or Kibana console. + They are not intended for use by applications. For application consumption, use an index endpoint.
+ `Returns information about the master node, including the ID, bound IP address, and name. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
+ `Get data frame analytics jobs. + Returns configuration and usage information about data frame analytics jobs.
+CAT APIs are only intended for human consumption using the Kibana + console or command line. They are not intended for use by applications. For + application consumption, use the get data frame analytics jobs statistics API.
+ `Get datafeeds.
+ Returns configuration and usage information about datafeeds.
+ This API returns a maximum of 10,000 datafeeds.
+ If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage
+ cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana + console or command line. They are not intended for use by applications. For + application consumption, use the get datafeed statistics API.
+ `Get anomaly detection jobs.
+ Returns configuration and usage information for anomaly detection jobs.
+ This API returns a maximum of 10,000 jobs.
+ If the Elasticsearch security features are enabled, you must have monitor_ml,
+ monitor, manage_ml, or manage cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana + console or command line. They are not intended for use by applications. For + application consumption, use the get anomaly detection job statistics API.
+ `Get trained models. + Returns configuration and usage information about inference trained models.
+CAT APIs are only intended for human consumption using the Kibana + console or command line. They are not intended for use by applications. For + application consumption, use the get trained models statistics API.
+ `Returns information about custom node attributes. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
+ `Returns information about the nodes in a cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
+ `Returns cluster-level changes that have not yet been executed. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
+ `Returns a list of plugins running on each node of a cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
+ `Returns information about ongoing and completed shard recoveries. + Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. + For data streams, the API returns information about the stream’s backing indices. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
+ `Returns the snapshot repositories for a cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
+ `Returns low-level information about the Lucene segments in index shards. + For data streams, the API returns information about the backing indices. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.
+ `Returns information about the shards in a cluster. + For data streams, the API returns information about the backing indices. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
+ `Returns information about the snapshots stored in one or more repositories. + A snapshot is a backup of an index or running Elasticsearch cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
+ `Returns information about tasks currently executing in the cluster. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
+ `Returns information about index templates in a cluster. + You can use index templates to apply index settings and field mappings to new indices at creation. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
+ `Returns thread pool statistics for each node in a cluster. + Returned information includes all built-in thread pools and custom thread pools. + IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
+ `Get transforms. + Returns configuration and usage information about transforms.
+CAT APIs are only intended for human consumption using the Kibana + console or command line. They are not intended for use by applications. For + application consumption, use the get transform statistics API.
+ `Deletes auto-follow patterns.
+ `Creates a new follower index configured to follow the referenced leader index.
+ `Retrieves information about all follower indices, including parameters and status for each follower index.
+ `Retrieves follower stats. Returns shard-level stats about the following tasks associated with each shard for the specified indices.
+ `Removes the follower retention leases from the leader.
+ `Gets configured auto-follow patterns. Returns the specified auto-follow pattern collection.
+ `Pauses an auto-follow pattern.
+ `Pauses a follower index. The follower index will not fetch any additional operations from the leader index.
+ `Creates a new named collection of auto-follow patterns against a specified remote cluster. Newly created indices on the remote cluster matching any of the specified patterns will be automatically configured as follower indices.
+ `Resumes an auto-follow pattern that has been paused.
+ `Resumes a follower index that has been paused.
+ `Gets all stats related to cross-cluster replication.
+ `Stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication.
+ `Explain the shard allocations. + Get explanations for shard allocations in the cluster. + For unassigned shards, it provides an explanation for why the shard is unassigned. + For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. + This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
+ `Delete component templates. + Deletes component templates. + Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
+ `Clear cluster voting config exclusions. + Remove master-eligible nodes from the voting configuration exclusion list.
+ `Check component templates. + Returns information about whether a particular component template exists.
+ `Get component templates. + Retrieves information about component templates.
+ `Get cluster-wide settings. + By default, it returns only settings that have been explicitly defined.
+ `Get the cluster health status. + You can also use the API to get the health status of only specified data streams and indices. + For data streams, the API retrieves the health status of the stream’s backing indices.
+The cluster health status is: green, yellow or red. + On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. + The index level status is controlled by the worst shard status.
+One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. + The cluster status is controlled by the worst index status.
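The worst-status roll-up described above is easy to state in code. A sketch in plain Python (the shard statuses are illustrative):

```python
# Order of severity: green < yellow < red.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def rollup_status(statuses):
    """The index-level status is the worst shard status; the cluster
    status, in turn, is the worst index status."""
    return max(statuses, key=SEVERITY.__getitem__)

index_status = rollup_status(["green", "yellow", "green"])
cluster_status = rollup_status([index_status, "green"])
```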
+ `Get cluster info. + Returns basic information about the cluster.
+ `Get the pending cluster tasks. + Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.
+NOTE: This API returns a list of any pending updates to the cluster state. + These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. + However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task management API and the pending cluster tasks API.
+ `Update voting configuration exclusions. + Update the cluster voting config exclusions by node IDs or node names. + By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. + If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. + The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. + It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.
+Clusters should have no voting configuration exclusions in normal operation.
+ Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions.
+ This API waits for the nodes to be fully removed from the cluster before it returns.
+ If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.
A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions.
+ If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration.
+ In that case, you may safely retry the call.
NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. + They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
+ `Create or update a component template. + Creates or updates a component template. + Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
+An index template can be composed of multiple component templates.
+ To use a component template, specify it in an index template’s composed_of list.
+ Component templates are only applied to new data streams and indices as part of a matching index template.
Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template.
+Component templates are only used during index creation. + For data streams, this includes data stream creation and the creation of a stream’s backing indices. + Changes to component templates do not affect existing indices, including a stream’s backing indices.
+You can use C-style /* */ block comments in component templates.
+ You can include comments anywhere in the request body except before the opening curly bracket.
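The override order described above (component templates applied in composed_of order, then the index template's own settings on top) can be sketched in plain Python; the template names and settings here are hypothetical:

```python
def resolve_settings(component_templates, composed_of, index_template_settings):
    """Apply component templates in composed_of order, then let
    settings specified directly in the index template override them."""
    settings = {}
    for name in composed_of:
        settings.update(component_templates[name])
    settings.update(index_template_settings)
    return settings

settings = resolve_settings(
    {"base": {"number_of_shards": 1, "codec": "default"},
     "logs": {"number_of_shards": 3}},
    composed_of=["base", "logs"],
    index_template_settings={"codec": "best_compression"},
)
```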
Update the cluster settings.
+ Configure and update dynamic settings on a running cluster.
+ You can also configure dynamic settings locally on an unstarted or shut down node in elasticsearch.yml.
Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. + You can also reset transient or persistent settings by assigning them a null value.
+If you configure the same setting using multiple methods, Elasticsearch applies the settings in the following order of precedence: 1) Transient setting; 2) Persistent setting; 3) elasticsearch.yml setting; 4) Default setting value.
+ For example, you can apply a transient setting to override a persistent setting or elasticsearch.yml setting.
+ However, a change to an elasticsearch.yml setting will not override a defined transient or persistent setting.
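The precedence order can be sketched as a first-non-null lookup in plain Python (the setting values are illustrative):

```python
def effective_setting(transient=None, persistent=None, yml=None, default=None):
    """Resolve a setting configured multiple ways: transient wins over
    persistent, which wins over elasticsearch.yml, then the default."""
    for value in (transient, persistent, yml, default):
        if value is not None:
            return value
    return None

value = effective_setting(persistent="30s", yml="10s", default="1m")
```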
TIP: In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster.
+ If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings.
+ Only use elasticsearch.yml for static cluster settings and node settings.
+ The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.
WARNING: Transient cluster settings are no longer recommended. Use persistent cluster settings instead. + If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.
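A minimal sketch of an update-settings body reflecting the notes above: a persistent value that survives restarts, and a null assignment that resets a transient override (the setting name is just an example):

```python
# Persistent settings apply across cluster restarts; assigning None (JSON null)
# resets a setting, as described in the cluster settings documentation.
settings_body = {
    "persistent": {
        "indices.recovery.max_bytes_per_sec": "50mb",
    },
    "transient": {
        "indices.recovery.max_bytes_per_sec": None,  # reset any transient override
    },
}
```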
+ `Get remote cluster information. + Get all of the configured remote cluster information. + This API returns connection and endpoint information keyed by the configured remote cluster alias.
+ `Reroute the cluster. + Manually change the allocation of individual shards in the cluster. + For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.
+It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as cluster.routing.rebalance.enable) in order to remain in a balanced state.
+ For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.
The cluster can be set to disable allocations using the cluster.routing.allocation.enable setting.
+ If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.
The cluster will attempt to allocate a shard a maximum of index.allocation.max_retries times in a row (defaults to 5), before giving up and leaving the shard unallocated.
+ This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.
Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the ?retry_failed URI query parameter, which will attempt a single retry round for these shards.
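The reroute commands described above can be sketched as a request body like the following (index, shard, and node names are hypothetical):

```python
reroute_body = {
    "commands": [
        # Explicitly move a shard from one node to another.
        {"move": {"index": "test", "shard": 0,
                  "from_node": "node1", "to_node": "node2"}},
        # Cancel an in-flight allocation.
        {"cancel": {"index": "test", "shard": 1, "node": "node3"}},
        # Explicitly allocate an unassigned replica to a specific node.
        {"allocate_replica": {"index": "test", "shard": 1, "node": "node4"}},
    ]
}
```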
Get the cluster state. + Get comprehensive information about the state of the cluster.
+The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; the location and status of every shard copy in the cluster.
+The elected master node ensures that every node in the cluster has a copy of the same cluster state. + This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. + You may need to consult the Elasticsearch source code to determine the precise meaning of the response.
+By default the API will route requests to the elected master node since this node is the authoritative source of cluster states.
+ You can also retrieve the cluster state held on the node handling the API request by adding the ?local=true query parameter.
Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. + If you use this API repeatedly, your cluster may become unstable.
+WARNING: The response is a representation of an internal data structure. + Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. + Do not query this API using external monitoring tools. + Instead, obtain the information you require using other more stable cluster APIs.
+ `Get cluster statistics. + Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
+ `Check in a connector.
+Update the last_seen field in the connector and set it to the current timestamp.
Delete a connector.
+Removes a connector and associated sync jobs. + This is a destructive action that is not recoverable. + NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. + These need to be removed manually.
+ `Get a connector.
+Get the details about a connector.
+ `Update the connector last sync stats.
+Update the fields related to the last sync of a connector. + This action is used for analytics and monitoring.
+ `Get all connectors.
+Get information about all connectors.
+ `Create a connector.
+Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. + Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. + Self-managed connectors (Connector clients) are self-managed on your infrastructure.
+ `Create or update a connector.
+ `Cancel a connector sync job.
+Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at to the current time.
+ The connector service is then responsible for setting the status of connector sync jobs to cancelled.
Delete a connector sync job.
+Remove a connector sync job and its associated data. + This is a destructive action that is not recoverable.
+ `Get a connector sync job.
+ `Get all connector sync jobs.
+Get information about all stored connector sync jobs listed by their creation date in ascending order.
+ `Create a connector sync job.
+Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.
+ `Activate the connector draft filter.
+Activates the valid draft filtering for a connector.
+ `Update the connector API key ID.
+Update the api_key_id and api_key_secret_id fields of a connector.
+ You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored.
+ The connector secret ID is required only for Elastic managed (native) connectors.
+ Self-managed connectors (connector clients) do not use this field.
Update the connector configuration.
+Update the configuration field in the connector document.
+ `Update the connector error field.
+Set the error field for the connector. + If the error provided in the request body is non-null, the connector’s status is updated to error. + Otherwise, if the error is reset to null, the connector status is updated to connected.
+ `Update the connector filtering.
+Update the draft filtering configuration of a connector and marks the draft validation state as edited. + The filtering draft is activated once validated by the running Elastic connector service. + The filtering property is used to configure sync rules (both basic and advanced) for a connector.
+ `Update the connector draft filtering validation.
+Update the draft filtering validation info for a connector.
+ `Update the connector index name.
+Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.
Update the connector name and description.
+ `Update the connector is_native flag.
+ `Update the connector pipeline.
+When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
+ `Update the connector scheduling.
+ `Update the connector service type.
+ `Update the connector status.
+ `Delete a dangling index.
+If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
+ For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Import a dangling index.
+If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
+ For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Get the dangling indices.
+If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
+ For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Use this API to list dangling indices, which you can then import or delete.
+ `Delete an enrich policy. + Deletes an existing enrich policy and its enrich index.
+ `Run an enrich policy. + Create the enrich index for an existing enrich policy.
+ `Get an enrich policy. + Returns information about an enrich policy.
+ `Create an enrich policy. + Creates an enrich policy.
+ `Get enrich stats. + Returns enrich coordinator statistics and information about enrich policies that are currently executing.
+ `Delete an async EQL search. + Delete an async EQL search or a stored synchronous EQL search. + The API also deletes results for the search.
+ `Get async EQL search results. + Get the current status and available results for an async EQL search or a stored synchronous EQL search.
+ `Get the async EQL status. + Get the current status for an async EQL search or a stored synchronous EQL search without returning results.
+ `Get EQL search results. + Returns search results for an Event Query Language (EQL) query. + EQL assumes each document in a data stream or index corresponds to an event.
+ `Run an ES|QL query. + Get search results for an ES|QL (Elasticsearch query language) query.
+ `Gets a list of features which can be included in snapshots using the feature_states field when creating a snapshot
+ `Resets the internal state of features, usually by deleting system indices
+ `Returns the current global checkpoints for an index. This API is design for internal use by the fleet server project.
+ `Executes several fleet searches with a single API request. + The API follows the same structure as the multi search API. However, similar to the fleet search API, it + supports the wait_for_checkpoints parameter.
+ :param searches: :param index: A single target to search. If the target is an index alias, it @@ -378,9 +382,11 @@ async def search( body: t.Optional[t.Dict[str, t.Any]] = None, ) -> ObjectApiResponse[t.Any]: """ - The purpose of the fleet search api is to provide a search api where the search - will only be executed after provided checkpoint has been processed and is visible - for searches inside of Elasticsearch. + .. raw:: html + + <p>The purpose of the fleet search api is to provide a search api where the search will only be executed + after the provided checkpoint has been processed and is visible for searches inside of Elasticsearch.</p>
+ :param index: A single target to search. If the target is an index alias, it must resolve to a single index. diff --git a/elasticsearch/_async/client/graph.py b/elasticsearch/_async/client/graph.py index df8f3fdbe..676720b7a 100644 --- a/elasticsearch/_async/client/graph.py +++ b/elasticsearch/_async/client/graph.py @@ -45,14 +45,15 @@ async def explore( body: t.Optional[t.Dict[str, t.Any]] = None, ) -> ObjectApiResponse[t.Any]: """ - Explore graph analytics. Extract and summarize information about the documents - and terms in an Elasticsearch data stream or index. The easiest way to understand - the behavior of this API is to use the Graph UI to explore connections. An initial - request to the `_explore` API contains a seed query that identifies the documents - of interest and specifies the fields that define the vertices and connections - you want to include in the graph. Subsequent requests enable you to spider out - from one more vertices of interest. You can exclude vertices that have already - been returned. + .. raw:: html + +Explore graph analytics.
+ Extract and summarize information about the documents and terms in an Elasticsearch data stream or index.
+ The easiest way to understand the behavior of this API is to use the Graph UI to explore connections.
+ An initial request to the _explore API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph.
+ Subsequent requests enable you to spider out from one or more vertices of interest.
+ You can exclude vertices that have already been returned.
Delete a lifecycle policy. + You cannot delete policies that are currently in use. If the policy is being used to manage any indices, the request fails and returns an error.
+ `Explain the lifecycle state. + Get the current lifecycle status for one or more indices. + For data streams, the API retrieves the current lifecycle status for the stream's backing indices.
+The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
+ `Get lifecycle policies.
+ `Get the ILM status. + Get the current index lifecycle management status.
+ `Migrate to data tiers routing. + Switch the indices, ILM policies, and legacy, composable, and component templates from using custom node attributes and attribute-based allocation filters to using data tiers. + Optionally, delete one legacy index template. + Using node roles enables ILM to automatically move the indices between data tiers.
+Migrating away from custom node attributes routing can be manually performed. + This API provides an automated way of performing three out of the four manual steps listed in the migration guide:
+ILM must be stopped before performing the migration.
+ Use the stop ILM and get ILM status APIs to wait until the reported operation mode is STOPPED.
Move to a lifecycle step. + Manually move an index into a specific step in the lifecycle policy and run that step.
+WARNING: This operation can result in the loss of data. Manually moving an index into a specific step runs that step even if it has already been performed. This is a potentially destructive action and this should be considered an expert level API.
+You must specify both the current step and the step to be executed in the body of the request. + The request will fail if the current step does not match the step currently running for the index + This is to prevent the index from being moved from an unexpected step into the next step.
+When specifying the target (next_step) to which the index will be moved, either the name or both the action and name fields are optional.
+ If only the phase is specified, the index will move to the first step of the first action in the target phase.
+ If the phase and action are specified, the index will move to the first step of the specified action in the specified phase.
+ Only actions specified in the ILM policy are considered valid.
+ An index cannot move to a step that is not part of its policy.
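A sketch of the move-to-step body implied by the rules above: `current_step` must match what ILM reports for the index, and `next_step` names only the target phase, so the index moves to that phase's first step (all values are illustrative):

```python
move_body = {
    "current_step": {"phase": "new", "action": "complete", "name": "complete"},
    # Only "phase" is given, so the index moves to the first step of the
    # first action in the "warm" phase.
    "next_step": {"phase": "warm"},
}
```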
Create or update a lifecycle policy. + If the specified policy exists, it is replaced and the policy version is incremented.
+NOTE: Only the latest version of the policy is stored, you cannot revert to previous versions.
+ `Remove policies from an index. + Remove the assigned lifecycle policies from an index or a data stream's backing indices. + It also stops managing the indices.
+ `Retry a policy. + Retry running the lifecycle policy for an index that is in the ERROR step. + The API sets the policy back to the step where the error occurred and runs the step. + Use the explain lifecycle state API to determine whether an index is in the ERROR step.
+ `Start the ILM plugin. + Start the index lifecycle management plugin if it is currently stopped. + ILM is started automatically when the cluster is formed. + Restarting ILM is necessary only when it has been stopped using the stop ILM API.
+ `Stop the ILM plugin. + Halt all lifecycle management operations and stop the index lifecycle management plugin. + This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices.
+The API returns as soon as the stop request has been acknowledged, but the plugin might continue to run until in-progress operations complete and the plugin can be safely stopped. + Use the get ILM status API to check whether ILM is running.
+ `Add an index block. + Limits the operations allowed on an index by blocking specific operation types.
+ `Get tokens from text analysis. + The analyze API performs analysis on a text string and returns the resulting tokens.
+ `Clears the caches of one or more indices. + For data streams, the API clears the caches of the stream’s backing indices.
+ `Clones an existing index.
+ `Closes an index.
+ `Create an index. + Creates a new index.
+ `Create a data stream. + Creates a data stream. + You must have a matching index template with data stream enabled.
+ `Get data stream stats. + Retrieves statistics for one or more data streams.
+ `Delete indices. + Deletes one or more indices.
+ `Delete an alias. + Removes a data stream or index from an alias.
+ `Delete data stream lifecycles. + Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.
+ `Delete data streams. + Deletes one or more data streams and their backing indices.
+ `Delete an index template. + The provided may contain multiple template names separated by a comma. If multiple template + names are specified then there is no wildcard support and the provided names should match completely with + existing templates.
+ `Deletes a legacy index template.
+ `Analyzes the disk usage of each field of an index or data stream.
+ `Aggregates a time series (TSDS) index and stores pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval.
Check indices. + Checks if one or more indices, index aliases, or data streams exist.
+ `Check aliases. + Checks if one or more data stream or index aliases exist.
+ `Check index templates. + Check whether index templates exist.
+ `Check existence of index templates. + Returns information about whether a particular index template exists.
+ `Get the status for a data stream lifecycle. + Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
+ `Returns field usage information for each shard and field of an index.
+ `Flushes one or more data streams or indices.
+ `Performs the force merge operation on one or more indices.
+ `Get index information. + Returns information about one or more indices. For data streams, the API returns information about the + stream’s backing indices.
+ `Get aliases. + Retrieves information for one or more data stream or index aliases.
+ `Get data stream lifecycles. + Retrieves the data stream lifecycle configuration of one or more data streams.
+ `Get data streams. + Retrieves information about one or more data streams.
+ `Get mapping definitions. + Retrieves mapping definitions for one or more fields. + For data streams, the API retrieves field mappings for the stream’s backing indices.
+ `Get index templates. + Returns information about one or more index templates.
+ `Get mapping definitions. + Retrieves mapping definitions for one or more indices. + For data streams, the API retrieves mappings for the stream’s backing indices.
+ `Get index settings. + Returns setting information for one or more indices. For data streams, + returns setting information for the stream’s backing indices.
+ `Get index templates. + Retrieves information about one or more index templates.
+ `Convert an index alias to a data stream.
+ Converts an index alias to a data stream.
+ You must have a matching index template that is data stream enabled.
+ The alias must meet the following criteria:
+ The alias must have a write index;
+ All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type;
+ The alias must not have any filters;
+ The alias must not use custom routing.
+ If successful, the request removes the alias and creates a data stream with the same name.
+ The indices for the alias become hidden backing indices for the stream.
+ The write index for the alias becomes the write index for the stream.
Update data streams. + Performs one or more data stream modification actions in a single atomic operation.
+ `Opens a closed index. + For data streams, the API opens any closed backing indices.
+ `Promotes a data stream from a replicated data stream managed by CCR to a regular data stream
+ `Create or update an alias. + Adds a data stream or index to an alias.
+ `Update data stream lifecycles. + Update the data stream lifecycle of the specified data streams.
+ `Create or update an index template. + Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
+ `Update field mappings. + Adds new fields to an existing data stream or index. + You can also use this API to change the search settings of existing fields. + For data streams, these changes are applied to all backing indices by default.
+ `Update index settings. + Changes dynamic index settings in real time. For data streams, index setting + changes are applied to all backing indices by default.
+ `Create or update an index template. + Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
+ `Returns information about ongoing and completed shard recoveries for one or more indices. + For data streams, the API returns information for the stream’s backing indices.
+ `Refresh an index. + A refresh makes recent operations performed on one or more indices available for search. + For data streams, the API runs the refresh operation on the stream’s backing indices.
+ `Reloads an index's search analyzers and their resources.
+ `Resolves the specified index expressions to return information about each cluster, including + the local cluster, if included. + Multiple patterns and remote clusters are supported.
+ `Resolve indices. + Resolve the names and/or index patterns for indices, aliases, and data streams. + Multiple patterns and remote clusters are supported.
+ `Roll over to a new index. + Creates a new index for a data stream or index alias.
+ `Returns low-level information about the Lucene segments in index shards. + For data streams, the API returns information about the stream’s backing indices.
+ `Retrieves store information about replica shards in one or more indices. + For data streams, the API retrieves store information for the stream’s backing indices.
+ `Shrinks an existing index into a new index with fewer primary shards.
+ `Simulate an index. + Returns the index configuration that would be applied to the specified index from an existing index template.
+ `Simulate an index template. + Returns the index configuration that would be applied by a particular index template.
+ `Splits an existing index into a new index with more primary shards.
+ `Returns statistics for one or more indices. + For data streams, the API retrieves statistics for the stream’s backing indices.
+ `Unfreezes an index.
+ `Create or update an alias. + Adds a data stream or index to an alias.
+ `Validate a query. + Validates a query without running it.
+ `Delete an inference endpoint
+ `Get an inference endpoint
+ `Perform inference on the service
+ `Create an inference endpoint
+ `Delete GeoIP database configurations. + Delete one or more IP geolocation database configurations.
+ `Deletes an IP location database configuration.
+ `Delete pipelines. + Delete one or more ingest pipelines.
+ `Get GeoIP statistics. + Get download statistics for GeoIP2 databases that are used with the GeoIP processor.
+ `Get GeoIP database configurations. + Get information about one or more IP geolocation database configurations.
+ `Returns information about one or more IP location database configurations.
+ `Get pipelines. + Get information about one or more ingest pipelines. + This API returns a local reference of the pipeline.
+ `Run a grok processor. + Extract structured fields out of a single text field within a document. + You must choose which field to extract matched fields from, as well as the grok pattern you expect will match. + A grok pattern is like a regular expression that supports aliased expressions that can be reused.
+ `Create or update GeoIP database configurations. + Create or update IP geolocation database configurations.
+ `Returns information about one or more IP location database configurations.
+ `Create or update a pipeline. + Changes made using this API take effect immediately.
+ `Simulate a pipeline. + Run an ingest pipeline against a set of provided documents. + You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.
+ `Deletes licensing information for the cluster
+ `Get license information. + Returns information about your Elastic license, including its type, its status, when it was issued, and when it expires. + For more information about the different types of licenses, refer to Elastic Stack subscriptions.
+ `Retrieves information about the status of the basic license.
+ `Retrieves information about the status of the trial license.
+ `Updates the license for the cluster.
+ `The start basic API enables you to initiate an indefinite basic license, which gives access to all the basic features. If the basic license does not support all of the features that are available with your current license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true. + To check the status of your basic license, use the following API: Get basic status.
+ `The start trial API enables you to start a 30-day trial, which gives access to all subscription features.
+ `Deletes a pipeline used for Logstash Central Management.
+ `Retrieves pipelines used for Logstash Central Management.
+ `Creates or updates a pipeline used for Logstash Central Management.
+ `Retrieves information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.
+ `Find out whether system features need to be upgraded or not
+ `Begin upgrades for system features
+ `Clear trained model deployment cache. + Cache will be cleared on all nodes where the trained model is assigned. + A trained model deployment may have an inference cache enabled. + As requests are handled by each allocated node, their responses may be cached on that individual node. + Calling this API clears the caches without restarting the deployment.
+ `Close anomaly detection jobs. + A job can be opened and closed multiple times throughout its lifecycle. A closed job cannot receive data or perform analysis operations, but you can still explore and navigate results. + When you close a job, it runs housekeeping tasks such as pruning the model history, flushing buffers, calculating final results and persisting the model snapshots. Depending upon the size of the job, it could take several minutes to close and the equivalent time to re-open. After it is closed, the job has a minimal overhead on the cluster except for maintaining its meta data. Therefore it is a best practice to close jobs that are no longer required to process data. + If you close an anomaly detection job whose datafeed is running, the request first tries to stop the datafeed. This behavior is equivalent to calling stop datafeed API with the same timeout and force parameters as the close job request. + When a datafeed that has a specified end date stops, it automatically closes its associated job.
+ `Delete a calendar. + Removes all scheduled events from a calendar, then deletes it.
+ `Delete events from a calendar.
+ `Delete anomaly jobs from a calendar.
+ `Delete a data frame analytics job.
+ `Delete a datafeed.
+ `