* Each original primary shard is cloned into a new primary shard in the new index.
*
* IMPORTANT: Elasticsearch does not apply index templates to the resulting index.
* The API also does not copy index metadata from the original index.
* Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information.
* For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
*
* The clone API copies most index settings from the source index to the resulting index, with the exception of `index.number_of_replicas` and `index.auto_expand_replicas`.
* To set the number of replicas in the resulting index, configure these settings in the clone request.
*
* Cloning works as follows:
*
* * First, it creates a new target index with the same definition as the source index.
* * Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
* * Finally, it recovers the target index as though it were a closed index which had just been re-opened.
*
* IMPORTANT: Indices can only be cloned if they meet the following requirements:
*
* * The target index must not exist.
* * The source index must have the same number of primary shards as the target index.
* * The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
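For illustration, a minimal sketch of such a clone request using the JavaScript client (the index names and replica count are hypothetical; assumes a current `@elastic/elasticsearch` client):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// The source index must have writes blocked before it can be cloned.
await client.indices.addBlock({ index: 'my-index', block: 'write' })

// Clone my-index into my-index-clone; replica settings are not copied,
// so set index.number_of_replicas explicitly in the clone request.
await client.indices.clone({
  index: 'my-index',
  target: 'my-index-clone',
  settings: { 'index.number_of_replicas': 1 },
})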
* A closed index is blocked for read or write operations and does not allow all operations that opened indices allow.
* It is not possible to index documents or to search for documents in a closed index.
* Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
*
* When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index.
* The shards will then go through the normal recovery process.
* The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
*
* You can open and close multiple indices.
* An error is thrown if the request explicitly refers to a missing index.
* This behaviour can be turned off using the `ignore_unavailable=true` parameter.
*
* By default, you must explicitly name the indices you are opening or closing.
* To open or close indices with `_all`, `*`, or other wildcard expressions, change the `action.destructive_requires_name` setting to `false`. This setting can also be changed with the cluster update settings API.
*
* Closed indices consume a significant amount of disk space, which can cause problems in managed environments.
* Closing indices can be turned off with the cluster settings API by setting `cluster.indices.close.enable` to `false`.
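A minimal sketch of closing and reopening an index with the JavaScript client (hypothetical index name; assumes a current `@elastic/elasticsearch` client):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Close the index; ignore_unavailable suppresses the error for a missing index.
await client.indices.close({ index: 'my-index', ignore_unavailable: true })

// Reopen it; the shards then go through the normal recovery process.
await client.indices.open({ index: 'my-index' })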
specification/indices/downsample/Request.ts (+8 −1)
@@ -22,7 +22,14 @@ import { RequestBase } from '@_types/Base'
import { IndexName } from '@_types/common'

/**
* Downsample an index.
* Aggregate a time series (TSDS) index and store pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval.
* For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
* All documents within an hour interval are summarized and stored as a single document in the downsample index.
*
* NOTE: Only indices in a time series data stream are supported.
* Neither field nor document level security can be defined on the source index.
* The source index must be read only (`index.blocks.write: true`).
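A sketch of the downsampling flow with the JavaScript client (hypothetical names; the body with a `fixed_interval` mirrors this specification, but treat the exact client call shape as an assumption):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// The source index must be read only before it can be downsampled.
await client.indices.addBlock({ index: 'my-tsds-index', block: 'write' })

// Roll 10-second samples up into one summary document per hour.
await client.indices.downsample({
  index: 'my-tsds-index',
  target_index: 'my-tsds-index-1h',
  config: { fixed_interval: '1h' }, // assumed name for the downsample config body
})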
* Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index.
* When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart.
* Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
*
* After each operation has been flushed, it is permanently stored in the Lucene index.
* This may mean that there is no need to maintain an additional copy of it in the transaction log.
* The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
*
* It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly.
* If you call the flush API after indexing some documents, then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
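For the rare case where calling the flush API directly is useful, a minimal sketch with the JavaScript client (hypothetical index name):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// On success, every document indexed before this call has been flushed
// from the transaction log into the Lucene index.
await client.indices.flush({ index: 'my-index' })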
specification/indices/forcemerge/IndicesForceMergeRequest.ts (+14 −0)
@@ -22,9 +22,23 @@ import { ExpandWildcards, Indices } from '@_types/common'
import { long } from '@_types/Numeric'

/**
* Force a merge.
* Perform the force merge operation on the shards of one or more indices.
* For data streams, the API forces a merge on the shards of the stream's backing indices.
*
* Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents.
* Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
*
* WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes).
* When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone".
* These soft-deleted documents are automatically cleaned up during regular segment merges.
* But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges.
* So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance.
* If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.
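A sketch of a manual merge on an index that no longer receives writes, using the JavaScript client (hypothetical index name; merging each shard down to one segment is a common choice for read-only indices, not a requirement):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Merge each shard of a read-only index down to a single segment,
// reclaiming space held by soft-deleted documents.
await client.indices.forcemerge({
  index: 'my-read-only-index',
  max_num_segments: 1,
})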
specification/indices/promote_data_stream/IndicesPromoteDataStreamRequest.ts (+11 −0)
@@ -22,6 +22,17 @@ import { IndexName } from '@_types/common'
import { Duration } from '@_types/Time'

/**
* Promote a data stream.
* Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
*
* With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster.
* These data streams can't be rolled over in the local cluster.
* These replicated data streams roll over only if the upstream data stream rolls over.
* In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
*
* NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream.
* If this is missing, the data stream will not be able to roll over until a matching index template is created.
* This will affect the lifecycle management of the data stream and interfere with the data stream's size and retention.
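A minimal sketch with the JavaScript client (hypothetical data stream name; a matching data stream enabled index template should already exist in the local cluster):

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

// Promote the replicated data stream so it can roll over locally.
await client.indices.promoteDataStream({ name: 'my-data-stream' })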