Commit c32024b

Author: Adam Locke
[7.8] [DOCS] Updating snapshot/restore pages to align with API changes
1 parent 69c0e7c commit c32024b

File tree

6 files changed: +234 −268 lines

docs/reference/snapshot-restore/apis/get-snapshot-status-api.asciidoc

Lines changed: 1 addition & 3 deletions
@@ -88,9 +88,7 @@ Use the get snapshot status API to retrieve detailed information about snapshots
 
 If you specify both the repository name and snapshot, the request retrieves detailed status information for the given snapshot, even if not currently running.
 
-WARNING: Using this API to return any status results other than the currently running snapshots (`_current`) can be very expensive. Each request to retrieve snapshot status results in file reads from every shard in a snapshot, for each snapshot.
-
-For example, if you have 100 snapshots with 1,000 shards each, the API request will result in 100,000 file reads (100 snapshots * 1,000 shards). Depending on the latency of your file storage, the request can take extremely long to retrieve results.
+include::{es-ref-dir}/snapshot-restore/monitor-snapshot-restore.asciidoc[tag=get-snapshot-status-warning]
 
 [[get-snapshot-status-api-path-params]]
 ==== {api-path-parms-title}

docs/reference/snapshot-restore/delete-snapshot.asciidoc

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
+[[delete-snapshots]]
+== Delete a snapshot
+
+////
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location"
+  }
+}
+
+PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+
+PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
+
+PUT /_snapshot/my_backup/snapshot_3?wait_for_completion=true
+-----------------------------------
+// TESTSETUP
+
+////
+
+Use the <<delete-snapshot-api,delete snapshot API>> to delete a snapshot
+from the repository:
+
+[source,console]
+----
+DELETE /_snapshot/my_backup/snapshot_1
+----
+
+When a snapshot is deleted from a repository, {es} deletes all files associated with the
+snapshot that are not in use by other snapshots.
+
+If the delete snapshot operation starts while the snapshot is being
+created, the snapshot process halts and all files created as part of the snapshotting process are
+removed. Use the <<delete-snapshot-api,delete snapshot API>> to cancel long-running snapshot operations that were
+started by mistake.
+
+To delete multiple snapshots from a repository, separate the snapshot names with commas or use wildcards:
+
+[source,console]
+-----------------------------------
+DELETE /_snapshot/my_backup/snapshot_2,snapshot_3
+DELETE /_snapshot/my_backup/snap*
+-----------------------------------
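
To confirm the result of a delete, one option (a usage sketch, not part of this commit's files) is to list the snapshots that remain in the repository:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all
-----------------------------------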

docs/reference/snapshot-restore/index.asciidoc

Lines changed: 16 additions & 12 deletions
@@ -5,25 +5,29 @@
 --
 
 // tag::snapshot-intro[]
-A _snapshot_ is a backup taken from a running {es} cluster.
-You can take snapshots of individual indices or of the entire cluster.
-Snapshots can be stored in either local or remote repositories.
-Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage,
+A _snapshot_ is a backup taken from a running {es} cluster.
+You can take snapshots of an entire cluster, including all its
+indices. You can also take snapshots of only specific indices in
+the cluster.
+
+Snapshots can be stored in either local or remote repositories.
+Remote repositories can reside on Amazon S3, HDFS, Microsoft Azure,
+Google Cloud Storage,
 and other platforms supported by a repository plugin.
 
-Snapshots are incremental: each snapshot of an index only stores data that
+Snapshots are incremental: each snapshot of an index only stores data that
 is not part of an earlier snapshot.
 This enables you to take frequent snapshots with minimal overhead.
-// end::snapshot-intro[]
+// end::snapshot-intro[]
 
 // tag::restore-intro[]
-You can restore snapshots to a running cluster with the <<snapshots-restore-snapshot,restore API>>.
-By default, all indices in the snapshot are restored.
-Alternatively, you can restore specific indices or restore the cluster state from a snapshot.
-When restoring indices, you can modify the index name and selected index settings.
+You can restore snapshots to a running cluster with the <<snapshots-restore-snapshot,restore API>>.
+By default, all indices in the snapshot are restored.
+Alternatively, you can restore specific indices or restore the cluster state from a snapshot.
+When restoring indices, you can modify the index name and selected index settings.
 // end::restore-intro[]
 
-You must <<snapshots-register-repository, register a snapshot repository>>
+You must <<snapshots-register-repository, register a snapshot repository>>
 before you can <<snapshots-take-snapshot, take snapshots>>.
 
 You can use <<getting-started-snapshot-lifecycle-management, snapshot lifecycle management>>
@@ -92,5 +96,5 @@ include::register-repository.asciidoc[]
 include::take-snapshot.asciidoc[]
 include::restore-snapshot.asciidoc[]
 include::monitor-snapshot-restore.asciidoc[]
+include::delete-snapshot.asciidoc[]
 include::../slm/index.asciidoc[]
-

docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc

Lines changed: 119 additions & 32 deletions
@@ -1,13 +1,15 @@
 [[snapshots-monitor-snapshot-restore]]
 == Monitor snapshot and restore progress
-
 ++++
 <titleabbrev>Monitor snapshot and restore</titleabbrev>
 ++++
 
-There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both
-operations support `wait_for_completion` parameter that would block client until the operation is completed. This is
-the simplest method that can be used to get notified about operation completion.
+Use the <<get-snapshot-api,get snapshot API>> or the
+<<get-snapshot-status-api,get snapshot status API>> to monitor the
+progress of snapshot operations. Both APIs support the
+`wait_for_completion` parameter that blocks the client until the
+operation finishes, which is the simplest method of being notified
+about operation completion.
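
As an illustrative sketch (the snapshot name `snapshot_4` is hypothetical), a create-snapshot request with this parameter blocks until the snapshot finishes:

[source,console]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_4?wait_for_completion=true
-----------------------------------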
 
 ////
 [source,console]
@@ -20,70 +22,155 @@ PUT /_snapshot/my_backup
 }
 }
 
+PUT /_snapshot/my_fs_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_other_backup_location"
+  }
+}
+
 PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+
+PUT /_snapshot/my_backup/some_other_snapshot?wait_for_completion=true
 -----------------------------------
 // TESTSETUP
 
 ////
 
-The snapshot operation can be also monitored by periodic calls to the snapshot info:
+Use the `_current` parameter to retrieve all currently running
+snapshots in the cluster:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_current
+-----------------------------------
+
+Including a snapshot name in the request retrieves information about a single snapshot:
 
 [source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1
 -----------------------------------
 
-Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
-executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait
-for available resources before returning the result. On very large shards the wait time can be significant.
+This request retrieves basic information about the snapshot, including the start and end time, the version of
+{es} that created the snapshot, the list of included indices, the current state of the
+snapshot, and the list of failures that occurred during the snapshot.
 
-To get more immediate and complete information about snapshots the snapshot status command can be used instead:
+Similar to repositories, you can retrieve information about multiple snapshots in a single request, and wildcards are supported:
 
 [source,console]
 -----------------------------------
-GET /_snapshot/my_backup/snapshot_1/_status
+GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
+-----------------------------------
+
+Separate repository names with commas or use wildcards to retrieve snapshots from multiple repositories:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_all
+GET /_snapshot/my_backup,my_fs_backup
+GET /_snapshot/my*
+-----------------------------------
+
+Add the `_all` parameter to the request to list all snapshots currently stored in the repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_all
 -----------------------------------
-// TEST[continued]
 
-While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns
+This request fails if some of the snapshots are unavailable. Use the boolean parameter `ignore_unavailable` to
+return all snapshots that are currently available.
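
For example, `ignore_unavailable` can be passed as a query parameter (a sketch using the repository defined above):

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all?ignore_unavailable=true
-----------------------------------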
+
+Getting all snapshots in the repository can be costly on cloud-based repositories,
+both from a cost and performance perspective. If the only information required is
+the snapshot names or UUIDs in the repository and the indices in each snapshot, then
+the optional boolean parameter `verbose` can be set to `false` to execute a more
+performant and cost-effective retrieval of the snapshots in the repository.
+
+NOTE: Setting `verbose` to `false` omits additional information
+about the snapshot, such as metadata, start and end time, the number of shards included in the snapshot, and error messages. The default value of the `verbose` parameter is `true`.
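
For illustration, a terse, lower-cost listing of the repository's snapshots might look like this:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all?verbose=false
-----------------------------------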
+
+[discrete]
+[[get-snapshot-detailed-status]]
+=== Retrieving snapshot status
+To retrieve more detailed information about snapshots, use the <<get-snapshot-status-api,get snapshot status API>>. While the get snapshot request returns only basic information about the snapshot in progress, the get snapshot status request returns a
 complete breakdown of the current state for each shard participating in the snapshot.
 
-The restore process piggybacks on the standard recovery mechanism of the Elasticsearch. As a result, standard recovery
-monitoring services can be used to monitor the state of restore. When restore operation is executed the cluster
-typically goes into `red` state. It happens because the restore operation starts with "recovering" primary shards of the
-restored indices. During this operation the primary shards become unavailable which manifests itself in the `red` cluster
-state. Once recovery of primary shards is completed Elasticsearch is switching to standard replication process that
-creates the required number of replicas at this moment cluster switches to the `yellow` state. Once all required replicas
-are created, the cluster switches to the `green` states.
+// tag::get-snapshot-status-warning[]
+[WARNING]
+====
+Using the get snapshot status API to return any status results other than the currently running snapshots (`_current`) can be very expensive. Each request to retrieve snapshot status results in file reads from every shard in a snapshot, for each snapshot. Such requests are taxing to machine resources and can also incur high processing costs when running in the cloud.
+
+For example, if you have 100 snapshots with 1,000 shards each, the API request will result in 100,000 file reads (100 snapshots * 1,000 shards). Depending on the latency of your file storage, the request can take extremely long to retrieve results.
+====
+// end::get-snapshot-status-warning[]
+
+The following request retrieves all currently running snapshots with
+detailed status information:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_status
+-----------------------------------
+
+By specifying a repository name, it's possible
+to limit the results to a particular repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_status
+-----------------------------------
+
+If both the repository name and snapshot name are specified, the request
+returns detailed status information for the given snapshot, even
+if not currently running:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1/_status
+-----------------------------------
+
+[discrete]
+=== Monitoring restore operations
+
+The restore process piggybacks on the standard recovery mechanism of
+{es}. As a result, standard recovery monitoring services can be used
+to monitor the state of restore. When the restore operation starts, the
+cluster typically goes into `yellow` state because the restore operation works
+by recovering primary shards of the restored indices. After the recovery of the
+primary shards is completed, {es} switches to the standard replication
+process that creates the required number of replicas. When all required
+replicas are created, the cluster switches to the `green` state.
 
 The cluster health operation provides only a high level status of the restore process. It's possible to get more
 detailed insight into the current state of the recovery process by using <<indices-recovery, index recovery>> and
 <<cat-recovery, cat recovery>> APIs.
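
As a sketch (the index name `restored_index` is hypothetical), these monitoring APIs can be queried as follows:

[source,console]
-----------------------------------
GET /_cluster/health
GET /restored_index/_recovery
GET /_cat/recovery?v
-----------------------------------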
 
-[float]
+[discrete]
+[[get-snapshot-stop-snapshot]]
 === Stop snapshot and restore operations
-
 The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
-running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshot delete operation.
-The snapshot delete operation checks if the deleted snapshot is currently running and if it does, the delete operation stops
+running snapshot was started by mistake, or takes unusually long, it can be stopped using the <<delete-snapshot-api,delete snapshot API>>.
+This operation checks whether the deleted snapshot is currently running. If it is, the delete snapshot operation stops
 that snapshot before deleting the snapshot data from the repository.
 
 [source,console]
 -----------------------------------
 DELETE /_snapshot/my_backup/snapshot_1
 -----------------------------------
-// TEST[continued]
 
 The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
-be canceled by deleting indices that are being restored. Please note that data for all deleted indices will be removed
+be canceled by deleting indices that are being restored. Data for all deleted indices will be removed
 from the cluster as a result of this operation.
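
For example (the index name `restored_index_1` is hypothetical), deleting an index that is being restored cancels its restore:

[source,console]
-----------------------------------
DELETE /restored_index_1
-----------------------------------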
 
-[float]
-=== Effect of cluster blocks on snapshot and restore
-
+[discrete]
+[[get-snapshot-cluster-blocks]]
+=== Effect of cluster blocks on snapshot and restore
 Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
-repositories require write global metadata access. The snapshot operation requires that all indices and their metadata as
-well as the global metadata were readable. The restore operation requires the global metadata to be writable, however
+repositories require global metadata write access. The snapshot operation requires that all indices, backing indices, and their metadata (including
+global metadata) are readable. The restore operation requires the global metadata to be writable. However,
 the index level blocks are ignored during restore because indices are essentially recreated during restore.
-Please note that a repository content is not part of the cluster and therefore cluster blocks don't affect internal
-repository operations such as listing or deleting snapshots from an already registered repository.
+The contents of a repository are not part of the cluster and therefore cluster blocks do not affect internal
+repository operations such as listing or deleting snapshots from an already registered repository.
