docs/reference/snapshot-restore/apis/get-snapshot-status-api.asciidoc
1 addition, 3 deletions
@@ -88,9 +88,7 @@ Use the get snapshot status API to retrieve detailed information about snapshots

If you specify both the repository name and snapshot, the request retrieves detailed status information for the given snapshot, even if not currently running.

-WARNING: Using this API to return any status results other than the currently running snapshots (`_current`) can be very expensive. Each request to retrieve snapshot status results in file reads from every shard in a snapshot, for each snapshot.
-
-For example, if you have 100 snapshots with 1,000 shards each, the API request will result in 100,000 file reads (100 snapshots * 1,000 shards). Depending on the latency of your file storage, the request can take extremely long to retrieve results.
+
<titleabbrev>Monitor snapshot and restore</titleabbrev>
++++

-There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both
-operations support `wait_for_completion` parameter that would block client until the operation is completed. This is
-the simplest method that can be used to get notified about operation completion.
+Use the <<get-snapshot-api,get snapshot API>> or the
+<<get-snapshot-status-api,get snapshot status API>> to monitor the
+progress of snapshot operations. Both APIs support the
+`wait_for_completion` parameter that blocks the client until the
+operation finishes, which is the simplest method of being notified
+about operation completion.
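For example, a blocking snapshot request might look like the following sketch (the `snapshot_2` name is illustrative; the repository is the `my_backup` repository registered in the setup below). Without the parameter, the request returns immediately and the snapshot proceeds in the background:

[source,console]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
-----------------------------------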

////
[source,console]
@@ -20,70 +22,155 @@ PUT /_snapshot/my_backup
}
}
+
+PUT /_snapshot/my_fs_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_other_backup_location"
+  }
+}
+
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+
+PUT /_snapshot/my_backup/some_other_snapshot?wait_for_completion=true
-----------------------------------
// TESTSETUP

////

-The snapshot operation can be also monitored by periodic calls to the snapshot info:
+Use the `_current` parameter to retrieve all currently running
+snapshots in the cluster:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_current
+-----------------------------------
+
+Including a snapshot name in the request retrieves information about a single snapshot:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------

-Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
-executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait
-for available resources before returning the result. On very large shards the wait time can be significant.
+This request retrieves basic information about the snapshot, including start and end time, version of
+{es} that created the snapshot, the list of included indices, the current state of the
+snapshot and the list of failures that occurred during the snapshot.

-To get more immediate and complete information about snapshots the snapshot status command can be used instead:
+Similar to repositories, you can retrieve information about multiple snapshots in a single request, and wildcards are supported:

[source,console]
-----------------------------------
-GET /_snapshot/my_backup/snapshot_1/_status
+GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
+-----------------------------------
+
+Separate repository names with commas or use wildcards to retrieve snapshots from multiple repositories:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_all
+GET /_snapshot/my_backup,my_fs_backup
+GET /_snapshot/my*
+-----------------------------------
+
+Add the `_all` parameter to the request to list all snapshots currently stored in the repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_all
-----------------------------------
-// TEST[continued]

-While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns
+This request fails if some of the snapshots are unavailable. Use the boolean parameter `ignore_unavailable` to
+return all snapshots that are currently available.
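As a sketch, such a request could combine the parameter with the listing above (assuming the `my_backup` repository from the earlier examples):

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all?ignore_unavailable=true
-----------------------------------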
+
+Getting all snapshots in the repository can be costly on cloud-based repositories,
+both from a cost and performance perspective. If the only information required is
+the snapshot names or UUIDs in the repository and the indices in each snapshot, then
+the optional boolean parameter `verbose` can be set to `false` to execute a more
+performant and cost-effective retrieval of the snapshots in the repository.
+
+NOTE: Setting `verbose` to `false` omits additional information
+about the snapshot, such as metadata, start and end time, the number of shards included in the snapshot, and error messages. The default value of the `verbose` parameter is `true`.
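For instance, a minimal listing that skips the per-snapshot details could look like this:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all?verbose=false
-----------------------------------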
+
+[discrete]
+[[get-snapshot-detailed-status]]
+=== Retrieving snapshot status
+
+To retrieve more detailed information about snapshots, use the <<get-snapshot-status-api,get snapshot status API>>. While the get snapshot request returns only basic information about the snapshot in progress, the get snapshot status request returns a
complete breakdown of the current state for each shard participating in the snapshot.

-The restore process piggybacks on the standard recovery mechanism of the Elasticsearch. As a result, standard recovery
-monitoring services can be used to monitor the state of restore. When restore operation is executed the cluster
-typically goes into `red` state. It happens because the restore operation starts with "recovering" primary shards of the
-restored indices. During this operation the primary shards become unavailable which manifests itself in the `red` cluster
-state. Once recovery of primary shards is completed Elasticsearch is switching to standard replication process that
-creates the required number of replicas at this moment cluster switches to the `yellow` state. Once all required replicas
-are created, the cluster switches to the `green` states.
+// tag::get-snapshot-status-warning[]
+[WARNING]
+====
+Using the get snapshot status API to return any status results other than the currently running snapshots (`_current`) can be very expensive. Each request to retrieve snapshot status results in file reads from every shard in a snapshot, for each snapshot. Such requests are taxing to machine resources and can also incur high processing costs when running in the cloud.
+
+For example, if you have 100 snapshots with 1,000 shards each, the API request will result in 100,000 file reads (100 snapshots * 1,000 shards). Depending on the latency of your file storage, the request can take extremely long to retrieve results.
+====
+// end::get-snapshot-status-warning[]
+
+The following request retrieves all currently running snapshots with
+detailed status information:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_status
+-----------------------------------
+
+By specifying a repository name, it's possible
+to limit the results to a particular repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_status
+-----------------------------------
+
+If both repository name and snapshot name are specified, the request
+returns detailed status information for the given snapshot, even
+if not currently running:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1/_status
+-----------------------------------
+
+[discrete]
+=== Monitoring restore operations
+
+The restore process piggybacks on the standard recovery mechanism of
+{es}. As a result, standard recovery monitoring services can be used
+to monitor the state of restore. When the restore operation starts, the
+cluster typically goes into the `yellow` state because the restore operation works
+by recovering primary shards of the restored indices. After the recovery of the
+primary shards is completed, {es} switches to the standard replication
+process that creates the required number of replicas. When all required
+replicas are created, the cluster switches to the `green` state.

The cluster health operation provides only a high-level status of the restore process. It's possible to get more
detailed insight into the current state of the recovery process by using the <<indices-recovery, index recovery>> and
<<cat-recovery, cat recovery>> APIs.
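For example, assuming an index named `restored_index_1` is being restored (the index name is illustrative), the recovery state could be inspected with requests such as:

[source,console]
-----------------------------------
GET /_cluster/health
GET /restored_index_1/_recovery
GET /_cat/recovery?v=true
-----------------------------------

The cluster health call shows the overall `yellow`/`green` transition, while the recovery APIs report per-shard progress.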

-[float]
+[discrete]
+[[get-snapshot-stop-snapshot]]
=== Stop snapshot and restore operations
-

The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
-running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshotdelete operation.
-The snapshot delete operation checks if the deleted snapshot is currently running and if it does, the delete operation stops
+running snapshot was started by mistake, or takes unusually long, it can be stopped using the <<delete-snapshot-api,delete snapshot API>>.
+This operation checks whether the deleted snapshot is currently running. If it is, the delete snapshot operation stops
that snapshot before deleting the snapshot data from the repository.

[source,console]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_1
-----------------------------------
-// TEST[continued]

The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
-be canceled by deleting indices that are being restored. Please note that data for all deleted indices will be removed
+be canceled by deleting indices that are being restored. Data for all deleted indices will be removed
from the cluster as a result of this operation.
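For example, if a restore of a hypothetical `restored_index_1` index needs to be aborted, deleting that index cancels its recovery and removes its data:

[source,console]
-----------------------------------
DELETE /restored_index_1
-----------------------------------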

-[float]
-=== Effect of clusterblocks on snapshot and restore
-
+[discrete]
+[[get-snapshot-cluster-blocks]]
+=== Effect of cluster blocks on snapshot and restore
Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
-repositories require write global metadata access. The snapshot operation requires that all indicesand their metadata as
-well as the global metadata were readable. The restore operation requires the global metadata to be writable, however
+repositories require global metadata write access. The snapshot operation requires that all indices, backing indices, and their metadata (including
+global metadata) are readable. The restore operation requires the global metadata to be writable. However,
the index level blocks are ignored during restore because indices are essentially recreated during restore.
-Please note that a repository content is not part of the cluster and therefore cluster blocks don't affect internal
-repository operations such as listing or deleting snapshots from an already registered repository.
+The repository contents are not part of the cluster and therefore cluster blocks do not affect internal
+repository operations such as listing or deleting snapshots from an already registered repository.