
Commit 6ff081b

Clarify searchable snapshot repository reliability (#93023)
To make it clear that repository snapshots should be available and reliable for
any mounted searchable snapshots.

Co-authored-by: David Turner <[email protected]>
1 parent 6acf632 commit 6ff081b

File tree: 2 files changed (+93 -76 lines changed)

2 files changed

+93
-76
lines changed

docs/reference/searchable-snapshots/index.asciidoc

Lines changed: 88 additions & 74 deletions
@@ -6,11 +6,11 @@ infrequently accessed and read-only data in a very cost-effective fashion. The
 <<cold-tier,cold>> and <<frozen-tier,frozen>> data tiers use {search-snaps} to
 reduce your storage and operating costs.

-{search-snaps-cap} eliminate the need for <<scalability,replica shards>>
-after rolling over from the hot tier, potentially halving the local storage needed to search
-your data. {search-snaps-cap} rely on the same snapshot mechanism you already
-use for backups and have minimal impact on your snapshot repository storage
-costs.
+{search-snaps-cap} eliminate the need for <<scalability,replica shards>> after
+rolling over from the hot tier, potentially halving the local storage needed to
+search your data. {search-snaps-cap} rely on the same snapshot mechanism you
+already use for backups and have minimal impact on your snapshot repository
+storage costs.

 [discrete]
 [[using-searchable-snapshots]]
@@ -40,9 +40,9 @@ To mount an index from a snapshot that contains multiple indices, we recommend
 creating a <<clone-snapshot-api, clone>> of the snapshot that contains only the
 index you want to search, and mounting the clone. You should not delete a
 snapshot if it has any mounted indices, so creating a clone enables you to
-manage the lifecycle of the backup snapshot independently of any
-{search-snaps}. If you use {ilm-init} to manage your {search-snaps} then it
-will automatically look after cloning the snapshot as needed.
+manage the lifecycle of the backup snapshot independently of any {search-snaps}.
+If you use {ilm-init} to manage your {search-snaps} then it will automatically
+look after cloning the snapshot as needed.

 You can control the allocation of the shards of {search-snap} indices using the
 same mechanisms as for regular indices. For example, you could use
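
For reference, the clone-then-mount workflow recommended in this hunk might look
like the following console sketch. The repository, snapshot, and index names are
placeholders, and the clone is mounted with the default `full_copy` storage
option:

[source,console]
----
# Clone only the index you want to search into a new snapshot
PUT /_snapshot/my_repository/my_snapshot/_clone/my_snapshot_clone
{
  "indices": "my-index"
}

# Mount the cloned snapshot as a searchable snapshot index
POST /_snapshot/my_repository/my_snapshot_clone/_mount?wait_for_completion=true
{
  "index": "my-index"
}
----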
@@ -84,9 +84,9 @@ Use any of the following repository types with searchable snapshots:
 * <<snapshots-read-only-repository,Read-only HTTP and HTTPS repositories>>

 You can also use alternative implementations of these repository types, for
-instance <<repository-s3-client,MinIO>>,
-as long as they are fully compatible. Use the <<repo-analysis-api>> API
-to analyze your repository's suitability for use with searchable snapshots.
+instance <<repository-s3-client,MinIO>>, as long as they are fully compatible.
+Use the <<repo-analysis-api>> API to analyze your repository's suitability for
+use with searchable snapshots.
 // end::searchable-snapshot-repo-types[]

 [discrete]
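
As an illustration, a repository analysis of a registered repository named
`my_repository` (a placeholder) could be run as follows; the blob count and blob
size here are examples rather than recommendations:

[source,console]
----
# Exercise the repository with test blobs and report any incompatibilities
POST /_snapshot/my_repository/_analyze?blob_count=100&max_blob_size=10mb
----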
@@ -122,40 +122,41 @@ performance characteristics and local storage footprints:

 [[fully-mounted]]
 Fully mounted index::
-Loads a full copy of the snapshotted index's shards onto node-local storage
-within the cluster. {ilm-init} uses this option in the `hot` and `cold` phases.
+Fully caches the snapshotted index's shards in the {es} cluster. {ilm-init} uses
+this option in the `hot` and `cold` phases.
 +
-Search performance for a fully mounted index is normally
-comparable to a regular index, since there is minimal need to access the
-snapshot repository. While recovery is ongoing, search performance may be
-slower than with a regular index because a search may need some data that has
-not yet been retrieved into the local copy. If that happens, {es} will eagerly
-retrieve the data needed to complete the search in parallel with the ongoing
-recovery. On-disk data is preserved across restarts, such that the node does
-not need to re-download data that is already stored on the node after a restart.
+Search performance for a fully mounted index is normally comparable to a regular
+index, since there is minimal need to access the snapshot repository. While
+recovery is ongoing, search performance may be slower than with a regular index
+because a search may need some data that has not yet been retrieved into the
+local cache. If that happens, {es} will eagerly retrieve the data needed to
+complete the search in parallel with the ongoing recovery. On-disk data is
+preserved across restarts, such that the node does not need to re-download data
+that is already stored on the node after a restart.
 +
 Indices managed by {ilm-init} are prefixed with `restored-` when fully mounted.

 [[partially-mounted]]
 Partially mounted index::
 Uses a local cache containing only recently searched parts of the snapshotted
-index's data. This cache has a fixed size and is shared across shards of partially
-mounted indices allocated on the same data node. {ilm-init} uses this option in the
-`frozen` phase.
+index's data. This cache has a fixed size and is shared across shards of
+partially mounted indices allocated on the same data node. {ilm-init} uses this
+option in the `frozen` phase.
 +
 If a search requires data that is not in the cache, {es} fetches the missing
 data from the snapshot repository. Searches that require these fetches are
-slower, but the fetched data is stored in the cache so that similar searches
-can be served more quickly in future. {es} will evict infrequently used data
-from the cache to free up space. The cache is cleared when a node is restarted.
+slower, but the fetched data is stored in the cache so that similar searches can
+be served more quickly in future. {es} will evict infrequently used data from
+the cache to free up space. The cache is cleared when a node is restarted.
 +
-Although slower than a fully mounted index or a regular index, a
-partially mounted index still returns search results quickly, even for
-large data sets, because the layout of data in the repository is heavily
-optimized for search. Many searches will need to retrieve only a small subset of
-the total shard data before returning results.
+Although slower than a fully mounted index or a regular index, a partially
+mounted index still returns search results quickly, even for large data sets,
+because the layout of data in the repository is heavily optimized for search.
+Many searches will need to retrieve only a small subset of the total shard data
+before returning results.
 +
-Indices managed by {ilm-init} are prefixed with `partial-` when partially mounted.
+Indices managed by {ilm-init} are prefixed with `partial-` when partially
+mounted.

 To partially mount an index, you must have one or more nodes with a shared cache
 available. By default, dedicated frozen data tier nodes (nodes with the
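
For comparison with the fully mounted sketch earlier, a partial mount can be
requested by setting the `storage` query parameter of the mount API to
`shared_cache`; the names below are again placeholders:

[source,console]
----
# Mount the snapshot partially, backed by the node's shared cache
POST /_snapshot/my_repository/my_snapshot_clone/_mount?storage=shared_cache&wait_for_completion=true
{
  "index": "my-index"
}
----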
@@ -166,16 +167,16 @@ headroom of 100GB.
 Using a dedicated frozen tier is highly recommended for production use. If you
 do not have a dedicated frozen tier, you must configure the
 `xpack.searchable.snapshot.shared_cache.size` setting to reserve space for the
-cache on one or more nodes. Partially mounted indices
-are only allocated to nodes that have a shared cache.
+cache on one or more nodes. Partially mounted indices are only allocated to
+nodes that have a shared cache.

 [[searchable-snapshots-shared-cache]]
 `xpack.searchable.snapshot.shared_cache.size`::
 (<<static-cluster-setting,Static>>)
-Disk space reserved for the shared cache of partially mounted indices.
-Accepts a percentage of total disk space or an absolute <<byte-units,byte
-value>>. Defaults to `90%` of total disk space for dedicated frozen data tier
-nodes. Otherwise defaults to `0b`.
+Disk space reserved for the shared cache of partially mounted indices. Accepts a
+percentage of total disk space or an absolute <<byte-units,byte value>>.
+Defaults to `90%` of total disk space for dedicated frozen data tier nodes.
+Otherwise defaults to `0b`.

 `xpack.searchable.snapshot.shared_cache.size.max_headroom`::
 (<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
@@ -189,8 +190,9 @@ To illustrate how these settings work in concert let us look at two examples
 when using the default values of the settings on a dedicated frozen node:

 * A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB
-is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB
-takes effect, and the result is therefore 3900 GB.
+is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB takes
+effect, and the result is therefore 3900 GB.
+
 * A 400 GB disk will result in a shared cache sized at 360 GB.

 You can configure the settings in `elasticsearch.yml`:
@@ -201,20 +203,20 @@ xpack.searchable.snapshot.shared_cache.size: 4TB
 ----

 IMPORTANT: You can only configure these settings on nodes with the
-<<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared
-cache can only have a single <<path-settings,data path>>.
+<<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared cache
+can only have a single <<path-settings,data path>>.

-{es} also uses a dedicated system index named `.snapshot-blob-cache` to speed
-up the recoveries of {search-snap} shards. This index is used as an additional
+{es} also uses a dedicated system index named `.snapshot-blob-cache` to speed up
+the recoveries of {search-snap} shards. This index is used as an additional
 caching layer on top of the partially or fully mounted data and contains the
 minimal required data to start the {search-snap} shards. {es} automatically
-deletes the documents that are no longer used in this index. This periodic
-clean up can be tuned using the following settings:
+deletes the documents that are no longer used in this index. This periodic clean
+up can be tuned using the following settings:

 `searchable_snapshots.blob_cache.periodic_cleanup.interval`::
 (<<dynamic-cluster-setting,Dynamic>>)
-The interval at which the periodic cleanup of the `.snapshot-blob-cache`
-index is scheduled. Defaults to every hour (`1h`).
+The interval at which the periodic cleanup of the `.snapshot-blob-cache` index
+is scheduled. Defaults to every hour (`1h`).

 `searchable_snapshots.blob_cache.periodic_cleanup.retention_period`::
 (<<dynamic-cluster-setting,Dynamic>>)
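
Since these blob cache cleanup settings are dynamic, they can be changed on a
running cluster through the cluster settings API; the interval below is purely
illustrative:

[source,console]
----
# Run the .snapshot-blob-cache cleanup every two hours instead of hourly
PUT _cluster/settings
{
  "persistent": {
    "searchable_snapshots.blob_cache.periodic_cleanup.interval": "2h"
  }
}
----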
@@ -237,10 +239,10 @@ index. Defaults to `10m`.
 === Reduce costs with {search-snaps}

 In most cases, {search-snaps} reduce the costs of running a cluster by removing
-the need for replica shards and for shard data to be copied between
-nodes. However, if it's particularly expensive to retrieve data from a snapshot
-repository in your environment, {search-snaps} may be more costly than
-regular indices. Ensure that the cost structure of your operating environment is
+the need for replica shards and for shard data to be copied between nodes.
+However, if it's particularly expensive to retrieve data from a snapshot
+repository in your environment, {search-snaps} may be more costly than regular
+indices. Ensure that the cost structure of your operating environment is
 compatible with {search-snaps} before using them.

 [discrete]
@@ -250,7 +252,7 @@ compatible with {search-snaps} before using them.
 For resiliency, a regular index requires multiple redundant copies of each shard
 across multiple nodes. If a node fails, {es} uses the redundancy to rebuild any
 lost shard copies. A {search-snap} index doesn't require replicas. If a node
-containing a {search-snap} index fails, {es} can rebuild the lost shard copy
+containing a {search-snap} index fails, {es} can rebuild the lost shard cache
 from the snapshot repository.

 Without replicas, rarely-accessed {search-snap} indices require far fewer
@@ -264,11 +266,11 @@ only partially-mounted {search-snap} indices, requires even fewer resources.
 ==== Data transfer costs

 When a shard of a regular index is moved between nodes, its contents are copied
-from another node in your cluster. In many environments, the costs of moving data
-between nodes are significant, especially if running in a Cloud environment with
-nodes in different zones. In contrast, when mounting a {search-snap} index or
-moving one of its shards, the data is always copied from the snapshot repository.
-This is typically much cheaper.
+from another node in your cluster. In many environments, the costs of moving
+data between nodes are significant, especially if running in a Cloud environment
+with nodes in different zones. In contrast, when mounting a {search-snap} index
+or moving one of its shards, the data is always copied from the snapshot
+repository. This is typically much cheaper.

 WARNING: Most cloud providers charge significant fees for data transferred
 between regions and for data transferred out of their platforms. You should only
@@ -281,37 +283,49 @@ multiple clusters and use <<modules-cross-cluster-search,{ccs}>> or
 [[back-up-restore-searchable-snapshots]]
 === Back up and restore {search-snaps}

-You can use <<snapshots-take-snapshot,regular snapshots>> to back up a
-cluster containing {search-snap} indices. When you restore a snapshot
-containing {search-snap} indices, these indices are restored as {search-snap}
-indices again.
+You can use <<snapshots-take-snapshot,regular snapshots>> to back up a cluster
+containing {search-snap} indices. When you restore a snapshot containing
+{search-snap} indices, these indices are restored as {search-snap} indices
+again.

 Before you restore a snapshot containing a {search-snap} index, you must first
 <<snapshots-register-repository,register the repository>> containing the
 original index snapshot. When restored, the {search-snap} index mounts the
-original index snapshot from its original repository. If wanted, you
-can use separate repositories for regular snapshots and {search-snaps}.
+original index snapshot from its original repository. If wanted, you can use
+separate repositories for regular snapshots and {search-snaps}.

 A snapshot of a {search-snap} index contains only a small amount of metadata
 which identifies its original index snapshot. It does not contain any data from
 the original index. The restore of a backup will fail to restore any
 {search-snap} indices whose original index snapshot is unavailable.

-Because {search-snap} indices are not regular indices, it is not possible to
-use a <<snapshots-source-only-repository,source-only repository>> to take
-snapshots of {search-snap} indices.
+Because {search-snap} indices are not regular indices, it is not possible to use
+a <<snapshots-source-only-repository,source-only repository>> to take snapshots
+of {search-snap} indices.

 [discrete]
 [[searchable-snapshots-reliability]]
 === Reliability of {search-snaps}

 The sole copy of the data in a {search-snap} index is the underlying snapshot,
-stored in the repository. If the repository fails or corrupts the contents of
-the snapshot then the data is lost. Although {es} may have made copies of the
-data onto local storage, these copies may be incomplete and cannot be used to
-recover any data after a repository failure. You must make sure that your
-repository is reliable and protects against corruption of your data while it is
-at rest in the repository.
+stored in the repository. For example:
+
+* You cannot unregister a repository while any of the searchable snapshots it
+contains are mounted in {es}. You also cannot delete a snapshot if any of its
+indices are mounted as a searchable snapshot in the same cluster.
+
+* If you mount indices from snapshots held in a repository to which a different
+cluster has write access then you must make sure that the other cluster does not
+delete these snapshots.
+
+* If you delete a snapshot while it is mounted as a searchable snapshot then the
+data is lost. Similarly, if the repository fails or corrupts the contents of the
+snapshot then the data is lost.
+
+* Although {es} may have cached the data onto local storage, these caches may be
+incomplete and cannot be used to recover any data after a repository failure.
+You must make sure that your repository is reliable and protects against
+corruption of your data while it is at rest in the repository.

 The blob storage offered by all major public cloud providers typically offers
 very good protection against data loss or corruption. If you manage your own
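
The repository registration step mentioned in the back up and restore section
above might, for a shared filesystem repository, look like the following sketch;
the repository name and location are placeholders:

[source,console]
----
# Register the repository holding the original index snapshots before restoring
PUT _snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
----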

docs/reference/snapshot-restore/index.asciidoc

Lines changed: 5 additions & 2 deletions
@@ -199,15 +199,18 @@ contents of the repository then future snapshot or restore operations may fail,
 reporting corruption or other data inconsistencies, or may appear to succeed
 having silently lost some of your data.

-You may however safely <<snapshots-repository-backup,restore a repository from
-a backup>> as long as
+You may however safely <<snapshots-repository-backup,restore a repository from a
+backup>> as long as

 . The repository is not registered with {es} while you are restoring its
 contents.

 . When you have finished restoring the repository its contents are exactly as
 they were when you took the backup.

+If you no longer need any of the snapshots in a repository, unregister it from
+{es} before deleting its contents from the underlying storage.
+
 Additionally, snapshots may contain security-sensitive information, which you
 may wish to <<cluster-state-snapshots,store in a dedicated repository>>.

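
The unregister step added here corresponds to deleting the repository from
Elasticsearch, which removes the registration without touching the snapshot
contents in the underlying storage; for example, with a placeholder repository
name:

[source,console]
----
# Unregister the repository; the blobs in the backing storage are left in place
DELETE _snapshot/my_repository
----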