Commit 9ea8ea3

[DOCS] Document dynamic cluster-lvl shard alloc settings (#61338) (#61736)
1 parent 17be307 commit 9ea8ea3

6 files changed: +42 −45 lines

docs/reference/modules/cluster.asciidoc

Lines changed: 0 additions & 4 deletions
@@ -25,10 +25,6 @@ There are a number of settings available to control the shard allocation process
 
 Besides these, there are a few other <<misc-cluster-settings,miscellaneous cluster-level settings>>.
 
-All of these settings are _dynamic_ and can be
-updated on a live cluster with the
-<<cluster-update-settings,cluster-update-settings>> API.
-
 include::cluster/shards_allocation.asciidoc[]
 
 include::cluster/disk_allocator.asciidoc[]

docs/reference/modules/cluster/allocation_awareness.asciidoc

Lines changed: 4 additions & 7 deletions
@@ -8,14 +8,11 @@ in the same zone, it can distribute the primary shard and its replica shards to
 minimise the risk of losing all shard copies in the event of a failure.
 
 When shard allocation awareness is enabled with the
+<<dynamic-cluster-setting,dynamic>>
 `cluster.routing.allocation.awareness.attributes` setting, shards are only
-allocated to nodes that have values set for the specified awareness
-attributes. If you use multiple awareness attributes, {es} considers
-each attribute separately when allocating shards.
-
-The allocation awareness settings can be configured in
-`elasticsearch.yml` and updated dynamically with the
-<<cluster-update-settings,cluster-update-settings>> API.
+allocated to nodes that have values set for the specified awareness attributes.
+If you use multiple awareness attributes, {es} considers each attribute
+separately when allocating shards.
 
 By default {es} uses <<search-adaptive-replica,adaptive replica selection>>
 to route search or GET requests. However, with the presence of allocation awareness
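
For context, setting the now-documented dynamic awareness attribute on a live cluster looks like the following sketch (the `zone` attribute name is illustrative and not part of this commit):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}
----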

docs/reference/modules/cluster/allocation_filtering.asciidoc

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@ and <<shard-allocation-awareness, allocation awareness>>.
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
 
-The `cluster.routing.allocation` settings are dynamic, enabling live indices to
+The `cluster.routing.allocation` settings are <<dynamic-cluster-setting,dynamic>>, enabling live indices to
 be moved from one set of nodes to another. Shards are only relocated if it is
 possible to do so without breaking another routing constraint, such as never
 allocating a primary and replica shard on the same node.
@@ -32,17 +32,17 @@ PUT _cluster/settings
 ===== Cluster routing settings
 
 `cluster.routing.allocation.include.{attribute}`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Allocate shards to a node whose `{attribute}` has at least one of the
 comma-separated values.
 
 `cluster.routing.allocation.require.{attribute}`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Only allocate shards to a node whose `{attribute}` has _all_ of the
 comma-separated values.
 
 `cluster.routing.allocation.exclude.{attribute}`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Do not allocate shards to a node whose `{attribute}` has _any_ of the
 comma-separated values.
 
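
Because these filter settings are dynamic, a node can be drained of shards while the cluster is live; a sketch (the IP address is illustrative, not taken from this commit):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1"
  }
}
----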

docs/reference/modules/cluster/disk_allocator.asciidoc

Lines changed: 8 additions & 8 deletions
@@ -6,36 +6,35 @@
 whether to allocate new shards to that node or to actively relocate shards away
 from that node.
 
-Below are the settings that can be configured in the `elasticsearch.yml` config
-file or updated dynamically on a live cluster with the
-<<cluster-update-settings,cluster-update-settings>> API:
+You can use the following settings to control disk-based allocation:
 
 `cluster.routing.allocation.disk.threshold_enabled`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Defaults to `true`. Set to `false` to disable the disk allocation decider.
 
 [[cluster-routing-disk-threshold]]
 // tag::cluster-routing-disk-threshold-tag[]
 `cluster.routing.allocation.disk.threshold_enabled` {ess-icon}::
-+
+(<<dynamic-cluster-setting,Dynamic>>)
 Defaults to `true`. Set to `false` to disable the disk allocation decider.
 // end::cluster-routing-disk-threshold-tag[]
 
 [[cluster-routing-watermark-low]]
 // tag::cluster-routing-watermark-low-tag[]
 `cluster.routing.allocation.disk.watermark.low` {ess-icon}::
-+
+(<<dynamic-cluster-setting,Dynamic>>)
 Controls the low watermark for disk usage. It defaults to `85%`, meaning that {es} will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like `500mb`) to prevent {es} from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly-created indices but will prevent their replicas from being allocated.
 // end::cluster-routing-watermark-low-tag[]
 
 [[cluster-routing-watermark-high]]
 // tag::cluster-routing-watermark-high-tag[]
 `cluster.routing.allocation.disk.watermark.high` {ess-icon}::
-+
+(<<dynamic-cluster-setting,Dynamic>>)
 Controls the high watermark. It defaults to `90%`, meaning that {es} will attempt to relocate shards away from a node whose disk usage is above 90%. It can also be set to an absolute byte value (similarly to the low watermark) to relocate shards away from a node if it has less than the specified amount of free space. This setting affects the allocation of all shards, whether previously allocated or not.
 // end::cluster-routing-watermark-high-tag[]
 
 `cluster.routing.allocation.disk.watermark.enable_for_single_data_node`::
+(<<static-cluster-setting,Static>>)
 For a single data node, the default is to disregard disk watermarks when
 making an allocation decision. This is deprecated behavior and will be
 changed in 8.0. This setting can be set to `true` to enable the
@@ -46,6 +45,7 @@ Controls the high watermark. It defaults to `90%`, meaning that {es} will attemp
 `cluster.routing.allocation.disk.watermark.flood_stage` {ess-icon}::
 +
 --
+(<<dynamic-cluster-setting,Dynamic>>)
 Controls the flood stage watermark, which defaults to 95%. {es} enforces a read-only index block (`index.blocks.read_only_allow_delete`) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block is automatically released when the disk utilization falls below the high watermark.
 
 NOTE: You cannot mix the usage of percentage values and byte values within
@@ -65,7 +65,7 @@ PUT /my-index-000001/_settings
 // end::cluster-routing-flood-stage-tag[]
 
 `cluster.info.update.interval`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 How often {es} should check on disk usage for each node in the
 cluster. Defaults to `30s`.
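
Since the watermark settings above are dynamic, they can be adjusted together on a live cluster. A sketch restating the documented defaults (the values shown are simply the defaults quoted in the text above):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
----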

docs/reference/modules/cluster/misc.asciidoc

Lines changed: 11 additions & 11 deletions
@@ -4,16 +4,16 @@
 [[cluster-read-only]]
 ===== Metadata
 
-An entire cluster may be set to read-only with the following _dynamic_ setting:
+An entire cluster may be set to read-only with the following setting:
 
 `cluster.blocks.read_only`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Make the whole cluster read only (indices do not accept write
 operations), metadata is not allowed to be modified (create or delete
 indices).
 
 `cluster.blocks.read_only_allow_delete`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Identical to `cluster.blocks.read_only` but allows to delete indices
 to free up resources.

@@ -51,10 +51,10 @@ including unassigned shards.
 For example, an open index with 5 primary shards and 2 replicas counts as 15 shards.
 Closed indices do not contribute to the shard count.
 
-You can dynamically adjust the cluster shard limit with the following property:
+You can dynamically adjust the cluster shard limit with the following setting:
 
 `cluster.max_shards_per_node`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Controls the number of shards allowed in the cluster per data node.
 
 With the default setting, a 3-node cluster allows 3,000 shards total, across all open indexes.
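
Because `cluster.max_shards_per_node` is dynamic, the limit can be raised on a running cluster; a sketch (the value `1500` is illustrative, not from this commit):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
----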
@@ -95,10 +95,10 @@ metadata will be viewable by anyone with access to the
 
 The cluster state maintains index tombstones to explicitly denote indices that
 have been deleted. The number of tombstones maintained in the cluster state is
-controlled by the following property, which cannot be updated dynamically:
+controlled by the following setting:
 
 `cluster.indices.tombstones.size`::
-
+(<<static-cluster-setting,Static>>)
 Index tombstones prevent nodes that are not part of the cluster when a delete
 occurs from joining the cluster and reimporting the index as though the delete
 was never issued. To keep the cluster state from growing huge we only keep the
@@ -114,7 +114,7 @@ this situation.
 [[cluster-logger]]
 ===== Logger
 
-The settings which control logging can be updated dynamically with the
+The settings which control logging can be updated <<dynamic-cluster-setting,dynamically>> with the
 `logger.` prefix. For instance, to increase the logging level of the
 `indices.recovery` module to `DEBUG`, issue this request:
@@ -139,12 +139,12 @@ tasks to be revived after a full cluster restart.
 Every time a persistent task is created, the master node takes care of
 assigning the task to a node of the cluster, and the assigned node will then
 pick up the task and execute it locally. The process of assigning persistent
-tasks to nodes is controlled by the following properties, which can be updated
-dynamically:
+tasks to nodes is controlled by the following settings:
 
 `cluster.persistent_tasks.allocation.enable`::
 +
 --
+(<<dynamic-cluster-setting,Dynamic>>)
 Enable or disable allocation for persistent tasks:
 
 * `all` - (default) Allows persistent tasks to be assigned to nodes
@@ -156,7 +156,7 @@ left the cluster, for example), are impacted by this setting.
 --
 
 `cluster.persistent_tasks.allocation.recheck_interval`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 The master node will automatically check whether persistent tasks need to
 be assigned when the cluster state changes significantly. However, there
 may be other factors, such as memory usage, that affect whether persistent

docs/reference/modules/cluster/shards_allocation.asciidoc

Lines changed: 15 additions & 11 deletions
@@ -1,12 +1,13 @@
 [[cluster-shard-allocation-settings]]
 ==== Cluster-level shard allocation settings
 
-The following _dynamic_ settings may be used to control shard allocation and recovery:
+You can use the following settings to control shard allocation and recovery:
 
 [[cluster-routing-allocation-enable]]
 `cluster.routing.allocation.enable`::
 +
 --
+(<<dynamic-cluster-setting,Dynamic>>)
 Enable or disable allocation for specific kinds of shards:
 
 * `all` - (default) Allows shard allocation for all kinds of shards.
@@ -22,31 +23,31 @@ one of the active allocation ids in the cluster state.
 --
 
 `cluster.routing.allocation.node_concurrent_incoming_recoveries`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 How many concurrent incoming shard recoveries are allowed to happen on a node. Incoming recoveries are the recoveries
 where the target shard (most likely the replica unless a shard is relocating) is allocated on the node. Defaults to `2`.
 
 `cluster.routing.allocation.node_concurrent_outgoing_recoveries`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 How many concurrent outgoing shard recoveries are allowed to happen on a node. Outgoing recoveries are the recoveries
 where the source shard (most likely the primary unless a shard is relocating) is allocated on the node. Defaults to `2`.
 
 `cluster.routing.allocation.node_concurrent_recoveries`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 A shortcut to set both `cluster.routing.allocation.node_concurrent_incoming_recoveries` and
 `cluster.routing.allocation.node_concurrent_outgoing_recoveries`.
 
 
 `cluster.routing.allocation.node_initial_primaries_recoveries`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 While the recovery of replicas happens over the network, the recovery of
 an unassigned primary after node restart uses data from the local disk.
 These should be fast so more initial primary recoveries can happen in
 parallel on the same node. Defaults to `4`.
 
 
 `cluster.routing.allocation.same_shard.host`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Allows to perform a check to prevent allocation of multiple instances of
 the same shard on a single host, based on host name and host address.
 Defaults to `false`, meaning that no check is performed by default. This
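
As these recovery settings are dynamic, a change such as raising the combined recovery limit can be applied live; a sketch (the value `4` is illustrative, not part of this commit):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}
----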
@@ -55,13 +56,14 @@ one of the active allocation ids in the cluster state.
 [[shards-rebalancing-settings]]
 ==== Shard rebalancing settings
 
-The following _dynamic_ settings may be used to control the rebalancing of
-shards across the cluster:
+You can use the following settings to control the rebalancing of shards across
+the cluster:
 
 
 `cluster.routing.rebalance.enable`::
 +
 --
+(<<dynamic-cluster-setting,Dynamic>>)
 Enable or disable rebalancing for specific kinds of shards:
 
 * `all` - (default) Allows shard balancing for all kinds of shards.
@@ -74,6 +76,7 @@ Enable or disable rebalancing for specific kinds of shards:
 `cluster.routing.allocation.allow_rebalance`::
 +
 --
+(<<dynamic-cluster-setting,Dynamic>>)
 Specify when shard rebalancing is allowed:
 
 
@@ -83,7 +86,7 @@ Specify when shard rebalancing is allowed:
 --
 
 `cluster.routing.allocation.cluster_concurrent_rebalance`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Allow to control how many concurrent shard rebalances are
 allowed cluster wide. Defaults to `2`. Note that this setting
 only controls the number of concurrent shard relocations due
@@ -99,19 +102,20 @@ shard. The cluster is balanced when no allowed rebalancing operation can bring
 of any node closer to the weight of any other node by more than the `balance.threshold`.
 
 `cluster.routing.allocation.balance.shard`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Defines the weight factor for the total number of shards allocated on a node
 (float). Defaults to `0.45f`. Raising this raises the tendency to
 equalize the number of shards across all nodes in the cluster.
 
 `cluster.routing.allocation.balance.index`::
-
+(<<dynamic-cluster-setting,Dynamic>>)
 Defines the weight factor for the number of shards per index allocated
 on a specific node (float). Defaults to `0.55f`. Raising this raises the
 tendency to equalize the number of shards per index across all nodes in
 the cluster.
 
 `cluster.routing.allocation.balance.threshold`::
+(<<dynamic-cluster-setting,Dynamic>>)
 Minimal optimization value of operations that should be performed (non
 negative float). Defaults to `1.0f`. Raising this will cause the cluster
 to be less aggressive about optimizing the shard balance.
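
Since the rebalancing settings above are dynamic, rebalancing can, for example, be temporarily switched off during maintenance and re-enabled afterwards; a sketch (illustrative, not part of this commit):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": "none"
  }
}
----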

0 commit comments