Added additional entries for troubleshooting an unhealthy cluster.
Reordered "Re-enable shard allocation" because it is not as common as other causes.
Added additional causes of yellow statuses.
Changed the watermark command to include both the high and low watermarks so users can make their cluster operate once again.
{es} will never assign a replica to the same node as the primary shard. If you only have one node, it is expected for your cluster to report yellow status. If you prefer it to be green, change the <<dynamic-index-number-of-replicas,number_of_replicas>> setting on each index to 0.
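For example, the replica count can be reduced with a settings update like the following (the index name `my-index` is a placeholder):

[source,console]
----
PUT my-index/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}
----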
[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----
See https://www.youtube.com/watch?v=MiKKUdZvwnI[this video] for a walkthrough of troubleshooting "no allocations are allowed".
Similarly, if the number of replicas equals or exceeds the number of nodes, it will not be possible to allocate one or more of the shards for the same reason.
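To see why a particular shard remains unassigned, you can query the cluster allocation explain API. A sketch, assuming a hypothetical index named `my-index`:

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false
}
----

The response explains which allocation deciders prevented the shard from being assigned to each node.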
[discrete]
[[fix-cluster-status-recover-nodes]]
===== Recover lost nodes
Shards often become unassigned when a data node leaves the cluster. This can occur for several reasons:

* If you manually restart a node, it will temporarily cause an unhealthy cluster until the node has recovered.

* If a node is overloaded or has stopped operating for any reason, it will temporarily cause an unhealthy cluster. Nodes may disconnect because of prolonged garbage collection (GC) pauses, which can result from "out of memory" errors or high memory usage due to intensive search operations. See <<fix-cluster-status-jvm,Reduce JVM memory pressure>> for more JVM-related issues.

* If nodes cannot reliably communicate due to networking issues, they may lose contact with one another. This can cause shards to become out of sync. You can often identify this issue by checking the logs for repeated messages about nodes leaving and rejoining the cluster.
After you resolve the issue and recover the node, it will rejoin the cluster. {es} will then automatically allocate any unassigned shards.

You can monitor this process by <<cluster-health,checking your cluster health>>. The number of unallocated shards progressively decreases until green status is reached.
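For instance, the cluster health API can block until the cluster reaches green status (the timeout value here is illustrative):

[source,console]
----
GET _cluster/health?wait_for_status=green&timeout=30s
----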
To avoid wasting resources on temporary issues, {es} <<delayed-allocation,delays allocation>> by one minute by default. If you've recovered a node and don't want to wait for the delay period, you can call the <<cluster-reroute,cluster reroute API>> to start the allocation process.
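A sketch of such a call; no request body is required to trigger allocation:

[source,console]
----
POST _cluster/reroute
----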
replica, it remains unassigned. To fix this, you can:
* Change the `index.number_of_replicas` index setting to reduce the number of replicas for each primary shard. We recommend keeping at least one replica per primary for high availability.

[source,console]
----
PUT _settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]
[discrete]
[[fix-cluster-status-disk-space]]
===== Free up or increase disk space
If your nodes are running low on disk space, you have a few options:
* Upgrade your nodes to increase disk space.

* Add more nodes to the cluster.

* Delete unneeded indices to free up space. If you use {ilm-init}, you can update your lifecycle policy to use <<ilm-searchable-snapshot,searchable snapshots>> or add a delete phase. If you no longer need to search the data, you can use a snapshot to store it off-cluster.
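If writes have been blocked because a node exceeded a disk watermark, temporarily raising the low and high watermarks can let the cluster operate again while you free up space. A sketch with illustrative percentage values:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}
----

Remember to reset these settings to `null` once space has been recovered so the defaults apply again.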