Commit b7ff7ef

Tidying up more applies_to tags in the Troubleshooting section (#4473)
Part of #4117

## Generative AI disclosure

1. Did you use a generative AI (GenAI) tool to assist in creating this contribution?
   - [ ] Yes
   - [x] No
1 parent c4e5967 commit b7ff7ef

4 files changed: +355 -312 lines changed

troubleshoot/elasticsearch/increase-capacity-data-node.md

Lines changed: 124 additions & 51 deletions
@@ -4,65 +4,33 @@ mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-capacity-data-node.html
 applies_to:
   stack:
-  deployment:
-    eck:
-    ess:
-    ece:
-    self:
 products:
   - id: elasticsearch
 ---

 # Increase the disk capacity of data nodes [increase-capacity-data-node]

-:::::::{tab-set}
+Disk capacity pressures may cause index failures, unassigned shards, and cluster instability.

-::::::{tab-item} {{ech}}
-In order to increase the disk capacity of the data nodes in your cluster:
+{{es}} uses [disk-based shard allocation watermarks](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) to manage disk space on nodes, which can block allocation or indexing when nodes run low on disk space. Refer to [](/troubleshoot/elasticsearch/fix-watermark-errors.md) for additional details on how to address this situation.

-1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
-3. If autoscaling is available but not enabled, enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below:
+To increase the disk capacity of the data nodes in your cluster, complete these steps:

-:::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png
-:alt: Autoscaling banner
-:screenshot:
-:::
+1. [Estimate how much disk capacity you need](#estimate-required-capacity).
+1. [Increase the disk capacity](#increase-disk-capacity-of-data-nodes).

-Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page.

-:::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png
-:alt: Enabling autoscaling
-:screenshot:
-:::
+## Estimate the amount of required disk capacity [estimate-required-capacity]

-4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, check if autoscaling has reached its limits. You will be notified about this by the following banner:
+The following steps explain how to retrieve the current disk watermark configuration of the cluster and how to check the current disk usage on the nodes.

-:::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png
-:alt: Autoscaling banner
-:screenshot:
-:::
-
-or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below:
-
-:::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png
-:alt: Autoscaling limits reached
-:screenshot:
-:::
-
-If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page.
-::::::
-
-::::::{tab-item} Self-managed
-In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed.
-
-1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark:
+1. Retrieve the relevant disk thresholds that indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so only the high watermark is retrieved:

 ```console
 GET _cluster/settings?include_defaults&filter_path=*.cluster.routing.allocation.disk.watermark.high*
 ```

-The response will look like this:
+The response looks like this:

 ```console-result
 {
@@ -83,33 +51,138 @@ In order to increase the data node capacity in your cluster, you will need to ca
 }
 ```

-The above means that in order to resolve the disk shortage we need to either drop our disk usage below the 90% or have more than 150GB available, read more on how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).
+The above means that in order to resolve the disk shortage, disk usage must drop below 90%, or more than 150GB must be available. Read more about how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).

-2. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold.
+1. Find the current disk usage, which in turn indicates how much extra space is required. For simplicity, our example has one node, but you can apply the same approach to every node over the relevant threshold.

 ```console
 GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards
 ```

-The response will look like this:
+The response looks like this:

 ```console-result
 node                disk.percent disk.avail disk.total disk.used disk.indices shards
 instance-0000000000           91      4.6gb       35gb    31.1gb       29.9gb    111
 ```

-3. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible:
+In this scenario, the high watermark configuration indicates that the disk usage needs to drop below 90%, while the current disk usage is 91%.
+
+
+## Increase the disk capacity of your data nodes [increase-disk-capacity-of-data-nodes]
+
+Here are the most common ways to increase disk capacity:
+
+* You can expand the disk space of the existing nodes. This is typically achieved by replacing your nodes with higher-capacity ones.
+* You can add additional data nodes to the data tier that is short of disk space, increasing the overall capacity of that tier and potentially improving performance by distributing data and workload across more resources.
+
+When you add another data node, the cluster doesn't recover immediately and it might take some time until shards are relocated to the new node.
+You can check the progress with the following API call:
+
+```console
+GET /_cat/shards?v&h=state,node&s=state
+```
+
+If the response shows shards in the `RELOCATING` state, shards are still moving. Wait until all shards turn to `STARTED`.
+
+:::::::{applies-switch}
+
+::::::{applies-item} { ess:, ece: }
+
+:::{warning}
+:applies_to: ece:
+In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md).
+:::
+
+To increase the disk capacity of the data nodes in your cluster:

-* to add an extra data node to the cluster (this requires that you have more than one shard in your cluster), or
-* to extend the disk space of the current node by approximately 20% to allow this node to drop to 70%. This will give enough space to this node to not run out of space soon.
+1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or the ECE Cloud UI.
+1. On the home page, find your deployment and select **Manage**.
+1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**.
+1. If autoscaling is successful, the cluster returns to a `healthy` status.
+If the cluster is still out of disk, check if autoscaling has reached its set limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update).

-4. In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here:
+You can also add more capacity by adding more nodes to your cluster and targeting the data tier that may be short of disk. For more information, refer to [](/troubleshoot/elasticsearch/add-tier.md).
+
+::::::
+
+::::::{applies-item} { self: }
+To increase the data node capacity in your cluster, you can [add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to the cluster, or increase the disk capacity of existing nodes. Disk expansion procedures depend on your operating system and storage infrastructure and are outside the scope of Elastic support. In practice, this is often achieved by [removing a node from the cluster](https://www.elastic.co/search-labs/blog/elasticsearch-remove-node) and reinstalling it with a larger disk.
+
+::::::
+
+::::::{applies-item} { eck: }
+To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes.
+
+**Option 1: Add more data nodes**
+
+1. Update the `count` field in your data node NodeSet to add more nodes:
+
+```yaml subs=true
+apiVersion: elasticsearch.k8s.elastic.co/v1
+kind: Elasticsearch
+metadata:
+  name: quickstart
+spec:
+  version: {{version.stack}}
+  nodeSets:
+  - name: data-nodes
+    count: 5 # Increase from previous count
+    config:
+      node.roles: ["data"]
+    volumeClaimTemplates:
+    - metadata:
+        name: elasticsearch-data
+      spec:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 100Gi
+```
+
+1. Apply the changes:
+
+```sh
+kubectl apply -f your-elasticsearch-manifest.yaml
+```
+
+ECK automatically creates the new nodes, and {{es}} relocates shards to balance the load. You can monitor the progress using:

 ```console
 GET /_cat/shards?v&h=state,node&s=state
 ```

-If in the response the shards' state is `RELOCATING`, it means that shards are still moving. Wait until all shards turn to `STARTED` or until the health disk indicator turns to `green`.
-::::::
+**Option 2: Increase storage size of existing nodes**
+
+1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`:
+
+```yaml subs=true
+apiVersion: elasticsearch.k8s.elastic.co/v1
+kind: Elasticsearch
+metadata:
+  name: quickstart
+spec:
+  version: {{version.stack}}
+  nodeSets:
+  - name: data-nodes
+    count: 3
+    config:
+      node.roles: ["data"]
+    volumeClaimTemplates:
+    - metadata:
+        name: elasticsearch-data
+      spec:
+        accessModes:
+        - ReadWriteOnce
+        resources:
+          requests:
+            storage: 200Gi # Increased from previous size
+```
+
+1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem.

-:::::::
+For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md).
+
+::::::
+:::::::
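The updated introduction leans on the disk-based allocation watermarks and the linked watermark-errors page. As a minimal companion check that is not part of the diff above, and assuming {{es}} 8.7 or later where the health API is available, the `disk` indicator reports whether any node is currently degraded by disk pressure:

```console
GET _health_report/disk
```

A `yellow` or `red` status in the response is a strong signal that the capacity steps on this page apply.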
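For the estimate section, a per-node view can complement the `_cat/allocation` call when several nodes sit near the watermark. This is an optional sketch rather than part of the committed text:

```console
GET _cat/nodes?v&h=name,node.role,disk.total,disk.used,disk.avail,disk.used_percent&s=disk.used_percent:desc
```

Nodes at the top of the list are closest to the high watermark and need extra capacity first.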
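Both the ess/ece steps and the ECK options end with waiting for the cluster to settle after capacity is added. As an illustrative sketch, the cluster health response can be filtered down to the fields that matter for that wait:

```console
GET _cluster/health?filter_path=status,relocating_shards,unassigned_shards
```

Once `relocating_shards` and `unassigned_shards` reach `0` and `status` returns to `green`, the cluster has finished rebalancing onto the new capacity.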
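If `GET /_cat/shards` shows shards stuck in `RELOCATING` for a long time, a more detailed view of the active copies can help. This is an optional addition, not taken from the page above:

```console
GET _cat/recovery?v&active_only=true&h=index,shard,source_node,target_node,stage,bytes_percent
```

The `bytes_percent` column shows how far each active shard copy has progressed toward the new node.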
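After expanding a disk on a self-managed node or resizing a persistent volume claim on ECK, it is worth confirming that {{es}} actually reports the larger filesystem. As a sketch, the `_cat/allocation` columns used earlier can be narrowed to the totals:

```console
GET _cat/allocation?v&h=node,disk.total,disk.avail,disk.percent
```

If `disk.total` still shows the old size, the filesystem has not been resized yet, for example because the Pods have not been recreated.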
