Commit 04c792e

Author: Bob Furu
Merge pull request #27773 from bobfuru/BZ1846458
BZ1846458 - Remove note that ES scaling is not supported
2 parents 3e4ccbe + a1fd2ea commit 04c792e

2 files changed: 8 additions and 16 deletions

logging/config/cluster-logging-log-store.adoc

Lines changed: 5 additions & 12 deletions
@@ -11,22 +11,16 @@ You can make modifications to your log store, including:
 
 * storage for your Elasticsearch cluster
 * shard replication across data nodes in the cluster, from full replication to no replication
-* external access to Elasticsearch data
-
-[NOTE]
-====
-Scaling down Elasticsearch nodes is not supported. When scaling down, Elasticsearch pods can be accidentally deleted,
-possibly resulting in shards not being allocated and replica shards being lost.
-====
+* external access to Elasticsearch data
 
 //Following paragraph also in modules/cluster-logging-deploy-storage-considerations.adoc
 
-Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits,
-unless you specify otherwise in the `ClusterLogging` custom resource. The initial set of {product-title} nodes might not be large enough
+Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits,
+unless you specify otherwise in the `ClusterLogging` custom resource. The initial set of {product-title} nodes might not be large enough
 to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended
-or higher memory.
+or higher memory.
 
-Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production environments.
+Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
 
 
 // The following include statements pull in the module files that comprise
@@ -55,4 +49,3 @@ include::modules/cluster-logging-elasticsearch-persistent-storage-empty.adoc[lev
 include::modules/cluster-logging-manual-rollout-rolling.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-elasticsearch-exposing.adoc[leveloffset=+1]
-
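
For context on the memory paragraph retained above: the 16G memory requests and limits are set through the `ClusterLogging` custom resource that the text names. The following is a minimal sketch of where those values live, assuming the usual `logging.openshift.io/v1` ClusterLogging layout; the node count, storage, and redundancy values are illustrative placeholders, not recommendations.

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                 # illustrative node count
      resources:
        limits:
          memory: 16Gi             # the 16G limit described in the text
        requests:
          memory: 16Gi             # requests kept equal to limits
      storage:
        storageClassName: gp2      # assumption: substitute an available storage class
        size: 200G
      redundancyPolicy: SingleRedundancy
----

Setting `resources` below 16Gi is possible, as the text notes, but not recommended for production environments.
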
modules/cluster-logging-elasticsearch-scaledown.adoc

Lines changed: 3 additions & 4 deletions
@@ -5,12 +5,11 @@
 [id="cluster-logging-elasticsearch-scaledown_{context}"]
 = Scaling down Elasticsearch pods
 
-Reducing the number of Elasticsearch pod in your cluster can result in data loss or Elasticsearch performance degradation.
+Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation.
 
-If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to `green`, you can scale down by another pod.
+If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to `green`, you can scale down by another pod.
 
 [NOTE]
 ====
-If your Elasticsearch cluster is set to `ZeroRedundancy`, you should not scale down your Elasticsearch pods.
+If your Elasticsearch cluster is set to `ZeroRedundancy`, you should not scale down your Elasticsearch pods.
 ====
-
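
On the scale-down guidance in this module: the wait for the health status to return to `green` can be checked from any Elasticsearch pod before removing the next one. A rough sketch, assuming the `es_util` helper available in the OpenShift Elasticsearch pods and the `component=elasticsearch` pod label; the pod name is a placeholder to replace with one from your deployment.

[source,terminal]
----
$ oc -n openshift-logging get pods -l component=elasticsearch

$ oc -n openshift-logging exec <elasticsearch_pod> -c elasticsearch -- \
    es_util --query=_cluster/health?pretty=true
----

If `es_util` is not present in your version, querying the `_cluster/health` endpoint directly (for example with `curl` and the cluster's admin certificates) returns the same `status` field.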