Merged (changes from 1 commit)
6 changes: 3 additions & 3 deletions deploy-manage/autoscaling/autoscaling-in-eck.md
@@ -18,7 +18,7 @@ Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license.
::::


- ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to automatically adjust the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.
+ ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to automatically adjust the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. ECK scales Elasticsearch data and machine learning tiers exclusively by scaling storage. CPU and memory are scaled *relative* to the storage resource min/max settings, not independently.
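The min/max bounds mentioned above are declared per tier in an autoscaling policy. As a minimal sketch (assuming the `ElasticsearchAutoscaler` resource described in the ECK documentation; names and values here are illustrative):

```yaml
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  # Points at an existing Elasticsearch resource (name is illustrative).
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data-ingest
      roles: ["data", "ingest"]
      resources:
        # The operator scales out between these node counts.
        nodeCount:
          min: 3
          max: 8
        # Storage drives vertical scaling; CPU and memory follow it
        # proportionally, bounded by their own min/max.
        storage:
          min: 64Gi
          max: 512Gi
        cpu:
          min: 2
          max: 8
        memory:
          min: 2Gi
          max: 16Gi
```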
Contributor (@barkbay) commented:

> ECK scales Elasticsearch data and machine learning tiers exclusively by scaling storage

I don't think this is true, at least for the ML and frozen tiers for which ES returns memory requirements: https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-deciders.html

Contributor Author replied:

Thank you @barkbay. I have read through the docs, and I have updated appropriately.

Contributor (@barkbay) replied:

> ECK scales Elasticsearch data tiers exclusively by scaling storage.

Maybe I'm missing something but I think this is still not true. The frozen tier is scaled based on both storage and memory requirements:

> Frozen shards decider: Estimates required memory capacity based on the number of partially mounted shards. Available for policies governing frozen data nodes.
>
> Frozen storage decider: Estimates required storage capacity as a percentage of the total data set of partially mounted indices. Available for policies governing frozen data nodes.

Contributor Author replied:

@barkbay I had noted the Frozen tier at the end of this paragraph, but I've updated this again to try to clarify. If you have a suggestion for making this more clear (maybe a table would help?), I'm up for suggestions.



### Enable autoscaling [k8s-enable]
@@ -100,7 +100,7 @@ In the case of storage the following restrictions apply:

#### Scale Up and Scale Out [k8s-autoscaling-algorithm]

- In order to adapt the resources to the workload, the operator first attempts to scale up the resources (CPU, memory, and storage) allocated to each node in the NodeSets. The operator always ensures that the requested resources are within the limits specified in the autoscaling policy. If each individual node has reached the limits specified in the autoscaling policy, but more resources are required to handle the load, then the operator adds nodes to the NodeSets. Nodes are added up to the `max` value specified in the `nodeCount` of the policy.
+ In order to adapt the resources to the workload, the operator first attempts to scale up the resources (storage, with CPU and memory scaled relative to storage) allocated to each node in the NodeSets. The operator always ensures that the requested resources are within the limits specified in the autoscaling policy. If each individual node has reached the limits specified in the autoscaling policy, but more resources are required to handle the load, then the operator adds nodes to the NodeSets. Nodes are added up to the `max` value specified in the `nodeCount` of the policy.

::::{warning}
Scaling up (vertically) is only supported if the actual storage capacity of the persistent volumes matches the capacity claimed. If the physical capacity of a PersistentVolume can be greater than the capacity claimed in the PersistentVolumeClaim, it is advised to set the same value for the `min` and `max` settings of each resource. It is, however, still possible to let the operator scale out the NodeSets automatically, as in the following example:
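The file's actual example is collapsed in this diff view. As a hypothetical sketch of the pattern being described (policy name, roles, and values are illustrative assumptions), pinning storage with `min` equal to `max` while leaving `nodeCount` free to grow would look like:

```yaml
policies:
  - name: data
    roles: ["data"]
    resources:
      # Horizontal scaling only: the operator may add nodes up to `max`.
      nodeCount:
        min: 3
        max: 9
      # min == max pins per-node storage, so the operator never attempts
      # a vertical (scale up) change on the persistent volumes.
      storage:
        min: 512Gi
        max: 512Gi
```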
@@ -345,4 +345,4 @@ spec:
target:
type: Utilization
averageUtilization: 50
-```
+```
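The fragment above is the tail of a HorizontalPodAutoscaler example whose opening lines are collapsed in this diff. A hedged reconstruction (the target kind and all names are assumptions; ECK's stateless applications such as Kibana expose the scale subresource, so a standard HPA can target them):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kibana-hpa
spec:
  # Target the Kibana resource directly via its scale subresource.
  scaleTargetRef:
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    name: kibana-sample
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```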