Merged
Changes from 3 commits
4 changes: 2 additions & 2 deletions deploy-manage/autoscaling/autoscaling-in-eck.md
@@ -18,7 +18,7 @@ Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial
::::


ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes.
ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. ECK scales Elasticsearch data tiers (excluding frozen tiers) exclusively by scaling storage. CPU and Memory are scaled *relative* to the storage resource min/max settings, and not independently in data tiers (again excluding frozen tiers). ECK can scale memory and CPU on ML tiers if specified in the `ElasticsearchAutoscaler.spec`. On Frozen tiers ECK can scale memory if specified in the `ElasticsearchAutoscaler.cpu`, but will scale CPU in relation to the storage.
Contributor

I wonder if we should not consider the resource types returned for each tier as an implementation detail. It feels like we are duplicating the Elasticsearch documentation, which already explains what type of resources are estimated: https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-deciders.html

Instead we could explain how missing resources are calculated by the operator:

Suggested change
ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. ECK scales Elasticsearch data tiers (excluding frozen tiers) exclusively by scaling storage. CPU and Memory are scaled *relative* to the storage resource min/max settings, and not independently in data tiers (again excluding frozen tiers). ECK can scale memory and CPU on ML tiers if specified in the `ElasticsearchAutoscaler.spec`. On Frozen tiers ECK can scale memory if specified in the `ElasticsearchAutoscaler.cpu`, but will scale CPU in relation to the storage.
ECK can leverage the [autoscaling API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-autoscaling) introduced in Elasticsearch 7.11 to adjust automatically the number of Pods and the allocated resources in a tier. Currently, autoscaling is supported for Elasticsearch [data tiers](/manage-data/lifecycle/data-tiers.md) and machine learning nodes. Required resources for each tier are estimated by [Elasticsearch deciders](https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-deciders.html). Deciders may return required CPU, memory or storage capacity. If a resource type is missing in the decider's output, it is inferred relative to the others. For example, if a decider does not return a memory requirement, then memory is calculated proportionally to the required amount of storage returned by the decider. The same goes for CPU, which is inferred from memory if it is absent from the decider's result.

Contributor

I think we can do what @barkbay suggests. But I also think we should call out in very simple language what is actually supported or not supported today. I know this would be duplicating some of the content from the Elasticsearch docs on deciders but it can be a bit confusing to read the decider docs. Do all of them apply, which ones do not apply?

ECK can scale memory and CPU on ML tiers if specified in the ElasticsearchAutoscaler.spec. On Frozen tiers ECK can scale memory if specified in the ElasticsearchAutoscaler.cpu

I am struggling to parse this wording. What are we trying to say here? Why can ECK scale memory when you specify what? What is ElasticsearchAutoscaler.cpu?

Contributor Author

This was a typo. Intended to be ElasticsearchAutoscaler.spec. ECK can scale memory in frozen tiers according to what's returned by the ES deciders if specified, otherwise it will scale it in relation to storage.

This isn't the most straightforward thing to understand from a customer standpoint, as each tier has its own set of supported options. Would a table showing the available options for each tier be clearer than the wording we're suggesting, @barkbay @pebrc?

Contributor

Sorry for the lag in answering, not sure myself what would be the best option. I tend to think that https://www.elastic.co/guide/en/elasticsearch/reference/current/autoscaling-deciders.html should be improved as I think most of the readers are interested in which resources are estimated for each tier, not really about a list of the available deciders which should be an implementation detail.

We can use a table, maybe something along the lines of:

|  | Storage | Memory | CPU |
| --- | --- | --- | --- |
| Data nodes (except frozen) | Yes | Calculated proportionally to the required amount of storage | Calculated proportionally to the required amount of memory |
| Frozen nodes | Yes | Yes | Calculated proportionally to the required amount of memory |
| Machine learning | No | Yes | Calculated proportionally to the required amount of memory |
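
A minimal sketch of how the behavior in the suggested table could look in an `ElasticsearchAutoscaler` manifest. The resource names, tier roles, and min/max values here are illustrative assumptions, not a definitive schema; check the ECK reference documentation for the exact fields:

```yaml
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  elasticsearchRef:
    name: elasticsearch-sample   # Elasticsearch resource managed by this autoscaler
  policies:
    # Data tiers (except frozen): storage drives the scaling decision;
    # CPU and memory are derived proportionally from the storage min/max.
    - name: data-hot
      roles: ["data_hot", "data_content"]
      resources:
        nodeCount:
          min: 3
          max: 8
        storage:
          min: 64Gi
          max: 512Gi
    # ML nodes: memory requirements come from the deciders and are
    # honored directly within the min/max bounds declared here.
    - name: ml
      roles: ["ml"]
      resources:
        nodeCount:
          min: 1
          max: 9
        memory:
          min: 4Gi
          max: 64Gi
```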

As a side note, I just realized that https://www.elastic.co/docs/deploy-manage/autoscaling/autoscaling-in-ece-and-ech does not mention the frozen tier case, so maybe you were right in the beginning and it's okay not to be that specific 🤷

Contributor Author

I personally think the table format is much clearer to understand as opposed to reading a wall of text. I'll update this and we can review further. ty!

Contributor Author

Yeah, I think this is much clearer:

[screenshot of the rendered table]



### Enable autoscaling [k8s-enable]
@@ -345,4 +345,4 @@ spec:
target:
type: Utilization
averageUtilization: 50
```
```