
Commit 06e83b1

remove anchors
1 parent 5c823cf commit 06e83b1

2 files changed: 5 additions and 8 deletions


deploy-manage/autoscaling.md

Lines changed: 5 additions & 6 deletions
@@ -59,7 +59,7 @@ You can also have a look at our [autoscaling example](./autoscaling/ece-autoscal
 ::::

 ## Overview [ec-autoscaling-intro]
-$$$ece-autoscaling-intro$$$$$$ech-autoscaling-intro$$$When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention.
+When you first create a deployment it can be challenging to determine the amount of storage your data nodes will require. The same is relevant for the amount of memory and CPU that you want to allocate to your machine learning nodes. It can become even more challenging to predict these requirements for weeks or months into the future. In an ideal scenario, these resources should be sized to both ensure efficient performance and resiliency, and to avoid excess costs. Autoscaling can help with this balance by adjusting the resources available to a deployment automatically as loads change over time, reducing the need for monitoring and manual intervention.

 ::::{note}
 Autoscaling is enabled for the Machine Learning tier by default for new deployments.
@@ -88,7 +88,7 @@ For any Elasticsearch component the number of availability zones is not affected

 ## When does autoscaling occur?[ec-autoscaling-factors]

-$$$ece-autoscaling-factors$$$$$$ech-autoscaling-factors$$$Several factors determine when data tiers or machine learning nodes are scaled.
+Several factors determine when data tiers or machine learning nodes are scaled.

 For a data tier, an autoscaling event can be triggered in the following cases:

@@ -103,14 +103,13 @@ On machine learning nodes, scaling is determined by an estimate of the memory an
 On a highly available deployment, autoscaling events are always applied to instances in each availability zone simultaneously, to ensure consistency.

 ## Notifications[ec-autoscaling-notifications]
-$$$ece-autoscaling-notifications$$$$$$ech-autoscaling-notifications$$$
 In the event that a data tier or machine learning node scales up to its maximum possible size, you’ll receive an email, and a notice also appears on the deployment overview page prompting you to adjust your autoscaling settings to ensure optimal performance.

 In {{ece}} deployments, a warning is also issued in the ECE `service-constructor` logs with the field `labels.autoscaling_notification_type` and a value of `data-tier-at-limit` (for a fully scaled data tier) or `ml-tier-at-limit` (for a fully scaled machine learning node). The warning is indexed in the `logging-and-metrics` deployment, so you can use that event to [configure an email notification](../explore-analyze/alerts-cases/watcher.md).

 ## Restrictions and limitations[ec-autoscaling-restrictions]

-$$$ece-autoscaling-restrictions$$$$$$ech-autoscaling-restrictions$$$The following are known limitations and restrictions with autoscaling:
+The following are known limitations and restrictions with autoscaling:

 * Autoscaling will not run if the cluster is unhealthy or if the last Elasticsearch plan failed.

@@ -125,7 +124,7 @@ In {{ece}}, the following additional limitations apply:

 ## Enable or disable autoscaling[ec-autoscaling-enable]

-$$$ece-autoscaling-enable$$$$$$ech-autoscaling-enable$$$To enable or disable autoscaling on a deployment:
+To enable or disable autoscaling on a deployment:

 1. Log in to the ECE [Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md) or [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).

@@ -144,7 +143,7 @@ When autoscaling has been disabled, you need to adjust the size of data tiers an

 ## Update your autoscaling settings[ec-autoscaling-update]

-$$$ece-autoscaling-update$$$$$$ech-autoscaling-update$$$Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows:
+Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows:

 1. **Log in** to the console.

deploy-manage/autoscaling/trained-model-autoscaling.md

Lines changed: 0 additions & 2 deletions
@@ -25,7 +25,6 @@ Trained model autoscaling is available for both serverless and Cloud deployments
 Security and Observability projects are only charged for data ingestion and retention. They are not charged for processing power (VCU usage), which is used for more complex operations, like running advanced search models. For example, in Search projects, models such as ELSER require significant processing power to provide more accurate search results.

 ## Enabling autoscaling through APIs - adaptive allocations [enabling-autoscaling-through-apis-adaptive-allocations]
-$$$nlp-model-adaptive-allocations$$$

 Model allocations are independent units of work for NLP tasks. If you set the numbers of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.)

@@ -54,7 +53,6 @@ You can optimize your model deployment for typical use cases, such as search and
 * If you want to optimize for search, set the number of threads to greater than `1`. Increasing the number of threads will make the search processes more performant.

 ## Enabling autoscaling in {{kib}} - adaptive resources [enabling-autoscaling-in-kibana-adaptive-resources]
-$$$nlp-model-adaptive-resources$$$

 You can enable adaptive resources for your models when starting or updating the model deployment. Adaptive resources make it possible for {{es}} to scale up or down the available resources based on the load on the process. This can help you to manage performance and cost more easily. When adaptive resources are enabled, the number of vCPUs that the model deployment uses is set automatically based on the current load. When the load is high, the number of vCPUs that the process can use is automatically increased. When the load is low, the number of vCPUs that the process can use is automatically decreased.
